Part of me feels like letting the TSZ thread go to a full 1,000 comments, but then my sense of responsibility to UD’s bandwidth budget kicks in.
So, let us continue the discussion of the topics from the thread on TSZ issues and Jerad’s concerns here.
To prime the pump, let me clip two posts in the thread:
______________
>>912
KF (911) – ooo, spooky
Are you unable to see that when those individual configs come in clusters that are functionally distinct, it is relevant to think about the relative statistical weights of the clusters?
Hitting a cluster would have a higher probability than hitting a single config, but only because a cluster consists of many configs. [a –> The precise point, now work on the implications of this] A purely blind random search means every config is equally likely, so groups or clusters of configs would have higher probability; how high would depend on how big they are, not their functionality. [b –> Strawman, I never said that the likelihood of finding directly depended on functionality or not, just that the constraints of multiple well matched parts arranged correctly to function mean FSCO/I comes in narrow sectors of the space W. And to see why this is so I gave mechanical and molecular nanotech cases.]
Consider a Cardinal spinning reel. The atoms and parts it is made of can be in certain functional configs, or non-functional ones, including scattered all over the earth. Obviously there are far more non-functional than functional ways. If the individual ways are equiprobable, the non-functional cluster is far more likely on a blind pick than a functional one.
Yup, under the assumption there are more non-functional configs than functional configs. [c –> But this is illustrative of the general pattern as well, and BTW this is why the pricked cell Humpty Dumpty experiments are also relevant.]
This is the reasoning behind the 1,000 LY cubical hay bale and our galactic neighbourhood. The star systems are special zones, but the space between so dominates that a blind 1-straw size sample will all but certainly come up straw. In fact, the likelihood of so getting anything but straw is negligibly different from zero. Where, of course, our solar system is in effect only able to take a one-straw sized sample of the config space for just 500 bits.
Operating under the assumption that there are few functional states, of course. [e –> Not a dismissible assumption, which is tantamount to saying I beg the question. I explained and exemplified why I asserted that FSCO/I naturally comes in narrow zones T in W. If you dispute this, for which I have many cases, you need to show counterexamples; all you have done is say yes, under the assumptions. Not an assumption but a fact: the atoms of the Cardinal were originally scattered all over the planet, but showed no function until intelligence led to their assembly into that famous fishing reel] But we don’t know the number of functional configs. I agree, it’s probably small compared to the whole space. [f –> Grudging concession, but the crucial one. What follows is that, given the exponential nature of the config space, samples on the gamut of solar system or even cosmos that are blind are maximally improbable to hit on the functional clusters.]
The best explanation for seeing the 500 coin BB in a special state, under such circumstances, is that we are not looking at a blind sample.
Given a single random selection out of the whole config space where every config is equally likely then no, you cannot assume it was not a blind sample AFTER getting a config you find surprising or meaningful. [f –> Oh yes you can, if say you saw the first 72 letters of this post in ASCII code, given the utter unlikelihood of finding such a functional cluster by chance as opposed to the very many more non-functional ones.]
If you got 5 or 10 or 250 meaningful configs on successive independent random samples THEN you might have an argument that the sample was biased, the null hypothesis wrong. Or even 250 functional configs out of 400 random samples. [g –> Irrelevant. You know or should know that given the overwhelming imbalance in the statistical weight of the clusters, FSCO/I will to all but absolute certainty be unobservable on blind sampling. AND in the material case, for life forms we start at about 100 – 1,000 k bits, that is 200 – 2,000 times over getting samples from the odd and isolated zone of 500 bits apiece.]
If you argue that we effectively have 1000s of random samples that turned out to be functional life forms then you are a) not accounting for samples that turned out not to be functional (we would have no record of those) and b) arguing against a proposition that is not being made by evolutionary theory. [h –> Strawman.] You would be assuming [i –> Strawman, and turnabout of the reasonable burden of empirical warrant.] there exist islands of function in the life config space and that some of our existing life forms come from different islands. Even if there are different islands how do you know our existing life forms are from different ones? [j –> Check out the (was it?) 6,000 protein fold domains to see islands of function as empirically warranted, and then move on up to the 10 – 100 million bits of fresh FSCO/I to make new body plans dozens of times over, then cf the characteristic pattern of sudden appearances, stasis and gaps in the fossil record. The evidence of islands is there if you are willing to look it in the eye.]
Instead, we know from observation that say coins arranged in the first 72 or so ASCII characters for this comment would be very easily explained on design. And if the Mars rover were to run into a crater with a wall and an inscription or diagram on it, we would instantly and properly infer to design.
A diagram on a wall is not coin tosses or living systems so the analysis is different. [k –> And did you notice how we have consistently shown how to reduce FSCO/I to coded strings, which are equivalent to text on the wall? Where also DNA code in the living cell to assemble proteins and to regulate is text strings, equivalent to writing on the wall.] You have to be very, very sure the diagram has meaning. [l –> You don’t have to know the meaning, once you see a diagram pattern it would be proof positive. Cf the Voynich manuscript, discussed in IOSE.]
People see Jesus’s picture on pieces of toast all the time but that doesn’t mean it was put there or designed. [m –> Well within the FSCO/I limits, cf the IOSE discussion of Man of the Mountain vs Mt Rushmore, you have not done your homework.] Consider the config space of a piece of toast, all the possible ‘looks’ you could get. I bet the space has cardinality bigger than 2^1000. And yet, every so often, a Jesus toast pops up. Pareidolia can be very misleading. [n –> Do you see the problem of S = 0 as default, i.e. lacking functional specificity? Burn marks on toast are not equivalent to a diagram and you know it.]
Can you see why I have argued as just above? Can you agree that the argument is reasonable? Why, or why not?
I only disagree that a single randomly selected config out of a huge config space can imply design. [o –> Strawman, the point was that random selection will not credibly hit on FSCO/I, for reasons given in detail. ] The math just doesn’t support that contention.
If not, then kindly explain to us the logic used in Fisherian hypothesis testing on whether an observation is in the bulk or the far skirt of a distribution premised on the null hyp.
Those kinds of analyses are based on (hopefully) large samples and come with confidence intervals. If you want to set up that kind of hypothesis testing please do so. [p –> You are slipping and sliding into a strawman; the issue is, the point of the analysis is that random samples will predominantly come from the bulk, not the far tails, which are special zones.] BUT, the point of the confidence interval is to indicate that the conclusion can STILL be wrong. [q –> And any inductive conclusion can be wrong, hence best CURRENT explanation; however, there are any number of such that are morally certain, e.g. the sun will rise tomorrow, and error exists. That you will not pick up deeply isolated special zones on a sample comparable to 1 straw to a hay bale as thick as our galaxy is the same.]
And again, medical trials and the kind of situations that use Fisherian analysis are not based on a single sample. Your confidence interval in such a situation would be very nearly zero. [r –> Strawman, you are ducking the point that samples taken at random are overwhelmingly likely to come from the bulk not narrow special far skirt zones. You are obviously going out of your way to avoid acknowledging this well known point.]
Just to pick up what caught my eye, do you not know that a living cell is encapsulated and has smart gates that control what comes in or goes out? That, it is a metabolising device, and that it self replicates on a vNSR, using codes and algorithms executed through molecular nanotech devices?
I just didn’t get the reference in the context of talking about sample spaces and random searches.
Similarly, you have been TAUGHT that all the evidence supports common descent, and that such is only to be explained on NATURAL CAUSES. In fact design is compatible with common descent, in several possible ways, but the evidence does not substantiate blind watchmaker naturalistic common descent.
I see no need for the designer hypothesis. [r –> Personal perception has nothing to do with objective warrant.] I agree there are aspects where design and undesigned could look the same depending on the intent of the designer. But I don’t think you can look at life on earth, with no other examples of life on other planets, and claim life is designed without making more complicated arguments and/or finding more evidence. [s –> remember, a self replicating automaton that uses CODED algorithms to control NC machines assembled using molecular nanotech. What empirically warranted chance and necessity model have you got to explain such, and what serious counter do you have to the billions of test cases and needle in haystack analysis that warrant that FSCO/I is a reliable sign of design?] You can hypothesise that it is of course. But you can’t prove it by making simple probabilistic arguments. [t –> Selective hyperskepticism, you are choosing an explanation without empirical warrant of adequacy over one with such warrant, on clearly ideological grounds.]
Routinely, on billions of cases, FSCO/I is seen as caused by design.
Quite true, regarding inanimate outcomes and when there is a designer present with the requisite skills and equipment. [u –> Irrelevancies, as algorithmic code is algorithmic code; you have no empirically warranted mechanism, and wish to object to that which does have empirical warrant.]
This is backed up by needle in the haystack analysis as in the main comment. Indeed, it would be far more reasonable on the evidence to infer to common design, which is perfectly compatible with what we see and is the empirically reliable cause of FSCO/I. The cell is chock full of FSCO/I.
I disagree. I don’t think you have proven the case mathematically. [v –> This is the proof that you are not examining on the correct grounds of warrant. Inductive matters are not amenable to deductive proof. But you wish to impose an inappropriate standard because the empirically grounded best warranted explanation does not fit your worldview preferences] Now you might be able to by using more complicated Fisherian-type methods. I’d recommend Bayesian myself, that carries a lot more weight. But you haven’t done that yet. [w –> Strawman, the issue is that a blind sample comes from the bulk with high odds; this you cannot deny.]
But this is all just my opinion. I’m not trying to inflict my views on anyone. I am trying to answer your posts with little rancour or putting words into your mouth. I’m not always successful of course (being a dopey human being really) but I am trying to be civil.
I don’t expect us to ever really agree and I’m not trying to influence anyone. But I will answer queries as best I can given my time constraints. If I’ve missed any or misinterpreted any then let me know and I will make another attempt when I can. Today is not looking good though. Oh well.
[ –> I thought it necessary to do a quick note on points, sorry if rough around the edges, gotta get ready to go now. KF]>>
>> 922
KF (916):
Pardon a quick and dirty markup at 912. Gotta go.
Please do not apologise! I know you’re busy and, anyway, I prefer that method of response.
I keep wondering why you keep replying considering how recalcitrant I am!!
KF (912):
Just a couple of general points: I agree that the number of viable/functional/interpretable configs in the kind of config spaces we are talking about is very likely to be very small compared to the whole space. And that most of the time, a single random sample is going to return garbage. Those are givens as far as I am concerned. If I ever gave the impression I was disputing that then I apologise for my poor exposition.
samples on the gamut of solar system or even cosmos that are blind are maximally improbable to hit on the functional clusters.
‘[S]amples on the gamut of the solar system’ doesn’t make sense to me but it’s not a big deal. ‘[M]aximally improbable’ doesn’t make sense to me either. The maximum improbability would be a probability of zero which no thing in a sample space (of the type we’re discussing) would have. Each config in our discussed config spaces would have a very, very, very small probability of being selected in a random search but it would never be zero.
Me:
Given a single random selection out of the whole config space where every config is equally likely then no, you cannot assume it was not a blind sample AFTER getting a config you find surprising or meaningful.
KF:
Oh yes you can, if say you saw the first 72 letters of this post in ASCII code, given the utter unlikelihood of finding such a functional cluster by chance as opposed to the very many more non-functional ones.
I’m sorry but that is just not right. In a purely random search each config is just as likely as any other. Each has a miniscule probability of being picked if the config space is large. This kind of situation is exactly why medical trials are based on large trials with multiple subjects and control groups. And then you generate p-values and confidence intervals. That’s an accepted way to use mathematics to make decisions of alternative over null hypothesis.
I think we both agree that a diagram found on Mars would have to be more compelling than some vague blobs so there’s no need to go over those points really. Obviously we’d both say something that looked like the London Underground Map was designed no matter where it was found.
Strawman, you are ducking the point that samples taken at random are overwhelmingly likely to come from the bulk not narrow special far skirt zones. You are obviously going out of your way to avoid acknowledging this well known point.
I think I’ve already shown that I agree with this. It’s the conclusion after getting a specified and complex pattern where we differ.
But you wish to impose an inappropriate standard because the empirically grounded best warranted explanation does not fit your worldview preferences
That is not true. I am suggesting a method of analysis quite common when trying to prove an alternate over a null hypothesis. To be sure your alternate hypothesis is correct you have to establish that an event was not just a random occurrence by repeating the ‘trial’ many times.
(As the null hypothesis is the ‘default’ hypothesis I am picking the design hypothesis to be the alternate but it’s possible to do the same analysis the other way around. But the testing would be different.)
If you roll a 20-sided fair die each side is equally likely to come up on any given roll. It’s only after multiple rolls that you will empirically see (as opposed to figuring it out analytically) the probability distribution of the outcomes. If the die is fair/random then after 100s of rolls you should see each outcome occurring about 5% of the time. But on any given roll you have no idea what’s going to come up. And any given sequence of outcomes is just as likely as any other. So a sequence of 1, 1, 1 on three rolls is just as likely/unlikely as 1, 2, 3 or 3, 3, 3 or 2, 4, 6 or any sequence of the numbers 1 – 20 you want to pick. IF the die is weighted and not really fair/random you will only be able to determine that after multiple rolls.
I have some weighted dice. They tend to come up 6. But not every time. It usually takes people 4 or 5 rolls before they believe there’s something going on. But they don’t blink an eye when a 6 comes up first.
I agree that if I randomly generated a sequence of 504 0s and 1s, converted it to ASCII text and found that I’d got anything which made sense as an English phrase I’d be extremely surprised. But one trial is not enough to establish that the procedure is anything other than random. You have to do many.
Write a program to do the above and see what you get. Do an experiment!!>>
Remember, there is an offer on the table to Jerad (and/or whoever) to do a 6,000 word essay on the evidence that grounds in your mind the blind watchmaker thesis and makes the design theory proposal unnecessary.
Okay, let us continue . . .
Okay, picking up from here, hope this thread behaves a bit more perkily. KF
Jerad:
What sort of experiment do you have in mind? I can perhaps create a working program if you can describe the requirements.
Yeah!! Thank you!! 🙂 🙂 I shall gleefully participate as long as you put up with me. Gotta do dinner and homework and dog walk first tonight but . . .
🙂 🙂 🙂
thank you
For comments related to posts made at TSZ which are not addressed to the relevant issues raised by gpuccio I suggest we use the Junk for Brains thread.
gpuccio:
HERE
gpuccio, you make some great points in your post @909 in the original thread.
Among them: differential reproduction requires a functional cause. Lizzie’s GA assumes function for all her strings, regardless of where they are in “function space.” I say this because, on what other basis does her algorithm select which genome will be removed from the population and which genome will remain and be copied? She assigns each of them a “fitness” value.
Frankly, I think that value should be between 0 and 1, as shown in Joe Felsenstein’s example, which is in proportion to their chance to leave offspring. I’m not sure I’m stating this clearly.
But if these are functional, then why do they get to just wander all over the “function space” looking for a different function? You are really on to something my friend. That’s not natural selection.
My view is that if it is functional (or even if it isn’t – drift) it should have some percentage chance of having offspring in future generations. Fully 50% of her organisms have no opportunity whatsoever to contribute to future generations. And that’s not a “fitness” of .5, lol.
Mung (2):
KF and I have been hashing out sample spaces and probabilities. I’ll try and summarise, I hope fairly.
Let’s say you had a HUGE sample/configuration space. Like all the possible sequences of 0s and 1s of length 504. Take any one of those 504 bit sequences, break it into 7-bit chunks, interpret each chunk as an ASCII character, and see what kind of character string you get.
KF says: if the character string you get after randomly picking the sequence of 0s and 1s turns out to be the first 72 characters in this post then that implies design was involved. Or the search was biased.
I say: all of the 2^504 sequences are equally likely under a random search and while it is highly unlikely that you’d get any coherent English phrase out of a randomly selected sequence it could happen. And one random pick is not enough to establish a design influence.
I suggested that KF test out what could happen with a program. So . . .
Have a program generate random sequences of 0s and 1s of length 504.
Interpret those sequences as ASCII characters.
Reproduce the results.
Repeat. And store preferably. Print out at least.
I think it would be really interesting to look at a couple hundred iterations at least. Just to see what comes up.
If you’re not having too much fun tormenting the TSZ folks that is.
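For anyone who wants to try it, here is a minimal Python sketch of the experiment as described above (the function names and the 200-trial count are just illustrative; it assumes 7-bit chunks, so each 504-bit string yields 72 characters):

import random

def random_bit_string(length=504):
    # one blind, equiprobable pick from the 2^length possible configs
    return ''.join(random.choice('01') for _ in range(length))

def bits_to_ascii(bits, chunk=7):
    # read the bit string 7 bits at a time and map each chunk to an ASCII character
    return ''.join(chr(int(bits[i:i + chunk], 2)) for i in range(0, len(bits), chunk))

if __name__ == '__main__':
    for trial in range(200):          # 'a couple hundred iterations at least'
        text = bits_to_ascii(random_bit_string())
        print(repr(text))             # repr() keeps control characters visible

Storing the outputs to a file instead of printing them is a one-line change if a permanent record is wanted.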
Okay:
Let me do a markup of Jerad’s 2nd comment:
_____________
>> KF (916):
Pardon a quick and dirty markup at 912. Gotta go.
Please do not apologise! I know you’re busy and, anyway, I prefer that method of response.
I keep wondering why you keep replying considering how recalcitrant I am!!
KF (912):
Just a couple of general points: I agree that the number of viable/functional/interpretable configs in the kind of config spaces we are talking about is very likely to be very small compared to the whole space.
[a –> That is a key first step]
And that most of the time, a single random sample
[b –> whoa there, the “single random sample” of relevance in the first instance is the solar system whirling away for 10^17 s, with 10^57 atoms doing a fresh state every 10^-14 or so s, as fast as chem rxns basically get. In the second, you are talking about seeing say the first 72 ASCII characters of this post being emitted by a 504 coin string + scanner apparatus (which is eminently constructible); just one instance of such, for good reason, would lead to the inference that the best, empirically warranted explanation is that someone did that by design, noting that the SS whiling away at that rate for its lifespan would only sample the equivalent of 1 straw to a cubical hay bale 1,000 light years — as thick as our galaxy — across. For excellent reason, relative statistical weights of clusters, the special zones would be invisible to such. Here, for those who need it, is a sketch of the scanner:
))–|| 504 coin string||–> 504 bit report on pushing the button
is going to return garbage. Those are givens as far as I am concerned. If I ever gave the impression I was disputing that then I apologise for my poor exposition.
samples on the gamut of solar system or even cosmos that are blind are maximally improbable to hit on the functional clusters.
‘[S]amples on the gamut of the solar system’ doesn’t make sense to me but it’s not a big deal.
[c –> Again explained: think of the solar system turned into a coin-tray scanning machine with reporting devices. In real terms, the SS is a site in which the chemistry of OOL and of the origin of body plans was said to happen on blind chance and necessity. Search resources: on the gamut of the observable cosmos this moves up to 10^80 atoms.]
‘[M]aximally improbable’ doesn’t make sense to me either. The maximum improbability would be a probability of zero which no thing in a sample space (of the type we’re discussing) would have.
[d –> negligibly different from zero chance of observation on the gamut of accessible resources as has been repeatedly discussed, cf here at wiki: “. . . the probability of a monkey exactly typing a complete work such as Shakespeare’s Hamlet is so tiny that the chance of it occurring during a period of time even a hundred thousand orders of magnitude longer than the age of the universe is extremely low (but not zero).”.]
Each config in our discussed config spaces would have a very, very, very small probability of being selected in a random search but it would never be zero.
[e –> not strictly zero, but so close as to be practically so. This is the statistical basis for the 2nd law of thermodynamics.]
Me:
Given a single random selection out of the whole config space where every config is equally likely then no, you cannot assume it was not a blind sample AFTER getting a config you find surprising or meaningful.
KF:
Oh yes you can, if say you saw the first 72 letters of this post in ASCII code, given the utter unlikelihood of finding such a functional cluster by chance as opposed to the very many more non-functional ones.
I’m sorry but that is just not right.
[f –> Oh, yes it is, and you did not wait for 200+ posts to decide that text appearing over my name is not the result of lucky noise on the Internet tossing off one of those statistical miracles you want to rely on.]
In a purely random search each config is just as likely as any other. Each has a miniscule probability of being picked if the config space is large.
[h –> Strawman, you know or should know that the issue is predominant cluster vs narrow special zones as has been repeatedly pointed out and linked on. Each specific state may for practical purposes be equiprobable, but the states near 50:50 H/T in no particular order so dominate that it is utterly unreasonable to infer this is a credible explanation of seeing the first 72 ascii codes for this post. Or another string in English.]
This kind of situation is exactly why medical trials are based on large trials with multiple subjects and control groups. And then you generate p-values and confidence intervals. That’s an accepted way to use mathematics to make decisions of alternative over null hypothesis.
[j –> Yes, we know med trials and the like, with 5% tails or maybe 1% etc. We are talking here of the entire atomic resources of our solar system whiling away for its lifespan only being able to sample 10^87 chem rxn time states of 10^57 atoms total, leading to a comparative figure of 1 straw to a hay bale 1,000 LY across. We have excellent reason to conclude that such a sample, if blind, would be absolutely dominated by the predominant cluster: nonsense strings near 50:50 distribution. And BTW, that sample is all our effective cosmos for chemical interactions could do, never mind that 98% of the atoms are locked up in the solar fusion furnace.]
I think we both agree that a diagram found on Mars would have to be more compelling than some vague blobs so there’s no need to go over those points really. Obviously we’d both say something that looked like the London Underground Map was designed no matter where it was found.
[k –> Okay, agreement. Now, ask why. The answer is, FSCO/I, courtesy a nodes-arcs diagram and its bit string equivalent info content; courtesy say AutoCAD.]
Strawman, you are ducking the point that samples taken at random are overwhelmingly likely to come from the bulk not narrow special far skirt zones. You are obviously going out of your way to avoid acknowledging this well known point.
I think I’ve already shown that I agree with this. It’s the conclusion after getting a specified and complex pattern where we differ.
[l –> Recall, the needle in haystack analysis is secondary, it helps us see WHY FSCO/I is so strong a signature of design. The primary argument is that we know, per a massive and reliable base of billions of cases, the source of FSCO/I when we can directly see it being formed. Design. This is inference to best empirically grounded explanation in light of tested sign. And this is in perfect accord with the in-principle approach of, say, geodating on isochrons. We know per current investigations the causal factors involved in a process and their effects, including characteristic signs. So when we see traces from the remote past that parallel the signs we can observe in the present we infer, on the principle that like causes like, and deduce a date. I am pretty sure you accept the geo timeline produced by this and other similar means. Only problem, the signs from radiodating are known to be less reliable than FSCO/I. See the inconsistency problem on degree of warrant demanded?]
But you wish to impose an inappropriate standard because the empirically grounded best warranted explanation does not fit your worldview preferences
That is not true. I am suggesting a method of analysis quite common when trying to prove an alternate over a null hypothesis. To be sure your alternate hypothesis is correct you have to establish that an event was not just a random occurrence by repeating the ‘trial’ many times.
[m –> Of course. First, the original point was that Fisher’s investigation was premised on how reasonable samples cluster on the bulk not the far skirts, so if the special rare zones crop up too much, trouble. Next, I had drawn the parallel on unobservable fluctuations from thermodynamics, the root of the statistical grounding for the 2nd law of thermodynamics. Namely, due to the predominant cluster, some things will not come up under spontaneous circumstances in our observation. That is the context of the 1 straw to a 1,000 LY cubed haystack example. Do all the tests you want, you are in that ball park relative to the space for 500 bits. If you present me or any reasonable person with a case where you claim the equivalent of the first 72 or so ASCII characters for this post popping up by chance and necessity without design, I will conclude on very good grounds that you are pulling a fast one. To date, courtesy Wiki, here are the results of random document generation tests:
In short, a space of 10^50 possibilities is searchable like this, but that is a long shot indeed from one of 10^150 possibilities.]
(As the null hypothesis is the ‘default’ hypothesis I am picking the design hypothesis to be the alternate but it’s possible to do the same analysis the other way around. But the testing would be different.)
If you roll a 20-sided fair die each side is equally likely to come up on any given roll. It’s only after multiple rolls that you will empirically see (as opposed to figuring it out analytically) the probability distribution of the outcomes. If the die is fair/random then after 100s of rolls you should see each outcome occurring about 5% of the time. But on any given roll you have no idea what’s going to come up.
[n –> The die in this case has 10^150 sides, and the overwhelming number are all blank. Tiny — relatively speaking — clusters of sides are written with 72 or so ascii character strings in a language. Toss the die and lo and behold the one with the first 72 characters for this post pops up. I would immediately conclude, loaded die, for good reason.]
And any given sequence of outcomes is just as likely as any other. So a sequence of 1, 1, 1 on three rolls is just as likely/unlikely as 1, 2, 3 or 3, 3, 3 or 2, 4, 6 or any sequence of the numbers 1 – 20 you want to pick. IF the die is weighted and not really fair/random you will only be able to determine that after multiple rolls.
[ s –> Cf above.]
I have some weighted dice. They tend to come up 6. But not every time. It usually takes people 4 or 5 rolls before they believe there’s something going on. But they don’t blink an eye when a 6 comes up first.
[t –> 6-sided dice are not comparable to 10^150 sided dice, with the overwhelming number of faces blank.]
I agree that if I randomly generated a sequence of 504 0s and 1s, converted it to ASCII text and found that I’d got anything which made sense as an English phrase I’d be extremely surprised. But one trial is not enough to establish that the procedure is anything other than random. You have to do many.
[u –> I got some prime commercial real estate on George Street, Plymouth, M/rat, to sell you. Interested?]
Write a program to do the above and see what you get. Do an experiment!!
[v –> Been there, done that; try here, only 100 coins though, so you get bigger fluctuations, and this gives fractional coin values too. +/-10 at 100 is reasonable as an estimate; try for 1,000 with 1/sqrt n as a metric on reduction of fluctuations.]>>
_____________
It should be clear why the differences are consistently emerging.
KF
PS: This story shows how hill climbing (loose sense) algorithms with targets give a superior result to the above. By ID of course. But that part does not get headlined. If you are going to copy off Shakespeare, hire a decent typist!
Jerad:
Do you want me to include all ASCII characters or just those which are letters, and perhaps the space character and period?
Today’s Junk for Brains winner, keiths.
Here you go Jerad, first 100 (lol):
0$ó¢ìø³›õ”®=2cçøCuµðü]ŠøÃ·°°‡Â0Ê~^²ðaaÖŠ§ôü½PX͆úèŠ
}Ú8^$Õ;g Pë?3úrŽÕ-V—ô2•T§çÎjì¹$:M 5›Õ¢“ZŒU:Õyüa”ñÐSQ`Ì
fHêåî’¤2¼ÿM7ˆ*žÄÞWÄö TP0䚣—¦ò!”u7ÃjVPk°ÒçÁ¾&ŒoØî牢¯:
A>•ÒG˜âúŽ ŠÉe”9óäËÙ
&ø^ër¼¼02%ÃR¢y?mª¡a,,ž~ñVõ¬üÂÂ8%~²À€Â°TK‰A•¬»ÙÀI³.ø“eÜ>´i_Pð
€à5Ýüslc¨ê£$.,‡v
Tø±ÄXÞJæöaq¢;òêp%=SC¡Æš¬âóor‚4Å{ä]î1 ƒÔЯ¾¹ìBÙRV5IÀKÏ1‚š(Ò
…öžrÞ‰½y¦…Âsdî¶²
˜N@w(uW
J¸’Ó?jð„;Ó|)ɽ7ë}
“Gn0õ¬D1“1s–|˜^+¦ÀE8‹3¯ûPØv<ÒsÂ
ÛsîTþ ×]¡©ÄŠa…|
ë{§“®›8ÎåÄ
6¦út„OV“)·`¨Ë¨ 8[J(-ÝĆ=íŠÌ›¼°S'7ÒãÍș샺Žëà´äjB¬Hé“•tû=Òµ
êÿk]Pº,ˆ´\ŸUºäÔéE8!˜}ÍLNÔï‰l©ßjñ;w¥âðò„#â”í+,uŽÂÓ1
<Ì™wiôÆ*mJ‘=.Á„n# µ½Óޤq“mYñT*ú×ÔÂ
¦Z(˜YÁÒr
SM]!Ë\;ND$+I9k5Dۺڄ賈bèáS°Ë2¥É‹¤/†›{Œþvâ¨ÔšýýðI0·0¸•ÔU
RçÛa<I§«*ðj\ƒdã¼’\÷Ö\fƺ1Ö@ÃÔ¡/Iñ[
æ!
žÒ6Êø¸•Wö—hšÿ}‡êiâ
Df&×a¾´@“g U®è1·x°WV}fx^ÃÆYAý³2öŠ;à#+ljºÁ¶U¥3ßXÿì%»
²%Bv
È+g|º¹"Œ¢Û[…7‡£4ãn0kÀ5‚f|mÿ,^}==gŸ86§îûœ5§˜µ!
fÌ}`óØáìˆ7<‡’hß/–b
¦éÃ:•øæÉÞÃΧKP$’éšTÃŽ4ËZlÄñ’z$ÇúÀ€Â®n‡¾dÃB
+ë˜K/xXËÉØ›”ÈGMXdâgÞE“'Ü0Úï"óyðêïWBµ¥Lã€`XJ}xXûô´.YU…žP×
\{‰kÙ Á4Ñ4ÀÁ@Q;dÒùéžþ¶Lµˆ¦²óàIZ¨þqž8dýå™S¶^ÆóªE³(å —kȆF
`6ïÑâa9°®ÂU]ãÎAÅ¡\U{¥û Æ·¾µ@]±Ã…¥òŒÓrà oÂÃ.öTà éYA‡·FðmIO¡˜Ô
/Ä»åœ ühæbn‚Ã\¸";œ¶ô
ðCÓm›ëi?\îž/<'ôEÞ!KUz”g$QJû£‡þ`¸¦
ó'Ê€åô¡â€â€ºÃ’$LžŠ<zùåÙáz’‡mâ„¢Og(½;A 0=à oUµ„c\<iùê¾=¤BÚ–OÄÉá
!Ò¾µÆËyä·¶Í
gVq’0ÝÒ°èm%œ˜*Ö@çÖûá3X}, YO/&â„«tÁ¶9ð)4ŠÝVãt£•ö É<K]ÜR
*§Vã»6'°†Ô1
ÚõƒVÙN¼;.à *Õ!¡ŸpXH(nïäŸæÒQóö¹À€BkDÕh›¦(â„¢AëwçŽÂ«C+ÈçL>âèð
(1¯è¿°£ù˜Ø!
“t×ÖÐ’ÇGE×,OºµÁ(y*†ö£lýwgGb°ùD0Ö:h3a/êeá!À?ªÌ:µú0D<ÅKÎ2
Ymīւ±?—Îé³sNI»¯¿‡‘†ÇSöI8!Pá+Kiͱm·á›.”:KètËú†mÀ€â€™R›CÃŒ8P0A$fìZ1 21ITÊ’Å“{6Þ¾žóä=@â€Â±Ã§ÃPŽæŠG‡t3÷ɶê
eË3ä4š¯þ½ƒp2“bFDƒ?|r¶ÄúEÃqt¬ÀUCq&ðŽÃØv6ˆ1$€Æ”NUÚìá‘Iý·Œê
~3–«ê^iç܃:»RSŠ‡Ã€æª[lnC?Ç=?OÀ‘¹ê‡Îšˆ9àwCHGAZ¶Ö}dPi
àxýŒlœê¸j‚ÔîênD]FU
ØYì¨ü d‡ ™(Ý—¼Ç#V›Rñ@ÊZ÷§§žybxp¬P9æÝøA“ÿ—`Èõ:Odɸ΂ù•1<
øøjüì|£ßÀ Ÿ†êÛ;€fYÊ[©Ä@%|Ód9Ÿ‰µeöˆþ/ø7Ëœ,ædEƒ–a4©ï¶Ã
¼kÜ’^ΆcêÂÉCµÃÀ€0‰…ˆìÕNúqD‡*SÆ’zuâÃ
ÃŒt PøHIÃd4Å“SžÈyžyXÃqeùötµÎþ$¬裤Z»Èæ@*Ãi„Vâ/É´ÈtwtV!
Â/âuŒ-áЃ8šféÍ)w¬«†óÃVÑ
pâyVa„ã9èUg•ÝÅ™CÝî4Žl”ke¦µZm8
y»áF•Íò¤Þ—*jjHRÔ
•ƒÄY#©ÀŠýDÜ$p7¼ý×kº[WzF>- }g`ÉŠ†ÆÕjђõàÌ
¿EN‰|áß”P§¸éþ¸Êì 艖,]†•Eý¯mìÞåëþÐe’Näp¦
³{Ù9¬@ÚãI%
»È4¬…ƒ)¼ÿò Îõ%&C]áþ{x ,\¬$±CN”ç(´ÛhläºG5ý½n’”2»wŽØR
˜¹¢¥cˆ¤2ÈÏDÑ-oh$IŠ×5Ÿ¦ßE¸¤›_>.ë«ß6¾¤áèY”opYqùâ@uX ¯x¼àáB£
öñ)©fQo&†cü;¸V†~\ÃŽÂ gËœÂ¿ÃÆ’“h7cØòçbFCVÉK¬G(b1u’jÒñq,äP§
çŒñ’.—€ïý:/
+2PÎìs€Ða÷I«ÐæzhP!žò¡k¦KËÀS/må–V(†ßQ÷|ÔM-®ªÓB<õê*%¿C¹
dÊ`ºrÆËËàßï°¸¥pÀ˨§G‘¢Û6’ï;¡wÆA°€^PFi;«ÉÎ!)C¦%Þné½
Ù¡ñõÅ`bh"ÞæAÂev“9G^RšëáŠê(ߤ0f¸’ñÈ/ò+É6g㦉ì·l
Ã…ÃÖ Šß–t
|³^¶j™Ÿ+ˆ ‹ég˜ã·M*¦¾Â¡„ŠºÊ&kD2ôy5‹ý Hë
bÒìËA‘ýœÙÒw‹Â´ …Ý
™Mkߤu˜²ËÛyPž-ÖÆmSNÇ£u"¾| ŸëÓÕ]3à q†ºßã2bG )··ˆH Ñ>ukÙ/‘Ž£Ã[„
r¦í¹þ-ÓþëÎ%‚9íŸL@ޝUFfQ0>¨àl¶¬ÌZ>˜ÂËËi•™ä„¡‡}abµ~u·ÄÈ+
k扌*Aœˆ ™ÙsD{lÉ’~žþ—T½[i\(°Û›üäm~Ï
tJ”¦×?c‚#£ÑÆØ¨g°”ÉÀÆCž>jx2“Þ6¨ý»S„{¯“ôå+ÍÎÍ/OáãYÏ£
&¤ÔÞöÂ.Pº´PK›Ï}1^”ö7¡ˆÿ.,üG(%TMî,æ«L¨›Ì~šÌLS‚Õ:¾KRSÉcrJY
µÕœ+ÎÓÓ«2P:IÌ^¥Ø-ÍãÞYŒË“1EÉIv¶Œòž _©bø–ø)ÆË?oöPa¦![Õ‘÷
SÌ»VõÉqÆK&òG=×õ5£?{œ‘X±¢¸â’¦j§¸I´ÈÞoþ¦ÓaiJS—q(Ò– í
°xÁZ
Ó$tÅÿ¶OÃw£®(ŒÀÇ9B*à V.÷²‰ÌÀ•Ú‰ú|4ä6ÂAÿºÂ#³
èž×M2æÃÊ*gÔ¦‚
ü®çbÐP‚0yM*O±²¼¼E•>´Í}À®¿0JŒv}éñÀsšÏÀõÞ¯©0BnÒÄd`ís±ý.
Èß“¸«BjX†°k.
ö;.o1ÆÐ™“î–ò#v]ÁûŸ^/U†aÂü‡ðXôœkz\ ,Bò:e£ó
׌ƒ$À”ŠÚ~û¡É>šhÛt:ÊàçÕ
ó˼U‰¶½;‘æ§Å&£Ö†¨|= Š`žÈ)+KKãcu†
ýÆŠ“DMÃÑuÃz—†ÃùD¹?ÂÈÔÓÓ©¡.j’Î`&nmva³LûA6tÅ“bò‚ ¤++Ñ¡zŸèˆÔÃ
ع Ø(㨶Ùeà8ùfYÏîEÐ…qZRtEAšQV–;˪¹t™³êËòMxÐdÖþȈ¦·
‘˧
¢¾ØYôød¨ùV„ù-Å’gâ„¢FŽ×Cc@‘‡`ïØÊCÃØb’
„Òå¹Eȱä¼O£Êýc–(ž¡…™óÉ`¦
ÉH§>—òGu@ãa™§^ƶ:|‚ƒÚ¨ó“ôÉ›ö£JÂßOؤn%LËœx{ÓÄÄ…â`ÈN>ød>‘¡ dg
@òæt;Å’HxÆřMGܱ¯éûÜ$tÂÀ€Å¾Â¹Ã·Ã˜ x9ÿºI›îÈÀ€_EP©•µãÖj¯¿ÒƒK
·$7mý.£)kpf…6ùêòÁ{3Ö¯«h6ƒbTs{ƒ+û-¢Eö ¢IhQÿþ͒ɱ“Å
‡Ô3±ƒ#
•oûP™l7*ze¡P¯üáUoIœÀûß,ÕÞ’ÑqDñ¶n>.…-m zº…
eÎ
Õ¬€3#Eßâa7`€ç8ÑÇ`
N¹’Bg‰kŠÐœlæðÉD>¦Ä„á—C¶*Å:ã›î³
¾ã*fº¸\ž
~Fø¸™SuÑí
X=&,†
Êò¢Ã¡Fٻ^U¯dú(æ¥:)a¢`yÔð©ÉqHÃ&›óëÅ` Ãl®«<®Ó“qu…#j<–0j4_òÃ
ÔøêA£ (½ï8ggü´ØÊ5œ×-\ZLêô¡rLçŸX0Ð൲£ú)KàÌz>1=^Vå}EÐ[u’4B—,6üqöàÜ%Ù‚sZ’
”)N…ÞÔÄ#µ
“Nµ¶éÄÄI+€¿WúÚOÿuø™€g%+Z_Úgƒ _+¶î½’çOž&³áØ
ÛòdØ¡ %ëGn€»„ìÔ¡Ðߎq®õÆc#ÊÞ!:ã‰w}äö ÔQè¥[‡°Ó5Ó-.Ù!’¿×
»¤<” ÅŠ€'ܹÜäÞ'6ÕÀ8IîÑú
{˜mÜOóæ†œ
2ÁÐÓa´¥*¤Ép:*Ç8W»œŸŒeÔv—ðéZE²Ê܇±u×H`â.WXú[š.@
/¾“ùöG@ÍŒ•nø™§Ï¶*Ò´îð¨2ìÿ¤ç‡
dD‡Î`_væ~Aþl’jÏD²m~,dNù/Š™ÌaÅ)Ù¼¨Wh-I\
±ÖRøõ‘¢ùŠK·”Y’T¬ÂÝAI”Ó€÷a9ÈÏ;ø¾ÖU÷ŽÊüQ
Allan Miller@TSZ:
There is no house jackpot.
There is a predefined target (in Lizzie’s own words):
MaxProducts=0; % keeps track of whether the target has been reached
while MaxProducts < 1.00e+58
To assert that it “plays no part” is disingenuous at best, since the whole point of the exercise is to generate a string that meets or exceeds the target by selecting strings that come ever closer, increasing the number of such strings as a percentage of the overall population, and mutating them in the hopes of generating one that meets or exceeds the target.
gpuccio,
At least Allan seems to be addressing your points and Lizzie’s program. I will give him that. I also appreciate his manner and apparent willingness to concede certain points.
Allan Miller:
I doubt it. See my post above.
So the whole point of the ‘fitness function’ isn’t to identify strings closest to the target?
Then what does this code mean?
Products=sortrows(Products,2);
MaxProducts=max(Products(:,2));
WinningCritters=Products(Ncritters/2+1:end,1);
I’m new to MatLab, but here are my comments on what I think the code does:
It sorts all the organisms by how close they are to the target value, stores the highest value so she can later test to see if the target has been reached, and then takes the 50 who have a product closest to the target so that they can be replicated.
In any world I inhabit that is “evaluating with respect to the target.”
Because their products are closer to the target.
Guided by the intelligent knowledge with respect to which products are closest to the target.
That’s only part of the story. It does compare them, but the result is to put them in a sorted order according to which the 50 with products closest to the target can be identified and retained.
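To make the point concrete, here is a rough Python sketch (not Lizzie’s actual MatLab; the names and the genome representation are made up for illustration) of what that sort-and-keep-the-top-half step amounts to, i.e. truncation selection scored by the product each genome yields:

TARGET = 1.00e+58   # the predefined target from the quoted code

def product(genome):
    # a genome here is just a list of numbers; its product is what gets compared to the target
    result = 1
    for gene in genome:
        result *= gene
    return result

def target_reached(population):
    # the stopping test, mirroring "while MaxProducts < 1.00e+58" in the quoted code
    return max(product(g) for g in population) >= TARGET

def select_and_copy(population):
    # rank every genome by its product, keep the 50% with the largest products,
    # and replace the discarded half with copies of the winners
    ranked = sorted(population, key=product)
    winners = ranked[len(ranked) // 2:]
    return winners + [list(g) for g in winners]

As long as every product stays below the target, ranking by product and ranking by closeness to the target pick out exactly the same half.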
In the final analysis, just another glorified Weasel program. 🙂
These people over at TSZ, most of them anyways, are so funny.
Do they not understand that the reason you start with a randomly generated genome in a GA is to spread the “organisms” far and wide over “the landscape”?
What sense then does it make to use a single fitness function that applies to all of them alike? Why are they all searching for the same thing?
oops.
Junk for Brains
Jerad:
That appears to be inconsistent with your statement in the original thread.
And that was with a sample size of 1.
KF (8):
Much easier to scan the thread now. Thanks.
I know what you mean by gamut but it’s not really standard statistical nomenclature. Nor is maximally improbable. [–>’twas never intended to be, but to communicate to the reader] But I know what you mean in both cases so, moving on . . .
I see that you’d be willing to make a design inference based on one highly improbable result but I could not. [–> A result that is credibly otherwise empirically unobservable on the gamut of our solar system, per all but zero probability, on inference to best explanation. Maybe we should go deer hunting on the flanks of the Blue Mountains sometime, now that J’ca has a deer population courtesy escapees after Hurricane Gilbert. ONE clear deer track is a sign pointing to deer never mind that someone could fake it or some possible chain of circumstances could possibly make it by happenstance. We are dealing with empirically grounded moral certainty, not abstract proofs beyond all dispute, noting that post Godel, not even math meets the latter criterion] I wouldn’t find that a good statistical reason to reject non-design. You can invoke straw bales the size of the galaxy or whatever but it’s still not a valid decision to reject the null hypothesis based on one sample. [–> translation, I will pick the all but impossible over the empirically well warranted if it is in a context where my metaphysical preferences are at stake. You will note, that a single inscription on Mars would suffice to demonstrate to our satisfaction that someone was there with a civilisation, a point you agreed to in the earlier thread. You have a right to your metaphysical preferences, but you have no right to impose them on others as that which has cornered the market on that which can be termed science. Not that you are doing that, but others with power unfortunately are.]
That’s the way the math works. All outcomes in a random search are equally likely/unlikely. [–> I note how you consistently back away from the issue of predominant cluster vs isolated and narrow zone. Maybe, this is in part a reflection of diverse backgrounds, being trained in stat mech gives me a healthy respect for such clusters. Perhaps the strongest laws in physics, those of thermodynamics, rest on this pattern of reasoning.]
I would roll the die again. And again. Then I’d start working on the mathematical argument. [–> There was a little mistake in that, forgive me: the die is not possible as a die of 10^150 sides cannot be constructed out of a cosmos of 10^80 ATOMS. But then, that underscores the issue of gamut of available resources and what can be done. What is possible is the coin tray and scanner or the equivalent, which puts us in the position of a 72 bit reporting engine that suddenly reports a complete 72 characters in English. The random text generation exercises show why such an infinite monkeys result is not possible, but the intelligently designed cumulative monkey program has now produced almost all of Shakespeare. See the difference between chance and necessity and intelligent guidance?]
I watched someone roll a 4-sided die once (a tetrahedron with flattened corners) and it landed on one of the ‘points’ and stayed there. It happens. [–> irrelevant.]
Hey, I was born in Plymouth . . . Wisconsin.
Maybe, send me a picture and the Lat/Long and I’ll check it out. I prefer to do some research before I make a decision.
[–> The point is the street is under 20 ft of volcanic ash.]
I think we’ve come to an impasse here to be honest. I think we’ve both expressed our views multiple times and pretty well. If you want to let it rest here that’s fine with me. [–> I hear your view, but actually, we are at a pivotal point. KF]
Mung (10):
Whatever, it’s gonna be mostly garbage either way. You might go for years before you got a phrase or sentence.
I agree with KF that for even a 504 bit sequence of 0s and 1s the chances of converting a random generated sequence into ASCII and getting anything legible in English is very small. But, I wouldn’t infer design if it happened, even on the first go.
computerist (12):
Yup, mostly garbage, just like you’d expect. Are you sure those are really random though? I was thinking there’d be more variety. But randomness can be clumpy.
Mung (17):
Yes, I think the situations are very different.
The scenario you presented included a pre-prepared envelope with a text string written down. While it is still possible that you could match a target string with a random string on the first go it’s much, much, much more likely you’ve been set up by someone.
And I guess that gets to the heart of the matter really. I’m much more likely to assume fraud than design. I’d want to be very, very sure before I inferred cosmic design but I do admit it’s a possibility. I know there are people/agents, like magicians, who enjoy fooling people and performing seemingly impossible acts on demand. I haven’t seen any evidence I find credible that there was a designer around much before humans learned to spray paint out of their mouths onto cave walls.
I will gladly infer design in the manner of a devious human agent!! I find that extremely plausible. But I’d still want to prove the case by repeating the testing with extremely strict controls.
EL:
No:
1- You are starting with that which requires an explanation in the first place
2- You don’t appear to understand what Dembski is saying
3- You are using artificial selection
Mung:
Go for the full 128 character set, do please.
Also, if you can set the coin length, that would help, a manual push the button mode (and icon) would be great if you want to go that far.
KF
The TSZ ilk are pitiful. After all this handwaving about natural selection, their “theory” boils down to this:
Somethings happened in the past and things keep happening and here we are.
off-topic but worth a mention-
I posted about the record ice extent for Antarctica and as some sort of “refutation” one of the TSZ ilk posts about the loss of sea ice in the ARCTIC.
Mung, do you have something for that?
KF (18):
I am happy to infer deer where there’s deer known to be about. Deer poo would be even better, harder to fake.
‘We are dealing with empirically grounded moral certainty . . . ‘ Not sure what morals have to do with a mathematical discussion. I’ll look it up to be sure but I think that’s not quite what Godel had to say. I’ll check though.
I’ll pick the mathematically sound choice, nothing to do with metaphysics. An inscription on Mars does not mean there was a civilisation there, there are other possibilities: time travel, crashed aliens, secret mission by another earth country. All about as likely as a lost civilisation on Mars from what we know now. If the Rovers turn up a map or an inscription let me know.
I’m not backing away from that. I agree that subsets of the sample/config space with greater numbers are more likely to have a config from them picked in a random sample. Most searches in the real world are not random samples.
I’m only discussing random selections out of huge samples spaces because of the way you use such examples in your argument for the improbability of such a search hitting a functional config. I think it would be extremely improbable, but not impossible. And if it did happen (and the modern evolutionary theory says it only had to happen once and then RM + RS kicks in) it’s not an indication that the sampling technique was biased.
I probably should at least peruse the thinking about OoL and the first replicator so I get an idea of how complicated it might have had to be. I hate chemistry. 🙁
But such a result IS possible. Just highly, highly unlikely. If you randomly generate 504 0s and 1s and interpret them as 7-bit ASCII characters you could get a sensible English text string. There are millions and millions of those. The first 72 characters of every page from every book ever written for example. Even with millions I agree that getting one of them is highly unlikely. But not impossible.
My interest is waning. But then, look at Pompeii.
Jerad:
I think you may need to look back above, where I cited the result of extensive tests done by random document generation exercises, and reported by Wiki:
This shows how spaces of order 10^50 are indeed searchable [Borel was wrong to that extent]; the only problem is such a space is 1 in 10^100 of the scope for 500 bits.
So, why is it that I think that a config space, W, of 10^150 possibilities with isolated zones of interest T1, T2, . . . Tn where in cumulative total the T’s are much, much less than W, is in effect so unsearchable for E’s from the T’s that for practical purposes T’s are unobservable on the gamut of our solar system on blind chance and mechanical necessity (i.e. no intelligent direction as was so in the case where Shakespeare has been reproduced nine characters at a time)?
Simple, I respect the significance of clustering of states, and the fact of an overwhelmingly predominant cluster.
That is, the search for a needle in a haystack where there is sufficient stack [even on the gamut of our whole solar system], is predictably fruitless.
Yes, for our purposes we can take any given state as equiprobable, and so in effect any given state is utterly unlikely to ever be found. That is, if we were to devote the resources of our solar system to it for its existence to date, with our coin tray exercise or the equivalent, WE WOULD PREDICTABLY NEVER WIN THE LOTTERY TO FIND A GIVEN STATE.
But, if states come in clusters, the probability of the relevant event shifts. For instance, define E as any result at all from a toss of the coin tray. We will hit E on the first throw and every throw thereafter.
Now, it is not hard to show that W is absolutely dominated by strings of near 50:50 distribution in no particular order. So it would be easy to hit that state, too. Indeed, it would be hard NOT to hit it. That is our straw in the 1,000 LY on the side hay stack.
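A quick way to see how lopsided that clustering is for the 500-coin case is to add up the binomial weights near 50:50; here is a small Python check using exact integer arithmetic (the 200 to 300 heads band is just an illustrative cut):

from math import comb

n = 500
total = 2 ** n
# statistical weight of the near-50:50, no-particular-order cluster:
# every outcome with between 200 and 300 heads
near_even = sum(comb(n, k) for k in range(200, 301))
print(near_even / total)   # overwhelmingly close to 1
print(1 / total)           # any one specific 500-bit config: about 3e-151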
The problem is to hit the star systems in the haystack, i.e. anything but straw. In short, I am putting up a needle in the haystack exercise on steroids. It is notoriously and proverbially hard to find a needle in a haystack.
Why?
Because this is the complement to the easy problem, finding hay in the haystack at random. If that is easy, as the hay is utterly dominant [in the 1,000 LY stack, there will on average be a light year of nothing but straw in any direction from any typical point . . . ], because it is the cluster of the bulk of possibilities, then it will be correspondingly very hard indeed, by blind chance and/or necessity, to find NOT hay but needle or whatever.
Years ago, I used to set up a thought exercise to make the point clear.
Set up a bristol-board sized chart of a bell distribution, say Gaussian. Mark, say, 1/2-SD wide stripes with the peak in the middle. Mount it on some backing and get yourself some darts and a step ladder.
Go high enough that you have a more or less even distribution.
Drop darts: one, two, thirty or so, a hundred. One dart can land anywhere, but the smart money is that if it hits, it hits in the bulk, not the far skirt — probability is proportional to the area of the stripe, though the odds of hitting any equal area will be the same.
Clustering and relative statistical weight at work.
After about 30 hits, we predictably will have a reasonable picture of the distribution, through the implications of that same clustering.
100 or so hits will likely cut into the 5% tails, maybe the 1% ones, but if you ran the tails out to say +/- 5 SDs, it will be quite hard to hit the far tails by chance with a reasonable number of drops. Of course, as more and more hits happen, the sampling will begin to pick up more and more deeply isolated zones.
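For those who would rather skip the step ladder, here is a small Python simulation of the dart-drop exercise (a sketch only: darts land uniformly over the chart’s rectangle, and only those falling under the bell curve count as hits):

import math, random

def drop_darts(n_hits, x_max=5.0):
    # darts land uniformly over the rectangle [-x_max, x_max] x [0, peak];
    # a dart counts as a hit only if it lands under the standard normal curve
    peak = 1.0 / math.sqrt(2 * math.pi)
    hits = []
    while len(hits) < n_hits:
        x = random.uniform(-x_max, x_max)
        y = random.uniform(0.0, peak)
        if y <= math.exp(-x * x / 2) / math.sqrt(2 * math.pi):
            hits.append(x)
    return hits

hits = drop_darts(100)
bulk = sum(1 for x in hits if abs(x) <= 2.0)     # within +/- 2 SD of the peak
far_tail = sum(1 for x in hits if abs(x) > 3.0)  # beyond +/- 3 SD
print(bulk, 'of 100 hits in the +/-2 SD bulk;', far_tail, 'beyond +/-3 SD')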
All of this is simple and obvious, and it is the grounds of Fisherian inference testing. The far stronger chance of hitting the bulk is such that hitting the tail becomes unlikely on the sort of range of opportunities that will be reasonable and typical. So if far tails are showing up where they should not, that says maybe the assumptions that are in the null are violated.
I know, I know, there will be endless debate points on Bayes vs Fisher, etc, and there has been a debate on how dare you suggest something is wrong in the infamous Caputo case, where 40 of 41 elections supposedly picked on a coin put D first on the ballot, which is known to have a favourable biasing effect. Yes, yes, I know, I know, there will be all sorts of theoretical reasons trotted out as to why Bayesian inference and perhaps likelihood reasoning is superior etc etc. The fact remains that the basic point of the Fisherian approach is reasonable and it worked well enough that for decades, it dominated.
So the side issues are side tracks away from the pivotal point: samples that are blind tend to pick up the bulk of a distribution early, and that which is unusual as a rule comes up later on, if at all. That is just what the random document generation exercises above show.
So, we have a very good reason, on the relative rarity of fluctuations, to expect the bulk to dominate, until we go so far along that it is reasonable for isolated special zones to begin to be picked up in the overall run of samples.
Now, what happens when not even a solar system gives you enough resources to get to the point where you can reasonably expect to go beyond the bulk?
To see, let’s look at the needles in a haystack discussion in IOSE, starting a bit above the just linked:
So now, the root error is redefining the problem from the real issue, observationally distinguishable clusters of states, to one of picking individual states. Yes, individual states are equiprobable, but clusters are NOT.
And so, it is highly reasonable to observe the pattern of likely outcomes and to think on the clusters.
A Royal Flush, individually, is as probable as any other hand, and the four are collectively four times as probable, but there are so many other possible hands that the odds of getting one are about 0.00015%. That means the odds of not getting a Royal Flush are very high. But 4 in 2.6 mn is a winnable lottery on the sampling resources here on earth, so we would not find getting one even on the first try excessively improbable. That is why getting several in a row would be needed to trigger our suspicions.
But, why be suspicious?
After all, getting ANY particular set of hands in a run would be exactly as improbable in the new sample space, clusters of, say, five poker hands.
And the fallacy surfaces: we are comparing clusters, not individual hands. And Royal Flushes are a definite zone T that is very special indeed in W, the set of possible Poker hands, so the best explanation for several in a row — that is, the odds are sinking; five in a row is of order [4/2.6*10^6]^5 — is that someone has artfully picked a sampling frame that makes such hands all but certain.
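The poker figures are easy to verify (a quick Python check of the numbers used above):

from math import comb

p_royal = 4 / comb(52, 5)   # 4 royal flushes out of all 2,598,960 five-card hands
print(p_royal)              # about 1.5e-6, i.e. roughly 0.00015%
print(p_royal ** 5)         # five in a row, [4/2.6*10^6]^5, is about 8.6e-30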
So, when we look at the solar system, we see the same problem: a cluster of about 72 ASCII characters in English, such as the opening letters etc from this post, comes from a vastly improbable special zone, one that is independently describable, and so specific and functional. NOT landing in such a zone but in the bulk of the distribution is also describable but utterly non-specific and non-functional.
As a working out of the binomial distribution will soon enough show, that cluster would utterly dominate the set of possibilities. So, we are dealing with in effect an unwinnable lottery — lotteries have to be designed to be winnable, BTW — and so if we have won, that is suspicious so long as design is a POSSIBLE explanation. For, lucky noise on that order is not normally credible.
This brings us to the heart of the problem.
The real game is that the evolutionary materialists who have dominated institutional science, education and pop sci promotions in recent decades have injected a question-begging ideological a priori, even into the definition of science and its methods. It is because they have rigged the game to make it seem that a designer is not possible — or at least is not a “properly” scientific possibility, that we are seeing the debate we have been having.
So, the real issue is, is a designer possible in the relevant context, OOL etc?
The obvious, simple answer is that, with the work of Venter et al in hand, that is so. For, it is highly credible that in several generations of work, we will be able to have a molecular nanotech lab capable of building from scratch a nanotech, molecular-machine based, encapsulated, gated, metabolic and self-replicating entity that uses a vNSR mechanism to effect this last. No physical impossibility blocks the way; the basic techniques have been demonstrated by Venter et al; it is a cost and further development challenge, not a roadblock issue.
So, why should it be deemed effectively impossible that say an advanced molecular nanotech lab seeded earth with original life? Surely that would be sufficient to explain the OOL on earth.
And if that is possible, we should be willing to accept that evidence that is best explained on design should be allowed to point to design as best current explanation.
Regardless of whose feathers are getting quite ruffled now.
But, but but — don’t you really mean that God is the designer, and aren’t you injecting the supernatural into science’s hallowed secular halls?
Nope, Barbara Forrest has been playing you like a piano.
Right from the beginning of modern design theory 25+ years ago, the technical work has been careful to distinguish between inference to design as causal process on empirically warranted sign, a scientific enterprise, and speculations as to who the designer[s] of life on earth may have been.
But, it is convenient for ideological reasons for rhetors like Ms Forrest to pretend otherwise and erect grand conspiracy narratives dripping with hints of the Inquisition being re-established.
[Actually, it still exists, over in Rome as the Congregation on the Faith or something like that. No more thumbscrews, though; doesn’t fit the ambiance of air conditioned seminar rooms that are the vogue in Rome these days, what with former college philosophy profs being made pope and all. And, nope, there is utterly no danger of such ever being imposed again. Though, if you entertain the taqqiya laced blandishments of IslamIST Muslim Brotherhood spokesmen — who have a declared intent of establishing their religion as supreme in a global empire over the next 100 years or so — you might find yourself facing a swordsman about to chop off your head for being a rebellious kuffir who will not submit. No wonder more moderate Muslims in Algeria were the first to warn that such are Islamo-Fascists.]
We need not waste more time on Ms Forrest’s conspiracy theories.
The issue is clusters and it is reasonable to look at the divergent probability of clusters and to compare the alternative hypotheses to explain outcomes. Arriving at FSCO/I on blind chance and mechanical necessity is just short of being outright utterly impossible. Arriving at it on design is not.
So, if we see a case of FSCO/I; on inference to best current empirically grounded explanation, design is the best explanation.
It is as simple and as reasonable as that.
KF
Joe, (25)
The Washington Post article does point out that the Antarctic ice expanse is 5 – 10% above the 1979 – 2000 average, whereas the Arctic ice expanse is almost 50% below its average over the same period. They also look at the trends in both regions.
The jet stream may be shifting, and England, where I live, may get, on average, colder because of that. But that doesn’t mean the earth isn’t warming overall.
KF (27):
I agree with just about everything you said (and I really enjoyed the parenthetical snide aside about the Catholic Church) and I think some of it was very nicely put too.
I agree with you that the cluster of nonsense strings is IMMENSE compared to the cluster/subset containing ‘sense’ strings. I agree that you could randomly search for years and only get garbage. Years and years. I’d be a fool to deny that. It’s completely obvious.
But, you still could get the first 72 characters of the KJV or any other sensible string on any given try. Two in a row would be almost inconceivably improbable. Three in a row . . . Or even 3 out of 10 . . . might not ever happen. But it could. Purely randomly.
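Just to put rough numbers on “almost inconceivably improbable”, here is a back-of-envelope sketch (my own, assuming a 128-symbol ASCII alphabet and a 72-character target):

import math

alphabet = 128   # assume the full 7-bit ASCII alphabet
length = 72

log10_p = -length * math.log10(alphabet)  # log10 of the chance of one specific 72-character string
print("one specific 72-char string on one try: about 10^%.0f" % log10_p)
print("the same feat twice in a row: about 10^%.0f" % (2 * log10_p))

That works out to roughly 10^-152 for one hit, and roughly 10^-304 for two in a row.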
I must look up quantum tunnelling . . . there’s some stupefying improbability associated with it. But it does happen I gather.
Anyway, I haven’t got anything new to add or say. I’ve never even finished reading Dr Forrest’s book by the way although I have heard her interviewed a couple of times. We can argue about whether the Discovery Institute has an agenda if you wish. I listen to ID: the Future whenever it comes out and I have corresponded with Casey Luskin although it was a long time ago. I do try and keep up with some of what the ID community is saying and publishing. But not because I think they’re trying to overthrow science. I just want to know how they see the situation.
There’s a good reason why a Royal Flush is the highest hand in poker, and that’s why I never play 5 card stud. Too boring. Give me some cumulative selection any day! 🙂
To Zachriel (at TSZ):
You say:
In an evolutionary algorithm, Hamlet would represent all the multidimensional mountains and valleys that make up the fitness landscape. And yes, a map contains a lot of specified information.
It’s an old issue that we have certainly discussed many times. But, just to have the pleasure of debating at least a little with you again, could you please explain how your concept of a multidimensional fitness landscape can help find the right functional sequence for a new protein domain with a new protein structure and a new biochemical function, for instance a new enzymatic activity, from unrelated DNA strings? What kind of multidimensionality will help you to explain that? Just to understand…
I know: as your friends say, “evolution has no target”. I agree. That’s why it finds no targets, indeed!
And yet we have the little problem of those thousands of functional targets that are the existing functional proteins, so different from non-targeted amino acid strings that are good for nothing.
To All:
Dr Dembski has a new post up at Evolution News and Views:
http://www.evolutionnews.org/2.....64871.html
And a lot more.
Those are very much random.
If you have python, here is a very simple script you can play around with:
import random

def bin_str_to_ascii(bin_str):
    # Convert a string of '0'/'1' characters to text, 8 bits per character.
    c_ascii = ''
    l_ascii = ''
    count = 0
    for c in bin_str:
        count = count + 1
        c_ascii += c
        if count % 8 == 0:
            l_ascii += chr(int(c_ascii, 2))
            c_ascii = ''
            count = 0
    return l_ascii

def randomize(s):
    # Return the characters of s in a random order.
    l = list(s)
    random.shuffle(l)
    return ''.join(l)

# 504 random bits = 63 eight-bit characters (Python 2: xrange and the print statement)
start = ''.join(random.choice(('0', '1')) for _ in xrange(504))

for x in xrange(1, 100):
    print bin_str_to_ascii(randomize(start)) + '\n'
Here are a few hundred more (this time with the delimiter ========= to indicate the start of each sequence):
=========âý\Uº’ã2Ã]&¿éÿî>Hùá$Ioá«ñ©½>ç‹Pþífn`MË
›+¯®W”G[“Œ0Š+;m
=========9€qcrg¹ðØÿJnâÄPUm^žŸªn÷òÑqîuòi,ß‚6)s.õ·ÜéöQ³…(Rìºá¾R|¯yÖ6gF
=========þµ&ÊÈ6
Å cÙøŽ6ûið5hq¨ûý®ÃoE^98öá¼$ŸÜô=ÕµHҴ؇þµjjn–·^ÊÃ
=========]]øw5³•†iHý>®âÎúNÉåâ—¾Åήàóë®rÃ’>Qì‹Åãk´ñ4mª
´˜Ãâ–rûùKÙ
=========²0GÉê¾ZZæçõwƒ°kôô.ò¸ñ´(¯2`£–øTd¿õçw¥Ÿí³.ÊÓîƒv£]ÏÉ€Ê1yáù>¡
=========?lýî¾?ôjÛÔVf8h²‡ÑèR{6n›¬Þ¯èo¼YLt4OôŽ^¨yæ`õ5yð>HŒLÅ#OGŠ
=========²Ý«&ÌZù¶
=========î:Ÿo?™g’xí%ZŸù$ñ:жqKŸ½Ñ|+^Ïå‡ß%ýEÎ!Æ^Ù«!ŠUô
=========BHíÛ9Jëß^îål˶vôPCð%O™ŸÁ¬
=========ñ8Jw=w‘±>»¡›*!øþš©»’éó®ÞAÄ[9IDÄéÿéÍ;c~·ãNSªs£3k#6 Áþ£þPû
=========ºâB=¬
=========~_‚Ñ÷sj‹â¿âjÃdJÖit•Z$ç}0À¿õ\Ù¯rfWÀrìpïRv—Q&TíZÚ¹þõÔÃWcâ„¢$yâ€W۔L——Ä‹»KŸ…ÃñÆÌBVã-îp¿ÆÃÓ/õ
=========&Øš
û
=========‹®yóÙ7&}ç~Õ›N#9š¾aô×îÖ«8dº<Õ¸–Ò¾d=M¥°ìßŲÍ`Š,,u]Ì-©{°‘ìŸ~s
=========£}<huA]É?Aoê´°PJy“‹ÊžžÙMÇ„ÏïÿãIÁ7ŽÞZéŸËZ¦%T€ðsìÀL2Aå§îsÍÅy½õ>}+¶î•
=========…ÏN·£’’Æý3×ñˋ‹ïœ¯!ŠÔ漤øsÙ·‚ž+8:q°ˆkUÆŒ@ö›ÿÛ•Ÿ÷v-Ìüõ´“
+¥èë{
=========7ÛƒáfQ~‰ó¨ÃMžÑ㤙bóýþiä
\ÄÃïrK^¨±÷<úv°ÃlÑÚXm+½…ªÿæ¤j’ãšP\4_¹
=========1ŒÈ#Pð–×Qs`Gü£…ˆ»…Z³~6‹ø3š¶ºsËÜgñÓÓ÷½iWÆ’4Vb÷4ö/â€i6ñc¿pW
=========›Yû³wmµø¦ÂÆz§†ÙÿïÀòæ¡Užèl½ö2RÀ€O9ÜoïÙò#iÃÌ´YÈa
Ë¥Ô±÷$sçž`WC
=========ù—-
Zºp 4<é…Ù’b÷øß)·×¯ïŒçx}&‰Tâ…Ó«ýÒíiccü€Ÿ70½–t%³™ƒ<¡ð
=========Cèsr;·^5£wmó^áwX*;–p7늓°Â|n=W±©¿œÿc)
=========Ø¿¹>ãyÙ
ËPïk\ü=0„GPŸg¡ÜÞã_B{»§‰èLj1ûßý¯¨ÔMn@Ö³zoKÎdŸ1
Bb_
=========°®¶æÚp§¼‹Cž³ÖrøÃsv3Ãð“—˜”W`ýÿI§»’—¦øöãWŠòµO²ãÊÖŠèš±´ß¨&
=========”–ì¦^ýtL¡´~º.1Mí4£µ¸¦˜ãA`ÂéJP÷âwÊ´G¿¿Èb³xBþeuûäª,n~‹Óf
=========$
=========¯ÂøØ’%z%Е·‚Ò„¹_‰èÃïñd¯7ôk8ì…ËWG8ïùj/Ž?
}cGs›-;¢ßóT;W³ë
=========zc× 6?‚˜ÃúåëfÞ•deö¯òÉ9DRŸ‡âÃQà ¦ýÛ-ù÷@ÃAÀ³O¦÷’õW¦y§@ôW§
=========aø[jšEžG*Ôõƒë¾g¹ÇÃÒeºpåß×·yÔº‡-¸Ã_3Ãn*ŒÝi©ôY„ÞîqL/ó Ƚ–Zm
=========š¦Œ.fyüVßâÕv=7ÃSLLc™ ÿEç¯^mt[ÙG¯Tn{É¢tèÈÕµl⿤Mk³ùmÿ
=========6\ŽýC1Dý
=========úfØ–<vÅ“@½ªºôñܼۢTœÞ_ÖÃSÚtpš¿-× ’tÂ}y3Ãl’gaþ—ŽC5Ÿe¿B{£*³øŽ˜
=========OlÎÑ
°À€$›¤·“=ÃÂò¹žsOUýǧßç ƒ’Â[;Ã…MnÓýÓWq,É·ÃNH©ÃῬWßY$.|:[
=========qq¾ñâð@ùø³kxŒšÖÃÒÿ{¾C„‹=ÿðû²3¤FŠO´±
øòþûµ!G©cÆ’vï;VtÃ…69Pâ€
=========ª¤Ãôшû›VOA•Åõxõßj¤¯ã®K9X?K”·Å~î¶‚Z¾f…×°£o{Ľ°¶™©Žf€%3ñÃÔJ
=========¦ÇnNZ5»ö¡Qtþ½c®»Æ+ké4ÃÑÃ>6ftñýð›À€Å“O,BÅ X“.l?)ß½¢ÒÅSK†ùh¯d1$Z]–
=========±RïGŠqMÞsS©Wž‰ÖÝŽõH›åc5[¬•¯¾¾mæÎ³hœ¾ ©·‹œuì-PËŽýÑ›_
=========ÿƒ¡IX#ñµ¡ýM”|ù{êÃË¿Úa(‘D®³“(æÃ6ïºÕrÈtCä±ööÑîû}H3¯ÛÃòÇ N>)
=========a>K%>BÅ
=========D^6,.
$PßÁ½½èjÿØ{õ³¹F’L‘×zj_ÙÕ3›)vdèL_ »Ç3Oîü÷×Û¹¦+Úæä
=========±ËÃÂ]ÃËœtÂjãþéæ?†_°ˆÂåÒùP+¤k·J&ÒªØ4:·ú×wã1)¶FK²žlXòžW–ÑÒ—
=========§p]÷[~Tï×€¾ÿ™{øä”Â};œ¬—‰,©ÿ‡×4Ã…b„xø°ë ²IÅ Ã
=========÷ŠÒw&ä±¥ÔòÊî¹äÏÀî„!Dû”n½A|EŸÏOÞû×ùÎkÝþí^$!äQ#ºß°ä,‡ð²
=========aë[ïðk«šÄ»®Ö3¹Ûœ;§Ä·ïNhq&øÒ,“2%;ƦkI„*ñ̪úû0ë¶Í®‡®ªØ PÿÛº&QqÝÞ•óÒ}XµïôÖñ=}Öâ‡h©®ð<¡‚¨
=========«ÌhØÃWæRÃaZIÀ€â€”ÅÈd뫾`É…Ê_ú„SÄû¿¼k5F•œ¾Ü/åÖ-|Å
=========_žä¿s¶¢„Ä<Õ d~8²NÂ¥HnïÚmN
û×:ñ·E|¡DYÜ)a¼ûô¼Ñ
ã±z„?%ØËW¯Ì2³_°äp¯†’Ù>žïšÞ™*ó¾sÐé1.éL|®9¬ëÃÿ*ÚGˆhq.®
=========òpõÄg@kf?lrT[ÈÒ0Q/ú1NÛ¼AzßÞÃuØJ﩯Òwé6nùœQ&ÆùÒpNójbµú]
=========%ñÓ“ù¿ô©“Ùx–; ,¤uÀ;/Œ1dŽ,ƒ½Úßn\Ý^å_Ô”ëèr¢C¾÷ÕŒ×;tüy
=========įiõR’åïÒŸDÒ Kß3Šè§Þz¢]þnûò%î9îN¶NlŽFžÚQžÉ^Ô8µý¼JCuÂTCÄF
=========£qày딞’Ÿýý*Ít¯kx–f^@‹#¿½m*Ã(àá–ÌÙm¿ÄaÙÚÞ˜¯“Þ_
=========UiAÝÍâ`ç¿‹ý6y»ºuAu½ßø&Z@ŸÛÅã3î¢$!û5‘ɲìe‚tW~hʱœì¸Gy*
=========OÑé¬_KŸMˆ‰g³ág~¾YůöZ¯Ýk¨á¯’¼`E×êW÷
ݺ¦žx©G-nzÞYäŠÏ¯×DÊJ“I
=========³Ài[÷æmÅøÑ×Ñ/lBªþû½;,ÿáñ_9:ÜQC…9&v¶[ÐÿUø?„,?ÃÍU Œ7r5ƒ
=========Ð(ËjÅsÎæ:m{[G’ªX+ŸÏ½xOþ¼´Þ3,×xŸà’êqÓb,—‡Xgˆ¥œnI$_7sððüãÒ*
=========;;RDþ(=ú,É™¶ö¿1Ï¢‹ÿ—çÿ´ìÏ(™²õûCVáVeÁÀ]¤’;7NÑÊ3ç˜ÒÆ–ó
=========;ÿÈìêé{¥îTŠ«¾ÞÖòâö|¼,†²˜/ŸÓÖ‚gó0Õpªú+$(üÚÛÃ(²hi€eOrLØã\Üó?_
=========wO”å©ñT»Ïϯ4&½‹Ðíë©Î±å”#£5Ü
UTzÙ]{Ïjê`_ýŽB–|tÌôúc}½Vþ
=========”SfôFvÈÛÇÚ+áyª…ß/+—YŠW çN¾x}ú ?÷ó{‹ÔOd+ccö(ÿqšvOšP&MÏ®ÕË
=========ÿçtA:!7±¦ñ»z£2öµý1äv¶pÆú‰•å`úX_
™%§rà1|rûÙfûÖCÎ/îp³kqºáçG
=========t½Í¤çm(Sê^ÏR#ƒ¬Ieß(µÒö¯—¾ê)±ŒNeù¡}%àŒÛçÑãìfø;±îÁ¦(árïÈËuaÖ
=========çx½ðà #Ÿ?[É€°þì¸Àž?lXìäáÜKõwŸ…ô5¯¹¹8Ù@8Ÿ>oWbFf·VŒÚNdš!ʺÖÇŸ¦F#
=========_§Ií¾·ÒŸ®#FU·ûjO+`ß,7èlÝøÁFÕaŽÈôgHÛ¿Tg°c©¼}Î
=========1ë1i¦þ€p q~¿oŽÁãOPÀnÂãŸòYø½¦ú j}iCî÷á6iß`/Sìž¡ÆVô¥“Ø®U´
=========¶f%^Äð-ÊÀÝÔÕCÒñU“Gžha°_jÂô“Úå—}÷zT;÷|;`<Ñt.ùÇa*ê¿.ÿ/xQغEùÒ
=========6¹úô¿Ò¾º(Þ†µÎœî°7°¦p8íñÐ|àRÍù“\n)^Œ8½4:š4´È}.Év»ÿ“CYjÊü‰
=========6áóP_Çý’ÊrëfwþzæðtSâ€Ã¨â€º(ϩfïãÕð—;Ã’^È–aÈst¬AvFÃoaDãó¦Þ~3î_+ž
=========qdþ·*ïäx9׎›’…ý‰çŸ<.îsXÊØ×ËÓÇyŸ90â€~{
³…¡-E^ÕQè/gú?§Ãu9P$©,
=========¼‘½¨v@ž{œzé´kn²®Ëyœªc
®'Co÷‚±*m¹h^5ÿî€"¾ÛëGO☂íRÐ}#wKZ
=========ÔûI¶r_ÕáÛ±~°…FïÄAß!NAz-¼)Ã¥AS6u«Ó0£r^ÇÝ Gõô_df?ýÏfzß” b»
=========xS§$Þ— Fˆg*ZP·C²»¿ûßäì³ÿYbˆzbtˆüT»«VEà oåéJÿfyY
þ9SÛªr´!ÿ4
=========Cyr 5Œ‹+q¾ó—-„Àí󽻄U¿czÊ“Ý_jóÅTD'î„&º}‰«xIÛÓõùAtfüõ
=========þh¨Sº¹ßQ3ÇO³&W“¤Êåê¬÷€qß
To Allan Miller (at TSZ):
Ah, good old civility! People are being dishonest, now?
Some certainly are. And to call things what they are is civility.
I don’t think, anyway, that you are in that lot. I appreciate your comments, as I have already said.
Just to clarify this, as I can see it being misrepresented, more conventionally, these would be termed beneficial and deleterious.
I appreciate your clarification, and I don’t think I have any problems with those terms. Anyway, you may have realized that I am not speaking of population genetics here, but just dealing with the causal logic of the neo-Darwinian explanation.
In the language of my argument, therefore, the relevant concepts are:
a) If a starting gene (such as the duplicated, inactivated one in my example) gains some biochemical function that can improve reproductive fitness, I call that a positive functional mutation, which is positively selectable by NS. If the selection happens, the result is that the new gene is expanded in the original population. The concept is simple: in the beginning, the new mutation is necessarily present in a single individual. But, thanks to the reproductive advantage of that individual and its progeny, after some time the new gene is represented in the whole population, or in a good part of it, partially or totally eliminating the original form. That’s what I call “expansion”. Expansion is very important, because it is the real factor that lowers probabilistic barriers: if a mutated gene expands from one to 10^9 members of the population, say, its probabilistic resources for accepting a new favourable mutation are increased 10^9-fold (a numeric sketch of this point follows below).
b) At the same time, mutations that lower a function, or eliminate it, can be negatively selected, that is eliminated. That is a very important element too, because it makes functional genes tend to remain functional. In general, it works against evolution.
c) It is true, however, that if a first mutation is selected, because functional, from that moment it is also preserved, to a degree, by the same principle of negative selection. That helps the effect described in a), because it means that the “work” already done will probably not be lost.
That’s the most I can say in favour of NS. From that perspective, I calculated that the theoretical power of a single perfect selectable intermediate is very strong, although certainly not omnipotent. Its practical power is certainly much lower than what I showed.
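A small numeric sketch of the “expansion” point in a) above, under toy assumptions of my own (a fixed per-individual, per-generation probability p of the next needed mutation, and a fixed number of generations): the chance that the second mutation appears somewhere among the carriers scales roughly as 1 - (1 - p)^(carriers * generations).

# Toy illustration (not gpuccio's calculation): how expansion of the first
# mutation multiplies the chance of a second, specific mutation arising in the clone.
p = 1e-9            # assumed per-individual, per-generation chance of the specific second mutation
generations = 1000  # assumed time window

for carriers in (1, 10**6, 10**9):
    trials = carriers * generations
    p_at_least_once = 1 - (1 - p) ** trials
    print("carriers = %d: P(second mutation appears) ~ %.3g" % (carriers, p_at_least_once))

With these illustrative numbers the chance goes from about 10^-6 for a single carrier to near certainty once the clone has expanded to 10^9 carriers.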
But the problem is always there. Even for one selectable intermediate, a lot of logical problems remain:
1) What is the function of the intermediate?
It cannot be the function of the target, because otherwise the intermediate would be the target itself. A member of a protein family is not an intermediate to the family: it is part of it.
It is not the function of the starting sequence: indeed, if the starting sequence retains its function, its structure will be preserved by negative selection, and evolution towards some distant target will be impossible.
It could be some other function, but then why should it be a step to the target function?
2) If the gene is inactivated, and can therefore mutate freely, it is by definition non functional, and obeys pure laws of random variation. How can it, then, find new islands of function?
3) And, even if it finds a new island of function, how does the organism “understand” that a target has been reached? Indeed, transcription and translation of the new gene would have to be reactivated, if they were inactivated, before any effect on reproduction can manifest itself and NS can enter the game.
4) How is the new protein function, found by RV, immediately integrated into what already exists and works, so that it improves reproductive fitness enough to be expanded? Most protein functions are highly integrated in complex molecular machines, as Behe showed long ago. Even a single new protein, biochemically functional as it may be, has little chance of achieving immediate integration and success.
These are only some thoughts, just to go on a little more with a discussion that, I believe, has no more great chances of reproductive success in the combined environment of our two blogs.
computerist (32):
Well, that’s about as good as you can get with a packaged random function.
And again, mostly garbage. Some runs ‘look’ short, which I guess means lots of unprintable characters, control codes, blanks, etc.
Anyway, nothing to see here!
computerist:
A discussion of some of the issues with using random number functions in code:
http://support.sas.com/documen.....281561.htm
Mostly you get pseudo-random numbers. Some of them are really bad, in fact.
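For what it’s worth, Python’s default random module is a pseudo-random generator (the Mersenne Twister); if you want bits drawn from the operating system’s entropy source instead, you can swap in random.SystemRandom. A small sketch:

import random

prng = random.Random(42)         # default Mersenne Twister PRNG, here seeded for reproducibility
truer = random.SystemRandom()    # draws from the operating system's entropy pool (os.urandom)

print(''.join(prng.choice('01') for _ in range(64)))
print(''.join(truer.choice('01') for _ in range(64)))

Neither source will rescue the argument either way, of course; for strings this short the two are statistically indistinguishable.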
Mung:
Gee which definition should I pick? 🙂
From Wikipedia:
Not the first one surely!! hahahahahahahahahhaah
Jerad,
Global warming is good. A cold planet is bad. And besides, it appears the rising temps have been due to a solar maximum, which unfortunately means the earth will be cooling off.
kairosfocus:
: coin length
The user should be able to specify the length of the string (i.e., the number of coins in the toss).
: manual mode
The user should be able to specify the number of rows to output (how many strings will be generated and output) in a given iteration.
Does that represent what you have in mind?
Probably no icon, unless I turn it into a web page. At first it would just be a console-based app. You may actually be able to run it by entering it here:
tryruby.org/
Once I write it, that is 😉
http://www.tryruby.org/
Thanks, that’s about what I was thinking. KF
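For what it’s worth, here is a minimal console sketch of that spec in Python (names and defaults are mine; the real tool may well end up in Ruby, as suggested above): the user sets the coin length and the number of rows, and gets that many random H/T strings.

import random

def coin_rows(length, rows):
    # One row = one random string of H/T of the requested length (the "coin length").
    return [''.join(random.choice('HT') for _ in range(length)) for _ in range(rows)]

if __name__ == '__main__':
    length = 500   # coin length: number of coins per toss
    rows = 10      # "manual mode": how many rows to output in this iteration
    for row in coin_rows(length, rows):
        print(row)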
Allan Miller@TSZ
http://www.thefreedictionary.com/new
One that does not exist in any prokaryote but that exists in all eukaryotes. Would that qualify, in your mind, as a ‘new’ protein domain or biochemical function?
I don’t see what is so difficult to understand here.
Protein domains along with biochemical functions, presumably, arose through the same evolutionary process as eyes, and wings, and tails, etc. All currently identified protein domains did not exist in the LUCA, did they? Do prokaryotes have rhodopsin? (For all I know they may, lol.)
“In a bright light rhodopsin breaks down into retinal and opsin; in the dark the process is reversed.” I assume there is some biochemical function involved in that process.
Isn’t that pretty much what gpuccio said? He grants the families. Where is the evidence for the ancestors and the functional intermediates?
You mean they contain instances of protein domains?
Isn’t that what gpuccio is saying? And then you go off to talk about proteins when gpuccio is trying to talk protein domains. Who cares if they get swapped around? Where did they come from?
Zachriel@TSZ
The components are instances of some protein domain. Where did the components come from?
That’s debatable, and probably irrelevant.
http://www.merriam-webster.com/dictionary/run
The 424 Definitions of the Word “Set”: by Lee Andrew Henderson
Who cares if they can be re-arranged? Sounds like modular design to me.
Where did the domains come from?
Why do I get the feeling that Jerad is MathGrrl?
Nah. He’s not.
gpuccio:
Yes. Isn’t it just magical how yet another fortuitous mutation came along at just the right time: the time when, if we take this sequence and turn it into a string of amino acids (a polypeptide), it just happens to have a selectable function?
Maybe cells are intelligent?
Juartus & Mung (43, 44):
I’m much better looking. In low light. After 3 drinks.
lol
Joe (37):
Kind of depends on where you live.
Probably not actually:
http://www.skepticalscience.co.....-basic.htm
Generating CSI with Faulty Logic
I take a fair coin and toss it 5 times: HTTHT
That’s one of the set of 2^5 possible sequences (probability 1/32).
Now I copy that sequence, and modify one position at random.
Say the first position: TTTHT
Is that sequence a member of the original set of 2^5 sequences?
Why or why not?
What is the relevance, if any, for calculating CSI?
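For anyone who wants to see the membership question concretely, here is a tiny sketch (my own; it just enumerates the space and checks the modified toss against it):

from itertools import product

# All 2^5 = 32 possible 5-flip sequences
space = {''.join(seq) for seq in product('HT', repeat=5)}

original = 'HTTHT'
pos = 0                                   # modify the first position, as in the example
flipped = 'T' if original[pos] == 'H' else 'H'
modified = original[:pos] + flipped + original[pos + 1:]

print(len(space))                         # 32
print(modified, modified in space)        # TTTHT True -- still a member of the original set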
Jerad:
A History of Solar Activity over Millennia
kairosfocus,
https://gist.github.com/3816082
Mung (49):
Obviously since there are only 32 possible sequences. List them all before you start. Modifying a given sequence just gives you another one of the 32.
There are only 32 possible sequences of 5 Hs and Ts.
You tell me. If a randomly generated sequence is randomly modified and then arrives at a target or functional/meaningful sequence . . .
Joe (50):
I prefer my multiply sourced, peer-reviewed science I think. You should try reading more than one person’s opinion.
I want to make several points after browsing through the blog entries for the parent topic of this thread:
http://www.uncommondescent.com.....ent-434996
It seems that the TSZ objector to design, AF, insists on the long-since-corrected canard that design is a “default” inference.
1. I was impressed by several of the entries in this blog. In particular those authored by gpuccio and especially those at #856 and #909
2. I would like to suggest that certain entries in certain blogs (like those 2 above) deserve to be saved or collected in a kind of structured (more or less) compendium of ID thoughts, principles and/or essays.
3. I would say that the two entries above (#856 and #909), and maybe others, would make very good sense in a scientific paper on an ID topic. Gpuccio has a special gift for structuring clear ideas and principles and for obtaining and sharing with us significant insights into the core topics discussed. I think that many of his and others’ ideas expressed in these threads will trigger interesting thoughts and associations in the minds of readers and maybe plant seeds that later germinate in other relevant blog entries or, who knows, papers or books. This is like genuine on-line group brainstorming.
4. Related to this, I was wondering if WordPress has the ability to “grade” a blog entry (with “Likes” or “Dislikes” or in any other way). That may help authors get feedback on the value of their contributions as perceived by others, and also help newcomers, later, go straight to the “high mark” entries in very long threads – like the one under discussion.
5. What about giving the author who proposes a topic for discussion the option to start a moderated thread? The moderator may be the topic initiator/author, or may be an Editor or another “principal” selected by the author or the editors. Again, I don’t know whether WordPress provides moderated blogging (or facilities for that), or whether it is rather a matter of how the editors operate. The MAIN ROLE OF THE MODERATOR in my view would be to:
a. Keep the dialogue aligned with the main topic of discussion or with relevant sub-topics – as the moderator decides.
b. The moderator should be able to warn the author of a blog entry that he/she is not providing an answer to a pending question/issue or challenge, and to remove (or ignore) the entry as irrelevant, either immediately or after a number of warnings.
c. The moderator should direct the exchanges and dialogue toward achieving concrete, specific goals or objectives for the issue at hand, and filter out the “noise” produced by perturbers, i.e. participants who are not well-intentioned and honest.
d. I think that too often the energy and goodwill of ID bloggers is wasted on dialogues and exchanges that are just useless and bring no benefit to the authors and readers, but only to the “enemies” disguised as posters.
6. A topic author may propose another type of blog: collective collaboration for discussing and producing a collective essay on a particular ID subject. An example of such a Goal-Oriented topic might be the elaboration of a systematic hierarchy/list of topics in Intelligent Design. Or creating a Cell Model or a Replicator Model to be used later as the logical foundation to investigate evolution claims or cell biology research or cell biology research prognosis. Again, a good example for structuring a model or principles in investigating evolution claims are those entries at # 856 and #909 authored by gpuccio.
To Allan Miller (at TSZ):
You make a few points that should be answered.
I do wonder what GP has in mind when he says “new protein domain” or “new biochemical function”? Does he have one that he considers ‘new’, and definitively inaccessible by the probabilistic resources available to any ancestors – something that can be investigated, rather than his personal, very general assumptions about the structure of protein space and its distribution of function?
According to SCOP, there are about 2000 (1962 at present, but the number is constantly growing) protein superfamilies in the known proteome. Proteins included in different superfamilies are unrelated both at sequence and structure level, and share no obvious evolutionary relationship.
New superfamilies keep emerging throughout the whole history of life, as can be seen here:
http://www.plosone.org/article.....ne.0008378
(Thank you, Zachriel, for providing me a quick reference to one of my favourite papers. I am afraid, however, that you have not really understood what it means. It is not about “evolution” of the domains, but rather about their emergence in natural history. I quote:
“Notwithstanding, these data suggest that a large proportion of protein domains were invented in the root or after the separation of the three major superkingdoms but before the further differentiation of each lineage. When tracing outward along the tree from the root, the number of novel domains invented at each node decreases”. Emphasis mine.)
To Allan Miller (at TSZ):
Behe’s CCC calculation suffers from a lack of consideration of recombination too, incidentally. It’s a very important, sometimes underappreciated evolutionary force
What you say about recombination makes sense, and I can agree. But I don’t think it can solve the fundamental problems about completely new information. Anyway, it can be reasonable to try to evaluate the real powers of recombination on some empirical basis, but it is certainly true, as you say, that it is a “sometimes underappreciated evolutionary force”. Underappreciated, and not much supported by evidence, although often generically invoked.
I understand that darwinists need to keep some faith in something, and as NS does not seem to help much, recombination can be of comfort.
It is of some interest that my much-cherished “rugged landscape” paper, after concluding on the powerlessness of NS to recover fully a previously existing function, even in extremely favourable lab conditions:
“The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical” (emphasis mine)
goes on with some wishful thinking, completely unsupported by any data in the paper, about a possible way out:
“and implies that evolution of the wild-type phage must have involved not only random substitutions but also other mechanisms, such as homologous recombination.”
As can be seen here:
http://www.plosone.org/article.....ne.0000096
And however, Behe in TEOE makes an empirical evaluation of the powers of evolution from real life examples. It can be correct or not, but it is not “a calculation”. In real life, recombination can certainly act, even if not calculated.
Jerad:
My link was not an opinion and was science. Perhaps you should try learning the difference between opinion and science.
Speaking of recombination- how was it determined that recombination is a blind and undirected chemical process?
Jerad,
That paper by Ilya G. Usoskin of the Sodankyla Geophysical Observatory at the University of Oulu, Finland was published in Living Reviews of Solar Physics.
IOW it IS peer-reviewed science.
Now what do you have to say?
Jerad:
Are you and/or anyone of your acquaintance taking me up on the offer of an up to 6,000 word essay for UD presenting your view and empirically warranted grounding of the blind watchmaker thesis style account of origins?
I think about a week has now passed since I made the offer.
KF
And Zachriel with the equivocation:
Intelligently designed evolutionary processes or blind watchmaker evolutionary processes? Or is cowardly equivocation still the best you and your ilk can provide?
And what about the proteins whose folding requires a chaperone or chaperones?
Which came first, according to the modern synthesis- the chaperones required for folding long polypeptides or the long polypeptides that wouldn’t fold until a chaperone came along to aid in that process?
KF (60):
Well, I’m not. And I told you I wasn’t going to at the time very soon after you made the offer. I can’t speak for anyone else as I’m not in contact with any of them so I’ll leave it up to them to decide.
I’ll just reiterate that I can’t possibly hope to match things written by those much better versed in evolutionary theory than I am. Nor do I have anything new or interesting to say. I’ve been trying to stick to the math stuff since then.
Joe (59):
I had a look at the journal and it does look fairly legit, I must say. It sounds more like a review than research, but that’s quibbling. I don’t know how others in the field viewed this work, but I do know that a great many specialists in that field have looked at the data and come to a different conclusion.
So, even a peer-reviewed journal article can still represent a minority viewpoint, which is why I suggested you read some other opinions. I haven’t got the background to look at the big picture on this issue, so I look to the consensus of the many others who do have the background.
I try not to make up my mind based on ideology or one scientific paper, I wait for some kind of group agreement to arise.
InVivoVeritas:
Thank you for the kind words.
Your comments about brainstorming and about trying to build a collective set of detailed arguments are very interesting. Maybe some organized project could be proposed.
I must say that I am extremely grateful to our “adversaries”, especially the best of them, because they truly stimulate and inspire the discussions about ID.
Intelligently designed evolutionary processes or blind watchmaker evolutionary processes?
Zachriel:
Biased towards a goal, which means you are talking about Intelligently designed evolutionary processes. And recombination is an intelligently designed evolutionary process- see Dr Spetner’s “Not By Chance”, which means you are definitely talking about intelligent design evolution.
Thank you for clearing that up. Carry on…
In response to kairosfocus challenge, Zachriel posts:
But the evidence gathered since then has not borne out his “powerful argument”, which makes it impotent. There is still nothing that supports the claim of natural selection being a designer mimic.
That is what kairosfocus is looking for- you could start with telling us how to test the premise that any bacterial flagellum evolved via accumulations of random mutations. My prediction is that you won’t.
Nope- ns is always just assumed, never demonstrated.
Artificial selection is NOT natural selection- AS has actual selecting taking place whereas NS is just a result of 3 processes.
1- NS requires that the change be random/due to chance, and no one has demonstrated that wrt finches
2- Still no designer mimic
3- baraminology is OK with adaptations
Biased towards a goal, which means you are talking about Intelligently designed evolutionary processes. And recombination is an intelligently designed evolutionary process- see Dr Spetner’s “Not By Chance”, which means you are definitely talking about intelligent design evolution.
Please define your use of “fitness” and also how it is you determined that recombination is a blind watchmaker process.
Artificial selection is NOT natural selection- AS has actual selecting taking place whereas NS is just a result of 3 processes.
Nature doesn’t select, and natural selection could never produce a toy poodle even given selectable intermediates.
NS requires that the change be random/due to chance, and no one has demonstrated that wrt finches
Natural selection is a result of heritable random variation that leads to differential reproduction-
So, yes, there needs to be variation, and it needs to be random, i.e. a chance event.
The variation is the change, and according to the modern synthesis it is entirely by chance. Changes due to the environment would be directed changes a la Dr Spetner’s “built-in responses to environmental cues” - IOW more evolution by design. Thanks.
Also with natural selection whatever is good enough survives to reproduce. And with cooperation even the not good enough can make it.
Please define your use of “fitness” and also how it is you determined that recombination is a blind watchmaker process.
It is duly noted that you refuse to define your terms and can just declare what needs to be explained.
BTW- Fitness, wrt biology, refers to reproductive success, an after-the-fact assessment.
gpuccio:
Where did the protein domains come from that are required for recombination? 🙂
onlooker:
And the ability to replicate is the very thing that needs to be explained.
Wrong. In GAs and the environment reproduction proceeds by design. So there, I match your bald declaration.
Nope- 1- they are NOT the mechanisms of the modern synthesis 2- no functional complexity was constructed 3- she starts with the very thing that needs explaining.
It is quite telling that onlooker so uncritically accepts what Lizzie sez and acts like a belligerent little child when someone proposes a semiotic theory for ID.
NS requires that the change be random/ due to chance
Jon F
Can’t call it “natural” selection if the mutations are directed. Darwin’s whole point with natural selection is design WITHOUT a designer. WTF, indeed…
InVivoVeritas:
I second the words of gpuccio @65. Thank you for your comments.
Some of them. 🙂
I wonder if more of them would be willing to post here if a thread was moderated, or even if they could moderate a thread.
kf does have his 6k word essay challenge up.
Zach:
Evolution by design.
It cannot be planned/ directed and still be natural selection. A designer mimic that uses designed processes is a contradiction.
Well of course it already exists- it is ONE of the INPUTS.
That is incorrect, unless nature performs magic. However if your position is correct nature does indeed perform magic so you may have a point.
Is that what you are saying? Because at face value natural selection cannot have any sort of designer input at all.
No, YOU are confused because YOU do not understand natural selection. OTOH I posted that variation is one of the inputs with natural selection being the output. And I supported that claim with two references.
Yes, because those random variations are a required input for natural selection. You appear to have difficulties understanding natural selection, as evidenced by:
1- Natural selection is a result and doesn’t act on anything:
William Provine, The Origin of Theoretical Population Genetics (University of Chicago Press, 1971; reissued 2001):
Thanks for the honesty Will.
Allan Miller is confused:
Strange that I have been positing for YEARS that GAs are what are running living organisms- nature doesn’t create GAs, Allan. Algorithms are a thing born in minds. GAs are what control Dr Spetner’s “built-in responses to environmental cues”- they are software, whereas nature can only possibly account for hardware.
Front-load starting populations with GAs, provide some initial resources along with mechanisms of recycling, set it and forget it
I disagree, Allan is not at all confused.
The question is, from whence do GA’s derive their “power.”
They say it’s from the mechanism alone. I say it’s from the mechanism + design.
Now if they want to admit that there is design in nature, that certainly explains the “power” that exists in nature.
If they claim design has nothing to do with it, then they must show that the mechanism alone is sufficient.
This they cannot do, at least not using GA’s, lol.
Power to do what?
Which design can get to and nature cannot. IOW it demonstrates the severe limits of natural selection.
If the source of variation is planned then it cannot be natural selection, by definition.
Hey look, Halley’s Comet! Darwin always referred to variation by chance. Mayr, in “What Evolution Is”, says teleology is not allowed. He also says:
And again I will add, so you can continue to ignore, natural selection was proposed as a designer mimic, ie design WITHOUT a designer.
Nope. How can I be when I told you exactly what each is wrt each other?
To Allan Miller (at TSZ):
I had written a long answer to you that was completely erased from the form because of some wrong typing before I could post it (one of the least intelligent forms of RV!).
Now I am tired and frustrated. I hope I can find the goodwill to write it again tomorrow…
I’ve changed my mind. Allan is confused.
GA’s do not use only differentials in birth and death, mutation and (optionally) recombination.
If they did, it would indeed be miraculous to find them at all useful in fields such as math and engineering.
Where do you people come up with this stuff? Seriously.
I like how this guy isn’t afraid to make it explicit:
http://www.obitko.com/tutorial.....rators.php
http://www.doc.ic.ac.uk/~nd/su.....ementation
And today’s Junk for Brains winner is, onlooker!
Allan@TSZ:
Mung:
Zachriel@TSZ:
I know what differential refers to.
Let me rephrase:
There is more to a GA exploring certain kinds of digital space than differences due to relative fitness (usually defined by a fitness function or map), mutation and (optionally) recombination.
For example, potential solutions must be encoded into a “chromosome.”
Encoding potential solutions into a chromosome implies there is a problem to be solved.
Information about which potential solutions are more likely to solve the problem must be implemented.
There’s more to a GA than just the three things Allan listed and claimed to be the only things used to explore the “digital space.”
If I wasn’t clear before, I hope that helps.
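A bare-bones GA sketch (entirely my own toy, not Lizzie’s program) may make the encoding point concrete: the chromosome encoding and the fitness function are where the problem, and the knowledge of what counts as a solution, get built in; mutation and selection do nothing until those are supplied.

import random

TARGET = 40        # the "problem": find a bit string whose ones sum to 40 (an arbitrary toy target)
LENGTH = 64

def fitness(chromosome):
    # The fitness function is where knowledge of the problem lives.
    return -abs(sum(chromosome) - TARGET)

def mutate(chromosome, rate=0.01):
    # Flip each bit independently with the given probability.
    return [1 - bit if random.random() < rate else bit for bit in chromosome]

# Each chromosome encodes a candidate solution as a list of bits.
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 0:
        break
    parents = population[:20]                              # selection on measured fitness
    population = [mutate(random.choice(parents)) for _ in range(100)]

print("generation", generation, "best fitness", fitness(population[0]))

Change the encoding or the fitness function and the same mutation/selection loop solves a different problem, or none at all.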
Zachriel:
Just so we are clear that nature does NOT select, meaning the only selectable intermediates are artificially selectable ones.
If the source of variation is planned then it cannot be natural selection, by definition.
What I said is true. You are nobody to say otherwise and you sure as heck cannot produce a reference to support your claim.
That is false.
Nope. I have made it clear which is which and I have supported my claim with references. So stuff it Zach.
It will be subject to something but again natural selection is a result of three processes. You don’t have any idea what natural selection is.
Darwin always referred to variation by chance.
Page number of “On the Origin of Species…” in which he states that the variation for natural selection is/can be non-random?
What part of being a designer MIMIC allows for an actual designer?
And nice of you to cowardly avoid my Mayr reference.
So to recap I provide references to support what I claim and Zacho just repeats his refuted nonsense. And we are the people who don’t understand the theory of evolution. 🙄
So how about it Zach? Do you have the sack to actually ante up some references to support your claims?
Also natural selection is supposed to be blind and mindless. And that cannot be with directed mutations.
IOW only if one totally redefines natural selection can one say that natural selection allows for artificial inputs
Mung World
ok, so I created my own version of Lizzie’s program.
took less than 10 seconds
1522 generations
What’s the big deal?
keiths@TSZ
It’s not an “impression” that I am under. It’s a fact.
Lizzie has encoded a potential solution to her problem in each member of her starting population. It matters not that they were randomly generated.
1.) if we change her encoding her program will not work. It depends upon sequences of 0’s and 1’s. Just try changing that and see what happens.
2.) There is at least the possibility that a solution will be found among the first 100 randomly generated genomes, though she doesn’t actually check to see if that is the case.
Each chromosome can be thought of as a point in the search space of candidate solutions. The fitness of a chromosome depends on how well that chromosome solves the problem at hand. For that to happen the potential solution must be encoded in the chromosome.
Maybe someone over there at TSZ will be kind to you before you put your foot in it any more than you already have.
Zachriel@TSZ
Novel. Would that be like, new?
Maybe you and Allan can talk:
http://theskepticalzone.com/wp.....ment-16208
chromosome encoding
http://www.cse.unsw.edu.au/~bi...../05ga.html
http://en.wikipedia.org/wiki/C.....gorithm%29
To Allan Miller (at TSZ):
So, back to the task.
I certainly agree that recombination happens. Exon shuffling and domain shuffling can have a role in biological reality. And I do believe that we must try to understand what that role is. Unfortunately, not much is known about the causes and mechanisms of recombination. The problem is that we, in ID, like to test the explanatory power of any proposed model, instead of accepting some explanation just because someone likes it.
If recombination is truly random, and if, as you say, it can occur at any point of the genome, then we should evaluate its probabilistic power by sizing the space of all possible recombinations, and of functional ones. That is probably a very difficult task. That’s why I usually stick to the model of single-domain proteins (basic protein domains) in my argument. It is simpler, and more tractable.
For recombination, we should carefully consider the emerging role of finalistic adaptation mechanisms. It is possible that certain recombinations are favoured over others by the structure of the genome itself; for instance, whole-gene or whole-exon recombinations could be favoured over purely random ones. There can be genomic sites that make recombination more likely. All of that could increase the power of recombination, but it would have to be explained as an adaptive mechanism already present in the existing genome.
Finally, I doubt that recombination can have any logical role in the explanation of basic protein domains, because they are functional units that cannot be deconstructed into parts that would yield selectable biological functions.
You say:
Behe’s CCC argument relies on serial mutation 1 then 2 or 2 then 1, with no benefit till both occur in the same individual. Calculations show it to be of low (though not vanishing) probability. But since 1 and 2 must necessarily be at different positions in the gene, recombination can occur between them, increasing the chance substantially, even though recombination will cause occasional loss of 1-2 links.
I think you are wrong here. First of all, I would like to restate, for clarity, that Behe derives his conclusions from observation of empirical data, and then he argues that those observations are in line with his calculations. But that is not the real point.
The real point is that, while your discourse about recombination can make some sense for the recombination of functional elements, it is of no importance in the case of individual mutations that have no function until they come together in a more complex output. The important point is: a recombination can certainly join two mutations, but it can join any set of two mutations with the same probability, unless we can show that some mutations, and in particular those that are necessary for the future function, recombine more frequently than others. IOWs recombination in this case does not alter the probabilistic scenario.
This is an error often made by many darwinists. A random effect does not change the probabilities of a specific output, unless we can demonstrate some explicit connection between the effect and the output. That’s exactly the reason why the so often invoked neutral mutations and drift have no relevance in the computation of dFSI. They are random effects, and they favour no specific result.
So, again, I see no relevance of the recombination mechanism for the emergence of a new protein domain.
Other mechanisms clearly occur that cause fragments to be moved greater distances in the genome. The existence of long areas of sequence identity (or close enough to be revealed by statistical test) in different genes – the very thing that enables us to declare homology of a ‘domain’ – is regarded as evidence of the within-genome common descent of that sequence by duplication, which necessarily involves a recombination event.
I have absolutely no problem with these concepts. I have absolutely no problem with common descent.
Other explanations for that homology are pretty ad hoc – one could infer that it was moved (or tooled in situ) by a Designer – but what would distinguish identity from such a source from that caused by known mechanisms of recombination?
I have no intention of proposing any ad hoc explanation for those data. I will not invoke any Designer for them, unless and until I can show that dFSCI is implicit in them. At present, I have no reason to believe such a thing for the existence of homologous genes.
Finally, you say:
If you are right about GAs, it is one hell of a coincidence that a method of exploring certain kinds of digital space using only the biological observables of differentials in birth and death, mutation and (optionally) recombination should have such power that they are popular tools in engineering and maths, as well as biological applications unrelated to modelling evolutionary mechanism (eg phylogeny) … and yet you think the algorithm has NO power in the very realm that inspired it – biology? And despite working fine in other statistical fields, according to some anti-common-descenters in ID their use in tree-building leads inevitably to false phylogenies …! Every time they are applied to the biology that inspired them, they apparently fall to bits. One hell of a coincidence.
I am not sure I understand your point.
I have clearly shown that Lizzie’s GA is an implementation of IS, and not a model of NS. That GA has nothing to do with
“the biological observables of differentials in birth and death, mutation and (optionally) recombination”. It has, instead, everything to do with the much more general logical and mathematical concepts of random generation and variation of strings, and intelligent selection according to a mathematically defined target. I suppose that many other, more popular GAs have similar characteristics. But I am neither an expert nor a fan of GAs.
I am not saying that GAs are useless. Lizzie’s GA finds its solution (although it could be found much more easily by top-down reasoning). Even the infamous Weasel GA finds a solution: the Weasel phrase itself, which it already knew. But other GAs can certainly be more useful than that.
If I believed that algorithms are useless, I would not use a computer. Algorithms can do very remarkable things. And GAs are simply algorithms that use, at some point, some random variation. So, they can certainly be very useful, for specific problems. But that does not mean that they are giving us any useful information about biological systems, or NS.
If a GA did really model in some way, even grossly, the effects we expect in NS, it could probably show, very trivially, how some microevolutionary events take place, like antibiotic resistance and other events where minimal variation is functional in a certain environment. And nothing more. But we already know that from the observation of spontaneous biological systems.
What no GA can do is generate truly new dFSCI for a truly new function, about which the original algorithm has no direct or indirect information.
Finally, I have no objections to using GAs in phylogeny. If they work well, I am perfectly fine with that. As phylogeny is inferred from homology and other explicit mechanisms, there is no problem in using some GA that correctly models those mechanisms. The results can be more or less correct, but they are certainly potentially valid and interesting.
As I have already said, I have absolutely no problem with common descent. Indeed, I believe that common descent is the best scientific explanation for what we observe, and that it is an invaluable component of a credible ID scenario.
To onlooker (at TSZ):
Any fitness function in any GA is intelligent selection, and in no way it models NS.
Please, do not consider that statement any more. Keiths is right, it was a wrong generalization. I have given a very generic example of how a fitness function could be built that, while being essentially useless, at least would not be an implementation of IS, and could resemble generically what we expect from NS (IOWs, it would add nothing to what we already know).
The correct concept is as follows:
It is completely wrong to model NS using IS, because they have different form and power.
As I said, you help me to refine my concepts, and I appreciate that.
Before someone states that I am changing arguments, I would suggest that you read again my original definitions of IS and NS, from which this statement can very clearly be derived:
“d) NS is different from IS (intelligent selection), but only in one sense, and in power:
d1) Intelligent selection (IS) is any form of selection where a conscious intelligent designer defines a function, wants to develop it, and arranges the system to that purpose. RV is used to create new arrangements, where the desired function is measured, with the maximum possible sensitivity, and artificial selection is implemented on the base of the measured function. Intelligent selection is very powerful and flexible (whatever Petruska may think). It can select for any measurable function, and develop it in relatively short times.
d2) NS is selection based only on the fitness/survival advantage of the replicator. The selected function is one and only one, and it cannot be any other. Moreover, the advantage (or disadvantage, in negative selection) must be big enough to result in true expansion of the mutated clone and in true fixation of the acquired variation. IOWs, NS is not flexible (it selects only for a very tiny subset of possible useful functions) and is not powerful at all (it cannot measure its target function if it is too weak).
Those are the differences. And believe me, they are big differences indeed.”
As everyone can see, these definitions contain all the logic of my detailed argument about Lizzie’s GA, where I show that it is simply an implementation of IS, and not a model of NS.
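To make the d1/d2 contrast concrete, here is a toy sketch (my own illustration of the definitions above, with arbitrary numbers): “IS” keeps a variant on any measurable improvement of the scored function, however tiny, while “NS” keeps it only if the gain crosses a reproduction-relevant threshold.

import random

def measure(variant):
    # Stand-in for "the measured function" -- here just a numeric score.
    return sum(variant)

def step(parent, threshold):
    # One round: generate a variant, then apply the selection rule.
    child = [x + random.gauss(0, 0.01) for x in parent]
    gain = measure(child) - measure(parent)
    if threshold == 0.0:
        return child if gain > 0 else parent        # "IS": any measurable gain is kept
    return child if gain > threshold else parent    # "NS": only a big-enough gain spreads

is_variant = ns_variant = [0.0] * 10
for _ in range(5000):
    is_variant = step(is_variant, threshold=0.0)
    ns_variant = step(ns_variant, threshold=0.5)

print("IS reaches:", round(measure(is_variant), 2))
print("NS reaches:", round(measure(ns_variant), 2))

With these toy settings the sensitive selector climbs steadily while the threshold-bound one barely moves, which is the asymmetry the two definitions are pointing at.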
Do you agree that a model must resemble the logical form and power of what it is modeling, to be valid?
I’m curious, how would you measure functional complexity in such an environment? Would it simply be the length in bits of the digital organisms? If an organism with sufficient functional complexity to meet your dFSCI threshold were to appear, would you consider it to have dFSCI or would the fact that it arose through evolutionary mechanisms, which might even be tracked mutation by mutation, mean that the dFSCI medal could never be earned?
It’s easy. I would proceed like Lenski. I would “freeze” (copy) the virus periodically to examine its code. If and when any functional string of code expresses a new function that helps the virus to reproduce, and therefore partially or totally replaces the simpler version, then it will be easy enough to evaluate the functional complexity of that new string of code, with the usual methods detailed at the beginning of your thread at TSZ.
To onlooker (at TSZ):
A distinction without a difference. The model shows that the mechanisms of the modern synthesis are quite capable of generating functional complexity in excess of that required by your dFSCI.
This is exactly the type of wrong statement that has prompted me to analyze in detail this issue. Have you read my post #910 in the old thread? Please, refer to it for any following discussion on this.
To Zachriel (at TSZ):
Some novel protein domains are available to completely random processes. However, the natural history is not well-documented.
What do you mean? To what are you referring here?
To Zachriel (at TSZ):
Keep in mind that your “don’t think” encompasses all evolutionary algorithms. Evolutionary algorithms, such as Word Mutagenation, can show you how and why recombination is such a powerful force for novelty.
Can you give us the code? Can we discuss the oracles in it?
To Allan Miller (at TSZ):
You keep linking us to that paper. I have already recognized that it is an interesting paper, and I have also given some brief comments. But, as you go on linking it as though it were the answer to all questions, I have to remind readers of what you already acknowledged from the start, but many may have missed. From the paper:
“As an initial step toward achieving this goal, we probed the ability of a collection of >10^6 de novo designed proteins to provide biological functions necessary to sustain cell growth. ”
With all its limits, that paper is a good demonstration of how powerful human top down design can be in protein engineering.
Petrushka, are you listening? 🙂
To Allan Miller (at TSZ):
I maintain that your reasoning is wrong.
Please consider that a system tests a limited number of new states through random variation. Let’s say it tests 10^9 new states in a certain time.
So, the probability of A and B both being present in the same tested state depends on the probability of that particular state versus all possible states that can be tested, and on the probabilistic resources of the system (in this case, 10^9 attempts). It does not depend on what kind of random variation we use to get to new states. Unless, as I said, you can show that some method of variation favours that particular state.
That does not seem to be the case for a specific set of two mutations where each mutation, on its own, does nothing specific and is completely non-functional.
Therefore, your reasoning is wrong.
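A rough numeric sketch of the point (my numbers are purely illustrative): if each specific point mutation has a per-attempt probability p, and the two must co-occur in the same tested state, then under the independence assumption stated above the chance over 10^9 attempts is about 1 - (1 - p^2)^(10^9); reshuffling which attempts carry which single mutation does not change this unless variation is biased toward that particular pair.

import math

p = 1e-8            # assumed per-attempt chance of each specific single mutation (illustrative only)
attempts = 10 ** 9  # the probabilistic resources of the system, as in the example above

p_both_in_one_state = p * p   # both specific changes present in the same tested state
p_over_all_attempts = 1 - math.exp(attempts * math.log1p(-p_both_in_one_state))

print("P(both in a single attempt): %.3g" % p_both_in_one_state)
print("P(in 10^9 attempts): %.3g" % p_over_all_attempts)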
To Allan Miller (at TSZ):
And domains can be deconstructed. Four amino acids will make a turn of a helix.
Everything can be deconstructed. What I said is that protein domains cannot be deconstructed into smaller, functional, naturally selectable elements. Can you please explain how a turn of a helix can give reproductive advantage?
Duplicate that ‘proto-domain’ a few times and you have an extended helix, 50, 100 bases long …
And why should RV duplicate that “proto-domain”, and not any other possible sequence of four amino acids? You are making here exactly the same logical error that I discussed in my previous post.
and the ID-er comes along and declares that the domain is irreducible complex – for, if you remove it from the modern protein, or even chop it back to 4 bases, it ceases to work!
Which is simply true.
To Allan Miller (at TSZ):
Note that I am asking GP what he considers ‘new’, not denying that anything in biology can ever be considered such.
And I have already answered: I consider new each new protein domain of a new protein superfamily or family emerging throughout natural history, with no sequence or structure similarity to what existed before. IOWs, for example, each of the 3464 domains listed in this paper (which indeed uses the family level).
There appears to be no significant mechanism to introduce new DNA sequence other than through template copying and fragment shifting,
And so? The problem is not how new DNA sequence is introduced, but the cause of the introduction: was it RV or design that caused the variation?
The paper was the usual one:
http://www.plosone.org/article.....ne.0008378
F/N: I have now put up the essay challenge as a full post. KF
Also natural selection is supposed to be blind and mindless. And that cannot be with directed mutations.
Allan Miller:
I know that Allan. And I also know that you cannot have natural selection without variation and it cannot be NATURAL selection if the mutations/ variations are directed.
LoL! The processes of natural selection are: variation, heredity and fecundity, meaning natural selection is the RESULT of those three processes, Allan. How long have YOU been discussing evolution and still don’t know that? It means, Allan, that if one of the inputs is NOT blind and mindless, then NS is NOT blind and mindless.
But thanks for proving that you cannot even connect the dots.
To the TSZ ilk-
GAs are a DESIGN mechanism, period. And NOTHING you can say will ever change that fact.
So please, keep hanging your position on a known design mechanism. It not only exposes your ignorance but also exposes your desperation.
Zachriel:
In what way is Lamarckian inheritance non-random? For example, if a man loses his arm in an accident, an acquired trait, that would be random.
Zachriel:
It is necessary to show that recombination is a non-telic process. And that is something that you cannot do.
Zachriel:
1- Fitness = reproductive success
2- Natural selection requires the fitness be due to heritable random variation(s)
In what way is Lamarckian inheritance non-random? For example, if a man loses his arm in an accident, an acquired trait, that would be random.
I never said it was. What I said is that a man losing an arm is an ACQUIRED trait. Also, I noticed that you avoided answering the question.
GAs are a DESIGN mechanism, period.
And the weather is the result of a designed planet and planetary orbits are the result of a designed universe.
Natural selection requires the fitness be due to heritable random variation(s)
Of what?
It would be subject to a result? What does that even mean? Yes it will now be subject to natural environmental pressures, but that does nothing to any claim I have made.
And I noticed that you still refuse to provide any references even though I requested that you do so.
Telic Thoughts has you pegged- you are an insipid troll.
Zachriel can’t seem to follow what he(?) sez from one day to the next-
Zachriel:
In what way is Lamarckian inheritance non-random? For example, if a man loses his arm in an accident, an acquired trait, that would be random.
Zachriel:
Right, it is an ACQUIRED trait, which both Lamarck and Darwin (pangenesis) thought was also heritable. YOU brought it up, remember?
Natural selection requires the fitness be due to heritable random variation(s)
Allan Miller:
Must use preview window BEFORE posting-
Natural selection requires the fitness be due to heritable random variation(s)
Allan Miller:
And it sure as hell doesn’t care about YOUR misrepresentations. Again I have provided references to support my claims and uses of the word random and I can provide more.
Who debates that? I have heard debates about whether or not the variation has to be genetic, allowing for behavioural traits that get passed down to be part of NS as they too aid in the survival and reproduction processes.
Joe,
Two separate things: variation and natural selection.
Until we learned to manipulate DNA, natural selection and directed selection (breeding) both worked on the basis of variation in the source population. Much of the variation is due to random mutations. Directed selection didn’t require directed variation, and natural selection doesn’t ‘require’ natural variation. Selection of any kind kicks in after variation of any kind is produced.
But selection and variation are separate processes.
Look up natural selection in a dictionary.
OK wait, I think I have found something– bear with me:
Mark Frank has a new post over on TSZ that pertains to Wm Dembski and Robert Marks.
So if we take Mark Frank and Robert Marks, subject them to crossover, we (can) get Mark Marks.
Are you still with me? Good
Mark Marks is the sound that a cleft lip dog makes.
A cleft lip is caused by a mutation. Mutations are one source of variation. Evolution requires variation.
Therefore the existence of Mark Marks, which crossover proves can exist, is evidence for evolution!
Perhaps this should be posted in kairosfocus’s challenge thread… 😉
Natural selection requires the fitness be due to heritable random variation
They are not that separate as you cannot have natural selection without the variation. That would make them rather connected.
BTW I provided definitions of natural selection, one from UC Berkeley and another from a college biology text.
see posts 70, 75 and 78, then get back to me. Thanks.
Joe,
I agree any kind of selection requires variation. Natural or artificial selection requires variation in the population to select from.
One of your definitions mentions heritable variation but it doesn’t say anything about the variation being random.
You kept saying the variation had to be random.
I agree that non-random variation would make the whole process directed. But, strictly speaking, selection is separate from the variation. Natural selection is selection by natural, non-directed processes/pressures/effects.
Comment 78:
Mayr, in “What Evolution Is” says teleology is not allowed. He also says:
Then there is the fact that NS was proposed as a designer mimic which means no directed variation as directed variation is what a designer uses.
And natural selection isn’t selection of any kind. It is wrongly named so that Darwin could try to fool people. NS is a result of 3 processes.
Joe,
I agree if the whole process is going to be undirected then no teleology is allowed. Clearly. Even Mayr says ‘almost exclusively a chance phenomena’, i.e. mostly random.
But the selection part is separate from the variation part.
Natural selection is the culling process imposed by the environment. It’s what ‘selects’ some individuals over others. I would have said environmental cull but we use what is traditional.
And I wouldn’t have called NS a design mimic. Are breeders design mimics? Maybe they are . . . I just wouldn’t have used the term.
Joe,
I can't imagine how it would arise, but you could have artificial variation, i.e., introduced by a designer, coupled with natural selection, undirected environmental culling.
In fact, I thought that was partly your view!
To Allan Miller (at TSZ):
You have not exactly answered my points. Instead, you add some strange considerations:
This is simply contradictory. Shuffling bits and pieces of protein is an adaptive mechanism because it increases the power of module shuffling, which is a disruptive mechanism and has limited power of evolutionary exploration? Make your mind up!
??? I have said nothing about recombination being disruptive, that was more your discourse. I have only said that it is difficult to evaluate its probabilistic powers, and that anyway it has probably no use for single protein domains, which cannot be deconstructed into selectable functional units. Apart from that, I have explicitly recognized the potential of recombination, and suggested that it could also be an adaptive mechanism. Where is the contradiction?
The bottom line point to bear in mind is that recombination (distinct from exon shuffling) is blind to gene expression. Totally. So it has nothing to ‘go on’ to establish what would be a legitimate swap and what would not. It is variable across genome length, for sure, for many reasons both ‘active’ and ‘passive’, but it is not attracted by regions that could do with a bit of a shake-up so much as repelled by those which would be better without.
OK. It is blind. To gene expression. But, if adaptive mechanisms can favour some recombinations that are more likely to produce some type of outcome, where is the problem?
Would you say that the genetic recombination that produces the basic antibody repertoire is completely blind to gene expression? I definitely would say the opposite. It is obviously a very adaptive mechanism, and a very complex one, already embedded in the genome.
There are many different kinds of recombination, and I don’t know how much benefit there is in lumping them all together as ‘adaptive’
I have never said that all recombinations are adaptive. Why do you believe that I said such a thing?
Recombination due to viruses, transposons, damage misrepair, ectopic misalignment in meiosis – these are no more obviously adaptive in themselves than point mutation.
I would definitely object for transposons.
But, nonetheless, all recombinations, whether adaptive or not, still promote much wider exploration of protein space than you started off allowing for
No. I allow for any possible exploration of protein space, but I try to evaluate its probability and credibility. My model, as said many times, is the emergence of basic protein domains, and I maintain that I can’t see how recombination would be helpful for that.
but this is not always a good thing.
Obviously.
Such exploration is not to the benefit of any individual organism, or most genes. It’s just something that happens, and organisms adapt if that-which-happens throws up a beneficial combination – one more source of the spectrum-of-variation on which NS works both positively and negatively.
In my language, that would simply be NS, not adaptation. Adaptation would imply some active help by some mechanism already embedded in the genome, which goes beyond simple RV, exploiting some active algorithmic information.
To Zachriel (at TSZ):
Random sequences can form active proteins (Keefe & Szostak, 2001).
Already been there. I don't know if you have ever read my long analysis of Szostak's paper, some time ago. He used RV + Intelligent selection by measurement of ATP binding to get to an essentially useless protein. But he never analyzed the original random sequences, which were selected for a mere very weak ability to bind ATP, and then intelligently engineered into the final protein. No good at all. Wrong premises, wrong conclusions, wrong methodology.
The origin of the original protein domains is still largely conjectural.
Indeed. You are very clever with words. What an elegant way of saying “We have no idea!” That’s one of the many reasons why I admire you.
By the way, if you want to find a needle in a haystack, try sitting on it.
🙂
The algorithm is very simple. The landscape is the dictionary of valid words.
Oh, yes. The algorithm is very simple. And it has a whole dictionary as an oracle! Simple indeed.
If they form a word, they enter the population.
And how does the algorithm know that a word was formed? Ah, I forgot! The dictionary.
If they do not form a word, they do not enter the population.
Why am I not surprised?
A couple of insights: It is possible to evolve long words much faster than random assembly. Recombination is essential to this process.
That’s fine with me. And I suppose that the dictionary is essential to appreciate the successes of recombination.
Believe me, I have nothing against your personal algorithms. They are elegant and brilliant, and I like them. In principle, they are not different from Dawkins’ Weasel, but what a difference in class!
To Zachriel (at TSZ):
Similarly, in protein-space, simple motifs are often repeated, and recombination between sequences that exhibit such motifs are much more likely to generate workable proteins.
What a pity that there is no dictionary there to select those not-naturally-selectable but much-more-likely-to-generate-workable-proteins motifs.
If you recombine workable protein sequences, you are much more likely to find a new workable protein sequence than random assembly alone.
That has nothing to do with my answer, which was dealing with Allan Miller’s discourse about two single neutral mutations. Two single neutral mutations are not, I believe, “workable protein sequences”.
See my post #90 for context, the paragraph before the one you quoted, which says:
“The real point is that, while your discourse about recombination can make some sense in the recombination of functional elements, it is of no importance in the case of individual mutations that have no function until they conflate in a more complex output.”
Zachriel, I have full esteem of your intelligence. At least from you, I would expect criticism for what I really say.
Natural selection is based on the reproductive fitness of the replicator. There can be many functions that accomplish this aim, so if longer legs provide an advantage, then it can be subject to natural selection.
True.
In the abstract, this is done with a fitness landscape,
Like a dictionary? That’s where all problems start.
but more detailed simulations are possible. As for the size of the advantage, that is also easily simulated.
I am ready to consider any algorithm that simulates NS in a credible way, both formally correct and probabilistically consistent with existing data. I am convinced that such an algorithm would be completely trivial, and would add nothing to what we already know.
Zachriel equivocates:
Yes, an Intelligently Designed evolutionary mechanism.
Jerad:
OK, you don't understand the definitions of natural selection I provided.
Joe (120)
Well, one of us is wrong.
Evolutionary theory’s main processes are random mutation and natural selection, agreed?
(There are other sources of variation and mutation covers a wide variety of events. Also there are other selection filters in operation but NS is the biggie.)
Both RM and NS are undirected. One is unpredictable, random. The other is more predictable and deterministic.
They do not influence each other . . . mostly. It may be that mutation rates are selected for but that’s not for sure yet.
Any kind of selection, ‘Natural’, ‘Artificial’, etc, needs a varied population to work with. (Otherwise, what is there to select?) The underlying causes of the varieties must be heritable or they won’t get ‘fixed’ in the population. But selection works no matter what the source of the variation is. It can’t ‘see’ the source of the variation. Selection can only cull from the varieties it’s presented with. It might ‘keep’ all varieties. It might kill them all. Depends on the environment at the time. The dinosaurs had a bad stroke of luck.
Let’s say a designer was tweaking the mutations in a population but otherwise leaving nature to get on with things. The environmental pressures would still be naturally selecting who lived to reproduce and who died. The designer was not affecting the selection process. That would still be natural selection with a feed of guided mutations.
Jerad:
No. For one natural selection INCLUDES random mutations:
Differential reproduction due to heritable random variation (mutation)= natural selection
So the main mechanisms would be natural selection (if you bundle those 3 processes together as one mechanism) and genetic drift with a little neutral theory sprinkled about.
Then there is sexual selection and cooperation to consider.
Yet Dennett said "there is no way to predict what will be selected for at any point in time." And whatever is good enough survives and has a chance at reproduction.
You want NS to be a nice tight bullet when in fact it is more like bird-shot from a sawed-off shotgun.
In that scenario NS would no longer be a designer mimic. Ya see the WHOLE PURPOSE of NS was that it is a designer mimic- design without a designer. Now you want it to be a designer helper of sorts.
As I said you are totally redefining the term to suit your needs.
Differential reproduction due to heritable random variation (mutation)= natural selection
Zachriel, still searching for a clue:
Only to equivocators, like yourself.
I never said nor implied that it was. The reason mutation was in () is because I was responding to Jerad who used the word mutation. I didn’t want to put random variation and have him come back with “I said random MUTATION”- I know how ya’ll operate. And here you are, full of your bloviations and confusions.
Yes, it can. But it doesn’t have to.
If there isn’t a source for novel variation, then how did the variation get into the population? Ya see if you start out with one genotype then the first time there is a mutation it would be a novel variation.
Nice job, cupcake
Today's Junk for Brains winner is Zachriel, who chides ID'ers for "assuming that evolutionary processes are no better than random assembly" while appealing to random assembly by a random process such as recombination.
And the comedy show at TSZ continues.
Mung:
keiths:
I said randomly generated genomes. I’d say the chance that Lizzie’s program generated 100 strings of all 0’s at random are about the same as her generating CSI.
zachriel:
IOW, I'm right. You know it. But you don't have the guts to tell keiths.
Could you slap some sense into keiths whilst the two of you go about getting on the same page?
Zachriel:
right. I’m the one that’s confused.
keiths:
Every chromosome generated by the GA is a potential solution. Else what is the point of generating them?
keiths:
And the light goes on. Maybe.
Allan Miller:
What would a string of length zero consist of?
How would you test the fitness of such a string?
What would crossover look like?
Allan Miller:
Instead of repeating me, why not just admit I’m right?
Strings and perhaps a few branes…
Natural Selection:
From http://www.biology-online.org/....._selection
From http://www.thefreedictionary.com/natural+selection
and
and
From http://dictionary.reference.co.....+selection
From http://www.merriam-webster.com.....0selection
and
and
From http://oxforddictionaries.com/.....Bselection
From http://www.answers.com/topic/natural-selection
and
From http://www.macroevolution.net/.....G0xxbTA7Sk
From Page 11 “Biology: Concepts and Applications” Starr fifth edition
From http://evolution.berkeley.edu/.....ndom.shtml
From http://www.answersingenesis.or.....-evolution
From http://creationwiki.org/Natural_selection
and
Joe
keiths:
That is false. Again, you demonstrate that you don’t understand what is being discussed. They would still be a potential solution. Just not a good solution. Just not an actual solution. A string of 500 0’s is still in the search space.
But I was reminded of a challenge I had issued. That challenge consisted in setting all strings to the same initial value, rather than having them randomly generated.
So please, have Lizzie initialize all her starting population of strings to all 0’s. By all means. Let’s see how well it performs then.
onlooker, choosing willful ignorance sez:
As has been explained to you, ad nauseum, heritable variation with differential reproductive success is the dFSCI that needs to be explained in the first place.
And it has yet to be demonstrated that it can generate any dFSCI wrt living organisms.
So you say yet cannot support. Not only that but real world darwinian evolution STARTS WITH the very dFSCI that requires an explanation in the first place.
Mike Elzinga:
No, we just don’t like people calling each member in a new generation an outcome of 500 coin tosses when it isn’t.
To Mike Elzinga-
We see the same misconceptions regarding evolutionists' confusion about genetic algorithms. They want to use a known design mechanism as an example of a blind watchmaker mechanism.
And Mike, you need to explain how a new generation can arise- IOW GAs start with the very thing that your position needs to explain in the first place.
keiths@TSZ:
lol.
Darwinian evolution does not need “the right fitness landscape” to work. (What would a “wrong” fitness landscape look like?)
Your problem, keiths (and apparently the problem of a few others over there at TSZ), is that you don’t know what a fitness landscape represents.
You think fitness landscapes lead to eyes, and elbows, and asses. And if they did, one would have to infer design was behind it. IDists have no reason to fear fitness landscapes.
Joe Felsenstein, is there some reason you don’t speak up? Explain fitness landscapes to poor keiths.
chromosome encoding (cont.)
From Goldberg's Genetic Algorithms in Search, Optimization, and Machine Learning:
Genetic algorithms are different from more normal optimization and search procedures in four ways:
1. GAs work with a coding of the parameter set, not the parameters themselves.
2. …
3. …
4. …
Genetic algorithms require the natural parameter set of the optimization problem to be coded as a finite length string over some finite alphabet.
Some of the codings introduced later will not be so obvious, but at this juncture we acknowledge that genetic algorithms use codings.
To use a genetic algorithm we must first code the decision variables of our problem as some finite length string.
http://www.softwarematters.org/ga-engine.html
HT: “MathGrrl”
LAWL
Well, I think I finally got a grasp on Lizzie’s problem over at TSZ. She thinks a description of a function is a description of an observed pattern.
Not sure how that works, but it really does appear that she believes that.
IOW, if I define a function that simulates 10 coin tosses and weight it such that the chance of a heads is greater than the chance of a tails, as might happen with a weighted coin, I can simply describe the function as “generates more heads than tails.”
So if I have a sequence of heads and tails, for example, ttHtHHHtHH, I can simply describe this as more heads than tails and claim I have CSI. Or some such malarky.
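A throwaway sketch of the kind of weighted function I mean (the 0.7 bias is arbitrary, my own choice for illustration):

# simulate 10 flips of a coin weighted toward heads
flips = 10.times.map { rand < 0.7 ? 'H' : 't' }.join
puts flips   # e.g. "ttHtHHHtHH" -- describable as "generates more heads than tails"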
“Well, I think I finally got a grasp on Lizzie’s problem over at TSZ. She thinks a description of a function is a description of an observed pattern.”
But aren’t these the same people arguing, against the recent ENCODE findings, that the vast majority of the genome is still junk because ENCODE’s definition of functionality was too loose??? By Lizzie’s even looser definition than ENCODE’s I guess we can now declare that neo-Darwinists hold the genome to be virtually 100% functional 🙂
To TSZ:
No, keiths, I wasn’t trying to change the subject, and your response gives fair evidence that I wasn’t.
Mung:
keiths: "You're Wrong!"
Well, no, I’m not. Your attempts at logical refutation are no substitute for the facts.
I have quoted numerous sources now, including posters right there at TSZ, that agree with what I wrote. Don’t blame me if they don’t think enough of you to set you straight.
Mung:
Zachriel:
So. Tell that to keiths.
Zachriel:
Which was:
Well, I think that’s a straw-man. That’s my response.
Of course, if you can point out where I’ve actually made such an assumption then it might not in fact be a straw-man. But until then…
I am referring to the need for an encoding. The nature of the landscape is not known until it’s known (but that’s another topic).
keiths disputes that there is any encoding involved.
Mung:
keiths: "You're Wrong!"
Zachriel:
Please explain it to keiths.
I’m saying things, you’re agreeing with me, and keiths continues to just plow ahead.
keiths:
Sheesh. You don't know that it's not an actual solution until you measure its "fitness." That's why it's a "potential" solution.
keiths:
What the heck did you think I meant? And where did I use the word "smuggled"?
Change the 0’s and 1’s in Lizzie’s “genomes” to T’s and H’s, change her mutation function to flip T’s and H’s instead of 0’s and 1’s and then see how well her fitness function works when it can’t find any contiguous 1’s to count or any ‘0’ to separate them.
So yeah, there’s information in the chromosome. It’s not smuggled in, it’s encoded. man oh man.
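To make the encoding point concrete, a quick sketch (the method name and the example strings are mine, purely for illustration; it just mimics a scan-based fitness that counts contiguous 1's):

# toy illustration: a run-counting fitness only 'sees' the symbols it was written for
def longest_run_of_ones(genome)
  genome.scan(/1+/).map(&:length).max || 0
end

longest_run_of_ones("0011101")   # => 3
longest_run_of_ones("HHTTTHT")   # => 0 -- same structure, different alphabet, no fitness signal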
Zachriel:
We’ll need to clarify what is meant by “landscape.” To me a fitness landscape isn’t something that is there waiting to be discovered (or climbed, ala Mount Improbable), it’s something that is created as populations evolve.
Now I think it may be true that there are heights that creationists and ID theorists think cannot be reached by a given population in a given scenario because they just can’t produce the required rate of reproduction.
But we may be talking past one another and it’s best to clarify terms.
Allan and Zachriel:
The way I understand Zachriel's argument, he is appealing to a bag or assortment of pre-existing components (aka protein domains) that can be used in proteins, and claiming that their availability for use somehow lends less of a random character to the process (making a functional protein more likely), even though the main proposed mechanism for this shuffling is recombination, itself a random process, and the protein domains themselves also arose largely as a result of a random process (perhaps "guided" by "natural selection").
Anyways, rather than attack a straw man I’ll pause here and give you all a chance to respond.
Personally I have no conflict with regular repeated processes going on inside living organisms because to me that smacks of teleology. 🙂
So if recombination helps organisms adapt, I am totally cool with that (as long as it’s an accurate description).
I was just trying to make sure you all weren’t insisting on talking about what can be constructed from the parts while gpuccio is trying to talk about how the parts were constructed.
Mung World
To my admirers at TSZ.
I tossed my program together in a short evening. I am actually rather pleased with it, I even managed to make it object-oriented (for the most part).
However I freely admit it is not an exact duplicate of Lizzie’s program just written in another language, I rather attempted to capture the “spirit” of what she built.
It’s a bit rough around some of the edges, but I would like suggestions on how it can be improved.
I call my digital organisms LiddleLizzards, in honor of Elizabeth.
Here’s my LiddleLizzard class. I think the first thing that can use improvement is the mutate method, it’s pretty rough. ;).
# mutates this chromosome
def mutate
chromosome[rand(500) - 1] = '1'
end
All I do here is set one position in the chromosome to a ‘1’. If it’s a zero it gets changed, if it’s a ‘1’ it’s like a neutral mutation. I don’t know what that cashes out to in terms of a mutation rate, if someone wants to tell me.
Some potential modifications (a rough sketch of the first two follows the list):
1. Set the chosen locus to either a zero or a one, that would not be too difficult to code.
2. Explicitly set the mutation rate.
3. Create a Mutation object that is passed in when the digital organism is created that encapsulates its mutation parameters.
4. Pass in the length of the string to generate rather than hard-coding it in a constant.
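Here is a rough sketch of what modifications 1 and 2 might look like together (just one way to do it; the 0.01 default rate is arbitrary and the signature is my own invention). It also avoids the hard-coded 500 by using the chromosome's own length:

# hypothetical revision of the mutate method (illustrative only)
def mutate(rate = 0.01)
  chromosome.length.times do |i|
    # each locus has an independent chance of mutating
    next unless rand < rate
    # set the chosen locus to either a '0' or a '1'
    chromosome[i] = ['0', '1'].sample
  end
end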
Honest evaluation, criticism, and suggestions for improvement are welcomed. You can leave comments at that link as well.
keiths@TSZ
It wasn't entirely clear what sort of landscape(s) he was talking about, so I decided to wait and find out. You, otoh, plow ahead unabated.
Are you saying you understand evolution and GAs, or are you hoping someone else is going to save you again?
Using my own program for that test would be a little silly, since its function is to maximize the number of contiguous '1's. Sorry to disappoint. (not really)
Allan Miller@TSZ:
Hi Allan,
Mung,
You would have better luck trying to sweep sand from a beach than to try to reason with the TSZ ilk. They really believe Lizzie created CSI using natural selection. And that means they are hopelessly ignorant of both CSI and natural selection.
And it is very telling that they cannot provide any real world examples of natural selection actually doing something, let alone producing CSI.
Lizzie’s program is not a real world example? And what about mine? Doesn’t it generate CSI?
A Question About CSI
I have a program that generates a random string of ASCII characters.
“y\x1EB.\x01UF\x1CLy)V(PHP\x04\rp=v~ i/pG\e_@\x0E\x06kf-FH\x00VBM]\ttLyu&eN\x1D>gA9*=o{cGV\x182ORh\x12uH56”
I can simply describe this string as “a program generated string.” Why doesn’t that demonstrate that I have “generated CSI”?
gpuccio, I hope you’re reading this:
http://www.evolutionnews.org/2.....65001.html
HT: BA77
Remember when the strong evidence for evolution was the vaunted “twin nested hierarchy”?
Mung @147:
Ah, yes, HGT. That convenient get-out-of-jail-free card when problems come up with the traditional evolutionary storyline. HGT is a real phenomenon, to be sure. Just doesn’t have the power that adherents claim it does.
HGT, “I [laugh] in your general direction.”*
* Apologies to Monty Python.
Allan Miller@TSZ
Hi Allan,
I apologize for the confusion. If it exists it’s my fault not yours.
I do believe that the string you are referring to in your post is the one here.
That is from a completely different program.
I first wrote a command line program that allows a user to start the program, passing in two values. One value to set how many binary characters should be in the string (the length) and the other value to set the number of strings to generate.
Having generated the strings the program then converts each one into an ASCII string based upon the underlying bit stream.
Here is the link to that program:
https://gist.github.com/3816082
It’s not a GA and has no fitness function. It’s a demonstration of how unlikely it is to get meaningful text. So if the program were to produce a string of 72 characters that match precisely the first 72 characters of this post, it would be reasonable to make a design inference.
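For anyone who doesn't want to chase the link, here is a minimal sketch of the same idea (this is not the gist itself; the argument handling and the bit-to-ASCII step are simplified):

# generate `count` random bit strings of `length` bits and show each one as ASCII
length = Integer(ARGV[0] || 72 * 8)   # bits per string
count  = Integer(ARGV[1] || 1)        # how many strings to generate

count.times do
  bits  = Array.new(length) { rand(2) }.join   # e.g. "010110..."
  ascii = [bits].pack('B*')                    # read the bit stream as 8-bit characters
  puts ascii.inspect
end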
I do appreciate your observations.
Corrected link:
Allan Miller@TSZ
But the real point of that exercise was to ask: is that CSI? If not, why not? What makes my string and its description any different from what Elizabeth has done?
oh look! Randomly generated string. That’s a pithy description of the pattern. Therefore, CSI.
I’m questioning her interpretation of what Dembski means by a specification.
1. There must be a pattern.
2. It must be simply describable.
3. The description must describe the pattern, not the function used to help produce the pattern.
yikes!
ok, i have to stop using that @ sign.
Zachriel: How’d you guess our PIN?
that made me laugh 🙂
Toronto:
Yes, I understand exactly what Elizabeth is trying to do. Where were you before she launched TSZ?
Well written code is as descriptive as English text.
If you can’t understand the code speak up, it means I have not communicated well.
lol. So I posted code and asked for comments, some of them are even funny.
JonF
I guess by atrocious JonF means it’s unreadable and doesn’t even perform the intended function. He doesn’t say why he thinks it’s atrocious. He certainly doesn’t say anything constructive.
I think it’s readable. Others there at TSZ seem to have no problem reading it. And it certainly did what I asked of it, so …
keiths:
Is this supposed to be a criticism?
My code is written in Ruby, hers is written in MATLAB.
My code is object-oriented, hers is barely even procedural.
I’ve only posted a single class.
Her code is atrocious if you ask me.
Any constructive criticism?
keiths:
So? You think the results will be different if I mutate to either 0 or 1? Will that prevent me from generating CSI, like Lizzie?
News Flash! Latching prevents algorithmic CSI generation!
I would think it encourages it, but what do I know. =P
keiths:
So? Will that prevent me from generating CSI, like Lizzie?
Do I have a single pre-specified target?
Why can’t fitness be judged according to “the longest sequence of contiguous 1’s”? You don’t say.
Why is Lizzie’s choice of fitness function better than mine? You don’t say.
And the evidence that you know what Lizzie’s code does is?
I just created a very clean (though admittedly minimalist) class in Ruby. It shows that I do know how to code. Maybe you just don’t understand object-oriented programming.
My background:
Basic
awk
C
C++
Java
Ruby
I’ve actually sold a program I developed.
Other programs I’ve developed have produced significant income for me.
Now I just give them away for free.
I just happen to love Ruby. So shoot me.
At least I’ll die happy!
🙂
For you real coders out there, try Ruby!
Fitness Landscapes
I’m sorry, but that question doesn’t even make sense to me.
64 bit encryption keys don’t exhibit differential reproduction, do they? In what sense is one 64 bit encryption key more fit than another 64 bit encryption key?
So do I perhaps have a point about context?
Mung:
The bolded indicates that any string of like length would do, so it is not specific. The default on the S remains at 0. Chi_500 = – 500.
Next prob.
KF
DrBot:
Why are you calling it a fitness landscape? Fitness is a measure of differential reproduction. So, context.
olegt, Thank your for your response.
If I were attempting to simulate biological evolution, I think you might have a valid point, though there seems to be some debate over there at TSZ on this point.
But the purpose of my “GA” is not to simulate biological evolution, but rather to simply illustrate how easy it is to generate CSI. Lizzie, imo, made her program unnecessarily complex. But she perhaps had a different goal in mind than I did.
If you think Lizzie’s program generates CSI, I would like to know why you think mine doesn’t.
Does the fact that my mutate function “latches” make a difference that makes a difference?
Assume that it also generates a ‘0’. At first, the chance that this makes a difference is 50/50. In a randomly generated sequence of 0’s and 1’s, trying to “replace” one position in that sequence with a 0 is as likely as not to have no effect.
So I suppose that the argument here is that my mutation function is not “non random wrt fitness.” Given my “fitness” function, I would grant that objection. If the goal is to simulate Darwinian processes, that would be relevant. But the goal is to generate CSI.
If that is in fact the objection, I would level the same objection against Lizzie’s program.
That said, I may yet modify my “GA” to incorporate your suggestion, with the proviso that the change is not relevant to the argument, but rather to remove it as a point of contention.
I did recognize this as a potential objection before it was raised.
Mung:
Let me again express my thanks for your post and your tone.
olegt on October 5, 2012 at 1:15 pm said:
ok, this is partially Ruby and partially a regular expression, which is not Ruby specific.
def fitness
Begins the definition of a method named "fitness" that may be called on this particular organism to determine its fitness. A message can be sent to any organism that can respond to this message asking it to provide its fitness.
If you understand object-oriented programming this should be clear. If not, just ask and I will try to explain and clarify my code. This is not intended as any sort of an insult. Good code is readable and understandable.
score = 0
The default fitness is 0. Some “objectors” at TSZ appeared to criticize this, they probably just didn’t understand the code.
chromosome.scan(/1*/).each do |str|
score = str.length if str.length > score
end
This is probably what you were asking about. chromosome is a String. The scan method iterates over the string looking for patterns (this is where regular expressions come in).
We are looking for patterns in the “chromosome.”
Each pattern in the chromosome that matches the regular expression (a run of consecutive 1s; note that /1*/ also allows zero-length matches, which simply leave the score unchanged) is assigned to the variable str. (chromosome.scan(/1*/).each do |str|)
So then we are interested in how many contiguous “1”s are in the string that matched the pattern.
score = str.length if str.length > score
If we find a pattern of 1s that has more 1s than the previous number of ones we assign that value to the score.
So "fitness" is decided based upon the number of contiguous 1's that are found in the sequence.
If you have questions about this, please ask. I will try to answer/explain.
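Putting those pieces back together, the whole method reads as follows (the same code as above, gathered in one place; I have added the trailing score and the closing end that the excerpt omits):

# fitness = length of the longest run of contiguous '1's in the chromosome
def fitness
  score = 0
  chromosome.scan(/1*/).each do |str|
    # keep the length of the longest run seen so far
    score = str.length if str.length > score
  end
  score
end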
But of greater interest to me is, why does my GA not model the same thing as Lizzie’s GA? Did I not generate CSI? Why not?
So, posters at TSZ question whether I understand Lizzie’s “GA.’
I question whether they understand Lizzie’s “GA.”
Mung,
Perhaps if your string won a prize it would be CSI. The prize would be a target and some specified string would be the winner. And if that string is 500 bits or more, then it is CSI because it specified a winning combination.
Zachriel on October 5, 2012 at 1:17 pm said:
ok, good, we are on the same page.
keiths is an idiot.
Joe Felsenstein abdicates his position.
Oh, so a fitness landscape is a representation?
It represents relative reproductive fitness?
Is protein function synonymous with reproductive rate?
If not, you’re spouting Bee period Ess.
olegt:
So?
I never asserted that my fitness function matched Lizzy’s fitness function in every detail. What’s wrong with my fitness function?
Why does Lizzy’s fitness function lead to the generation of CSI while mine does not?
Lizzie claims her fitness function maximizes something. So does mine .
olegt:
Your program has not run yet. It did not produce digital organisms from a suitably small target space.
My program did run.
Your claim that my program did not produce digital organisms from a suitably small target space is not consistent with your claim that my program did not run.
Your claim that my program did not produce digital organisms from a suitably small target space is simply false.
Because ignorance is no excuse.
Toronto:
Object oriented code was being written before the concept of objects even existed. What a moron.
Toronto:
No explanation of why “pseudo code” is more understandable is offered.
No explanation of why “pseudo code” is any different from “runnable” code is offered.
keiths:
ok, so who at TSZ has had the balls to disagree with the garbage you’ve asserted?
keiths:
Demonstrating, for all who care to see, that you are a complete moron.
Toronto:
It's even better to simply say in English what you want to do before coding and get feedback before you commit to anything.
Working code trumps whatever fantasy world you’re in.
keiths:
So?
Toronto at TSZ claims to understand:
olegt:
So?
My fitness function does not generate CSI and hers does?
Toronto:
Let me guess; you generate a “specification” for your code after the “functionality” has been observed.
So?
Mung:
If it makes you feel any better Lizzie’s did not lead to the generation of CSI and Lizzie still hasn’t demonstrated any understanding of what CSI is. For that matter no one over on TSZ appears to understand what CSI is.
keiths sez:
No, keiths, that is NOT a SOURCE of information. It is a source for changing existing information.
For the record- ID does NOT state that random effects never happen in a designed universe. ID does NOT state that random mutations never occur.
But please keep humping your strawman arguments. It is entertaining.
Add PIN to the list of things Zachriel doesn’t understand.
olegt:
Lizzie wrote a program that allegedly simulates natural selection producing CSI. Unfortunately for Lizzie it does neither. But that won’t prevent you from continuing to claim otherwise. And we wouldn’t expect anything less…
keiths:
Well if it takes specified complexity to get replication, and it does, then if you start out without any specified complexity then you can’t even get started.
And that is why Lizzie fails to produce CSI- she smuggles in specified complexity by just granting reproduction with variation.
Joe, I think perhaps you are correct.
I didn’t try to make a game of it.
To observers @ TSZ:
There seems to be some misunderstanding about my program. Its fundamental purpose is to show how easy it is to generate CSI using a simple algorithm. Isn't that what Lizzie's program is designed to show as well?
IMO, Lizzie’s program is needlessly complex and takes too long to achieve the desired result.
The fundamental question that needs to be asked and answered is, why does her program generate CSI while mine does not?
I’ll be incorporating some of the suggestions I’ve seen there at TSZ, but first I want to code up a way to track historical information so I can observe the effects of changes made to the program.
I hope to work on that today.
I’ll also post a link to my LizzardPopulation code.
KeithS proves that he doesn’t know what CSI is:
But he does prove my point in comment 161.
What is it with monkeys and shiny prizes? 😛
According to Dembski, the most up-to-date definition of CSI is:
Χ = –log2[10^120 * φ_S(T) * P(T|H)]
Have you calculated Χ and determined that it’s less than 1? How did you choose H, and how did you determine φ_S(T)?
I can go through the calculation given one way of interpreting Dembski, and it comes out to easily have CSI. But I’d like to see your calculation first.
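For reference, here is what one such calculation might look like in Ruby for a 500-bit string, under a uniform chance hypothesis H and taking φ_S(T) = 1 purely for illustration (both of those are assumptions made here to keep the arithmetic concrete, not anything R0bb or Dembski specified):

# chi = -log2[ 10^120 * phi_S(T) * P(T|H) ]
phi_s = 1.0            # assumed number of equally simple patterns (illustration only)
p_t_h = 2.0 ** -500    # P(T|H) for one specific 500-bit string under a fair-coin H
chi   = -Math.log2(1e120 * phi_s * p_t_h)
puts chi               # => roughly 101 bits, i.e. above Dembski's threshold of 1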
R0bb,
In order to qualify as CSI it cannot be algorithmically compressible. And a sequence of all 1s is algorithmically compressible.
As I said you guys don’t understand anything ID.
Dembski says the opposite. According to his definition of CSI, the more compressible it is, the more CSI it has. Which of Dembski’s works have you read?
Since Dembski disagrees with you, apparently he’s one of the guys who don’t understand anything ID.
Robb,
Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI. Even Stephen C. Meyer says CSI is not amenable to compression.
A protein sequence is not compressible- CSI.
So please reference Dembski and I will find Meyer’s quote.
Mark Frank is confused. Above I was discussing COMPLEX SPECIFIED INFORMATION and he links to a paper about SPECIFICATION ONLY, in an attempt to refute what I said.
SPECIFICATION is the S in CSI, Mark.
If anyone wants to read a confused rant-
Things That IDers Don’t Understand, Part 1 — Intelligent Design is not compatible with the evidence for common descent
Will evo nonsense never cease?
To Mark Frank- once again I refer you to my examples in 186- none of which are compressible and all of which exhibit CSI.
How do YOU deal with those facts, Mark? By ignoring them, as usual…
To Mark Frank-
The "on/off" of a pulsar is specified. However it is NOT complex. Therefore we do not infer design.
The pattern of a snowflake is also specified. However it too lacks complexity and therefore we do not infer design.
Page 164 of “The Design of Life”:
R0bb,
I examined Elizabeth’s program closely and I don’t see where she even attempts to calculate CSI in it. So how do you suppose she knows she generated CSI?
If you can help me turn that into code I’d be more than happy to include it in my program. Maybe we can all learn something.
Wouldn’t that make a simple marvelous fitness function? The more CSI in a string the more fit it is. Or, the more specified it is, the more fit it is.
Does “less than one” indicate “less” CSI? How much less?
The it to which you are referring is the pattern T?
Given a 500 bit string of 0’s and 1’s and the chance hypothesis the probability of T|H is 1 in 2^500?
What do you make of Dembski’s absolute specificity?
–log2 P(T|H)
Now in my (initial) program, the pattern that all my “winning” strings had in common was a minimum of 450 contiguous 1’s. That’s certainly a restricted subset of all possible when it comes to a 500 character string.
But the actual description of a specific pattern is going to basically boil down to the same algorithm, isn’t it?
n.times { print '1' }
So in what sense is any one of them more or less compressible? Do they all then have the exact same CSI?
regards
To Mark Frank-
The most likely explanation is that YOUR interpretation of Dembski is wrong.
And Mark proves he is clueless:
To Mark Frank- once again I refer you to my examples in 186- none of which are compressible and all of which exhibit CSI.
How do YOU deal with those facts, Mark? By ignoring them, as usual…
Mark, you quoted my comment 186 in your OP. Are you really that slow?
To Mark Frank-
Yes I have read the paper. Nice of YOU to ignore everything I have said.
Do you really think ignoring what I say refutes it?
Pathetic…
Mark Frank:
SPECIFICATION IS NOT CSI. Specification is only one part of CSI- i.e., the S.
As I said encyclopedias are CSI, Mark. And guess what? Not compressible.
Spoken like a true dolt. You are confused, Mark, as I have never defined "specified" that way. What I do understand is that there is a HUGE difference between specified and CSI.
And Allan Miller chimes in:
Nope, completely random sequences do not have CSI. As I said you guys ignore what I write and make stuff up.
Losers…
Yeah, That’s a bad case of misrepresentation.
The text of Shakespeare is not random. But it is specified.
And I see Allan misrepresent me as well.
High CSI? What’s that? Low CSI? What’s that?
How much CSI makes for high CSI and how little CSI makes for low CSI? Lizzie makes the same mistake. No wonder it just gets repeated over there.
Joe:
You might want to inform Dembski that his paper is about specification only. He thinks it’s about CSI:
The design inference is CONTEXT specific. For example, 500 1s- if that occurred by someone rolling a die 500 times, recording the result of each roll, then yes I would infer specified complexity existed and therefore design.
CSI is a special case of specified complexity, which would mean all CSI is SC but not all SC = CSI. (if not I am sure kairosfocus, mung, PaV, gpuccio will correct me) and both indicate design
R0bb,
1- the paper does not exist in isolation
2- the ENTIRE paper, not just the part mark/ you can quote mine
Now I have given examples of CSI that obviously counter what you think Dembski is saying. That should tell you something but obviously you no speaky the language…
Joe:
I’m aware of Meyer’s position, and thank you for sharing CJYman’s take, although I’m not sure why I should put stock in it.
As for a reference, why would you need one when you have already read Dembski’s work? For example, you say that you’ve read the “Specification” paper, so you can easily answer the following questions:
1) Is specified complexity directly or inversely related to φ_S(T)?
2) Is φ_S(T) directly or inversely related to compressibility?
Or you might want to reread the section Specifications via Compressibility.
Or you can think back to Dembski’s poster child of specified complexity in both The Design Inference and No Free Lunch, namely the Caputo sequence. Is it compressible or not?
We’ve been over this before, Joe.
R0bb,
Perhaps you missed my comment:
The design inference is CONTEXT specific. For example, 500 1s- if that occurred by someone rolling a die 500 times, recording the result of each roll, then yes I would infer specified complexity existed and therefore design.
CSI is a special case of specified complexity, which would mean all CSI is SC but not all SC = CSI. (if not I am sure kairosfocus, mung, PaV, gpuccio will correct me) and both indicate design
That takes care of the caputo sequence…
Now I have given examples of CSI that obviously counter what you think Dembski is saying. That should tell you something but obviously you no speaky the language…
Another prediction fulfilled…
Mark’s confusion continues:
I never was, Mark. I have been specifically talking about CSI, not just specification.
I never said, thought nor implied specification was not compressible. IOW you really need to seek help…
R0bb:
When it comes to Intelligent Design cjyman forgot more than you know- and he doesn’t forget. 😛
It looks like Mark Frank has given up on trying to force his misconceptions unto us:
No Mark, you are just a fool…
And another clown chimes in:
Yeah, Mark messed up. He is conflating mere specification with CSI.
Nope, only evos think that way, and here we have Flint.
Joe was talking about CSI, not mere specification, wrt compressibility.
One moron lights the torch and another jumps in to take it from there.
Joe:
Where did you get that idea?
Here’s Meyer, in his famous paper:
You have even quoted the above yourself.
Here is Dembski and Wells’ definition of “complex specified information” in the glossary of The Design of Life:
So, again, where did you get the idea that the terms are not synonymous?
R0bb:
I checked a thesaurus. 😉
R0bb,
CSI and SC are different manifestations of the same thing.
And if you read what I said I never said they were not synonymous…
Mung: “And I see Allan misrepresent me as well.”
And on it goes:
Really? How much CSI is “high” CSI?
More CSI and less CSI are the sorts of things you folks come up with:
Allan:
And Lizzie claims to have generated CSI. Where were you then?
I don’t believe I said a string of ‘1’s has “high” CSI. I’m not sure I said a string of 1’s has any CSI. I want to know why, if her strings have CSI, mine don’t.
Allan:
My, that’s even more simply describable than the one I came up with! So yeah, I guess any string of sufficient complexity has CSI.
In case you jokers haven’t caught on yet, I am mocking Lizzie’s effort to generate CSI and the non-critical acceptance of such by her fan club over there at TSZ.
She doesn’t calculate the CSI for any of her strings. She doesn’t explain which ones have more or less CSI or why.
At least I asked R0bb if he was able to assist me in incorporating such a measure into my program. That may be more than Lizzie ever attempted to do.
http://www.uncommondescent.com.....ent-435866
More Flint:
Computer programs, encyclopedia articles, text books, assembly instructions, genomes- all real world referents of CSI
And Zachriel continues to amuse:
You have to see if the quantity is there to qualify as CSI. Once you pass the threshold the rest is irrelevant to the design inference.
Actually, English text is highly compressible as shown empirically, and even more compressible in theory.
If a compression engine is optimized for N-bit English text, it can theoretically achieve a compression ratio of 1 – (logV)/N, where V is the number of N-bit sequences that are valid English text. For non-small N, the vast majority of N-bit sequences are not valid English, which means that (logV)/N is very small and the compression ratio is very high.
The same principle applies to any kind of specification. Whatever various ID proponents understand by the term “specification”, I think we can all agree that the number of “specified” outcomes must be very small in comparison to whole sample space in order for a specified outcome to be considered special and evidence of design. This means that a compression engine that is optimized for specified outcomes can achieve very high compression ratios.
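To make that formula concrete, a toy worked example (the 500-bit length and the assumed count of "specified" strings are purely illustrative choices, not figures from the discussion above):

# theoretical compression ratio 1 - log2(V)/N for N-bit strings
# when only V of the 2^N possible strings count as 'specified'
n = 500          # string length in bits (illustrative)
v = 2.0 ** 10    # assume only about a thousand strings are specified (illustrative)
ratio = 1 - Math.log2(v) / n
puts ratio       # => 0.98, i.e. 98% compression in principle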
Joe:
We can't quote the entire paper, much less all of Dembski's work. If you think that either of us is guilty of quote-mining, please show us the quote mine and show us something in the context that contradicts our interpretation.
I am saying that compressible sequences can be CSI. To counter that, you would have to show that compressible sequences cannot be CSI. Where have you done that?
Joe:
You said that “all CSI is SC but not all SC = CSI” and “CSI and SC are different manifestations of the same thing.” These indicate that the terms are not synonymous. Agreed?
Joe, a summary of your position on various items:
– This paper is about specification (the S part of CSI) only. Never mind that it talks extensively about complexity (the C part of CSI). And never mind that Dembski says that this paper is about CSI.
– Compressible sequences do not qualify as CSI. Rolling 500 1s in a row qualifies as specified complexity, but not CSI. But that is not to be construed as saying that “specified complexity” and “CSI” are not synonymous.
– The works of Shakespeare and encyclopedias are incompressible. Never mind the fact that they are compressed often, and English text is known to be highly compressible.
Do you disagree with any of that?
Try to compress the works of Shakespeare- CSI. Try to compress any encyclopedia- CSI.
Then why didn’t you do as requested?
SC- and it all depends on the CONTEXT just as I said.
Disagree.
What is the information in a string of ones? What does it tell me?
ALGORITHMICALLY compressible- and I still notice you haven't done so, you've just said it.
You cannot quote the part that refers to “specification” and have it apply to CSI/ SC. It’s that simple.
Zacho:
Nope.
"If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information." –CJYman
CSI does.
If the presence of specified complexity in an object means it was designed, and the presence of complex specified information means it was designed, that would mean they mean the same thing, ie they are synonymous.
So close!
keiths:
An SI value below a certain threshold is not CSI.
To me SC is for objects- like Behe's mousetrap- several components that come together in such a way as to convey some function that is separate from the components themselves.
And CSI would be for something like the message in “Contact”
If an object exhibits specified complexity then it is also a given that it took CSI to create it, meaning it contains that CSI.
Encyclopedias exhibit complex specified information, which means they also have specified complexity.
Dembski:
Well Joe, I’m wondering if some of the folks over at TSZ aren’t just as skilled at word games as we are. 😉
It seems to me the point of compressibility is the same point as it’s always been with Dembski and here at UD.
Dembski:
Patterns of small probability. Algorithmically compressible sequences are just one example of such a pattern.
Do you get the sense that the folks at TSZ think algorithmically compressible sequences are the only small probability sequences?
Note he never says therefore "design". That is because law/ regularity/ necessity can also produce algorithmically compressible sequences.
His point is chance cannot produce algorithmically compressible sequences.
It still all depends on the context. If you have an algorithmically compressible sequence, you have ruled out chance-> DEFAULT chance is out.
Earth to Zachriel- when cjyman said:
"If it is Shannon information, not algorithmically compressible, and can be processed by an information processor into a separate functional system, then it is complex specified information," it does not mean that every instance of CSI has to be like that. He is saying that if that is what you have, then you have CSI.
Is English not your first language?
Moar Dembski:
Not limited to algorithmically compressible sequences.
Are we and TSZ just talking past one another. Is there really some point of fundamental disagreement here that I am just not grasping?
I see two possible ways to interpret Dembski here.
1. These patterns are specifications. In which case, we need specification plus something else.
2. These patterns do not yet qualify as a specification.
WmD.
Perhaps there’s another interpretation that makes sense. Maybe that’s the one that Frank et al. are working from. I guess I’ll shut up now and see what they have to say.
Zachriel:
🙂
If you’ve known me for any length of time, you know that I don’t have any problem disagreeing with other people here at UD. Heck, I even disagreed with Meyer. If I don’t like what Dembski has written I’ll disagree with him, lol.
Am I correctly interpreting Dembski?
If a sequence is algorithmically compressible that does not auto-magically make it a specification.
If Joe says he disagrees with me, so be it. People learn through disagreement. I don’t see it as a horrible bad thing.
p.s. mine's shorter. though I'm not sure I should be bragging about that on the internet.
Zachriel,
I offered what cjyman said about CSI to support my claim pertaining to CSI being not algorithmically compressible.
Patrick, aka MathGrrl, whines:
More revisionist history.
“MathGrrl” appealed to ev. Tom Schneider, creator of ev, claims to have used it to generate CSI. Patrick had nothing to say about that.
Elizabeth Liddle posted that she was writing a program to generate CSI. Did Patrick ask her what definition she was using and an example calculation?
Now if Tom Schneider and Elizabeth Liddle understood the definition of CSI and how to calculate it well enough to write programs to demonstrate it could be generated, “MathGrrl’s” complaints ring hollow and Patrick is just whining.
Zachriel:
What definition was Lizzie using, and where were you in that thread?
http://theskepticalzone.com/wp/?p=576
Mung at 234,
You have beat them over the head so many times with their own use of the words and concepts they claim not to understand, one would think they’d eventually become embarrassed about saying they don’t understand them.
But then again, it's Patrick, so there's an explanation in itself. He has a family to protect from those lying Christians. Demonstrating pseudo-intellectual irrationality is hardly too much to ask in comparison.
Thanks mate!
petrushka on October 9, 2012 at 3:36 pm said:
Do probability calculations enter into the determination of “Shannon information”?
IOW, to calculate the “amount of information” in Shannon terms, what must be either known or assumed?
Mike Elzinga:
wow. just wow. Assume I have access to the formula.
Shannon’s paper is, after all, available online. And “the formula” has been reprinted and discussed in many books since then.
Do you think Shannon information can just be read off any old sequence? How much “Shannon information” is in the following sequence: 00101
In order to calculate the “amount of information” in that sequence in Shannon terms, what did you either know or assume?
So tell us, Mike,
If someone tells you that in a sequence of 500 0’s and 1’s there is 500 bits of Shannon Information, would you believe them, and why?
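To make the dependence on the assumed source explicit, a short sketch: the same five symbols carry different amounts of Shannon information depending on the probability model you assume (the 0.9/0.1 biased model below is just an example picked for illustration):

# Shannon self-information of a sequence given an assumed symbol distribution
def bits(sequence, probabilities)
  sequence.chars.sum { |symbol| -Math.log2(probabilities[symbol]) }
end

seq = "00101"
puts bits(seq, '0' => 0.5, '1' => 0.5)   # fair-coin model: 5.0 bits
puts bits(seq, '0' => 0.9, '1' => 0.1)   # biased model: about 7.1 bits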
petrushka?
Allan Miller on March 16, 2012 at 4:44 pm said:
Ido:
Allan:
More Intelligent Design please! Fine Tuning anyone?
Upright BiPed,
You’ll love this.
Patrick in the Creating CSI with NS thread at TSZ:
http://theskepticalzone.com/wp.....mment-8289
http://theskepticalzone.com/wp.....mment-8290
I need to go back and review, lol. No telling what I’ll find. I sure hope he’s not trying to generate CSI. =P
And then there’s R0b:
http://theskepticalzone.com/wp.....mment-8363
Sorry, got to run and see if his code is still there before he can delete it. I want to see that CSI calculation!
Joe:
Prior to the rolls, you didn’t know what the result would be. After the rolls, you knew that the result was a sequence of 500 1s. That’s the information you learned. Your uncertainty was reduced by about 1300 bits.
Yes, algorithmic compression. What kind of compression did you think I was talking about? Hydraulic?
Are you seriously doubting the compressibility of Shakespeare and encyclopedias? Okay, I downloaded the works of Shakespeare and an encyclopedia, and I compressed them with PAQ on level 8. I got 80% compression for Shakespeare and 79% for the encyclopedia. You’re welcome to reproduce these results.
R0bb:
I would just clarify my views about compressibility, that I have already expressed in the thread.
First of all, I am aware that Dembski considers compressibility as a form of specification. He may be right, but very simply I have never considered it as a form of functional specification in my discussions about biology. In particular, compressibility is not a function we can observe in any special way in the biological world. The functional specification for proteins and other biological molecules derives from what they can do, not from the fact that they can be compressed (indeed, biological molecules are not specially compressible at all).
So, maybe compressibility can be considered as a form of specification, but that is not relevant for biological discussions.
But there is another aspect of compressibility that is of relevance to any discussion about CSI or dFSCI. If the observed string is compressible, we must always consider the possibility that it came into existence in the system we are considering in an indirect way. IOWs, we have two “chance” explanations to consider:
1) The string was generated by RV directly
2) A simpler system was generated by RV directly, and then generated the observed string by a necessity mechanism.
The second scenario is the one where an algorithm that can compute the solution is generated by RV. I have discussed that scenario in detail about Lizzie’s algorithm.
As the second scenario would still be a generation of the solution by RV, even if indirectly, it must be considered, and its complexity evaluated. But, in the second scenario, the complexity to be evaluated is the complexity of the algorithm (in the case of Lizzie's example, the complexity of the simplest executable string that can output the solution). If the complexity of the algorithm is lower than the complexity of the observed string, that will be the dFSI of the string. Otherwise, the dFSI of the string remains the complexity of the string itself.
So, if you have a string of 500 1s, its direct complexity is 500 bits; its indirect complexity will be the complexity of the simplest executable program that, in the system we are considering, will output 500 1s. If you can write an executable string that does that, and is less than 500 bits, the complexity of that string becomes the dFSI of the original string, because the new string is a compression of the original string, and still can generate it in the system.
So, if we want to apply that to the works of Shakespeare, you can reduce the functional complexity of the original observed string (the works of S themselves) by calculating the total complexity of:
a) The compressed string that you obtained
+
b) The software that can expand it into the original observed string.
In the end, I believe we can safely affirm dFSCI for the works of Shakespeare anyway.
But, if you can find a way to generate Hamlet in a system through a functional complexity of less than 500 bits, please let me know.
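As a concrete illustration of the "simplest executable string" idea above, a toy example (Ruby is assumed as the execution system, and characters are counted at 8 bits each, which of course ignores the size of the interpreter itself):

program = "puts '1' * 500"    # an executable string that outputs 500 1s
puts program.length * 8       # => 112 bits of program text
puts 500                      # direct complexity of the output string, in bits
# 112 < 500, so on this reasoning the indirect complexity would bound the dFSI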
What is the information in a string of ones? What does it tell me?
So 500 1s alone do NOT tell me anything- I need to have information about the entire process. Got it.
BTW can I see those alleged algorithms?
R0bb:
A few more comments, just to make it more clear:
a) Compressibility as specification.
Indeed, compressibility can be used to specify, just like any other property.
Specification is not a narrow concept. Anything that objectively qualifies a subset of a search space is a specification. So, if my search space is made of 1000 objects, and 10 of them are red, being red is a psecification that objectively qualifies a subset of the search space. The complexity of the specified subset will be, as usual, 10/1000, that is 10^-2.
Highly compressible strings are, as Dembski says, a small subset of all possible strings. By defining the length of a string, and the degree of compressibility, we can probably calculate the maximum specific complexity of some specific subset of compressible strings.
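For those who prefer the figure in bits (assuming the usual conversion of a probability ratio to bits via -log2, the same convention that equates 1 in 10^150 with 500 bits), a one-line Ruby sketch:
# Ruby: the "10 red objects out of 1000" example, expressed in bits.
ratio = 10.0 / 1000         # the specified subset relative to the search space
-Math.log2(ratio)           # => ~6.64 bits of specified complexity for "being red"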
b) Compressibility as a possible result of necessity mechanisms.
Now, let's say that an observed string is specified, for instance because it is functional, or even because it is compressible. As I have stated many times, it is not enough to compute the maximum functional information in that string (the ratio of the target space to the search space). We also have to consider whether any known necessity mechanism can explain what we observe, completely or in part.
Now, in many cases, it will be clear that no known mechanism is available. That is rather obvious for most complex human dFSI, such as Hamlet or a complex piece of software.
But highly compressible strings are different: they can often be generated by necessity mechanisms, so we should be extremely cautious when evaluating the dFSI of such a string.
For example, 500 heads looks like a specified (because highly compressible) string, and its maximum complexity is 500 bits. But such a string can easily be generated by the tossing of an unfair coin that can only give heads as a result. That would be a necessity mechanism that can completely explain the string. If such a mechanism is possible in the system we are considering, then the dFSI of the string becomes zero.
Another example. A DNA sequence of 500 thymidines could appear specified (because highly compressible). But it can easily be generated in a system where only thymidine, and no other nucleotide, is available.
That’s why compressibility, while being a possible way to specify a subset, should be considered with extreme caution when we try to evaluate dFSI for that subset. Compressibility is often a sign of a simple necessity explanation.
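A minimal Ruby sketch of the unfair-coin point (the "mechanism" is deterministic by construction, so the highly compressible result warrants no design inference):
# Ruby: a necessity mechanism that trivially produces the "500 heads" string.
unfair_coin = -> { "H" }                       # a coin that can only come up heads
tosses = Array.new(500) { unfair_coin.call }.join
tosses == "H" * 500                            # => true, on every single run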
Zachriel:
Please reference Dembski stating that, because I have provided CSI that is not that.
gpuccio: As I have stated many times, it is not enough to compute the maximum functional information in that string (the ratio of the target space to the search space). We also have to consider whether any known necessity mechanism can explain what we observe, completely or in part.
Zachriel:
Nope, that does NOT follow from what he said.
Allan Miller:
If the translated strings do not function then I would say they do not have dFSCI- no "F". And we see that with genetic engineering- some, or even most, times the transplanted gene gets translated but the protein does not form. And all you have is an unfolded, functionless polypeptide.
Joe:
Yes, you said:
But then you said:
So you’re denying that CJYman’s criteria, including incompressibility, are requirements for something to be CSI.
So which is it? Is incompressibility a requirement for something to be CSI, or is it not?
gpuccio!
welcome back.
Information Source
Information source (mathematics)
To Allan Miller (at TSZ):
He makes a good point about the relevance of compressibility to biological strings, though then rather blatantly ‘smuggles in’ a function relating to the existence of the transcription/translation system.
Are you referring to me here? Where did I “smuggle in” that?
So there can never be translated strings that do not have dFSCI, whatever they contain, since they go through the ribosome/mRNA/tRNA/aaRS system!
Just to be clear: if we are considering a System where the transcription and translation apparatus already exist (that is, if our scenario is the emergence of new proteins after OOL and LUCA), then we will not consider the complexity of those things (they are already part of the System, and they are available). We will only analyze the functional complexity of the new protein, given the transcription and translation apparatus, and all other functionalities already present in the cells where the new protein originates.
But, if we are debating OOL, then the whole complexity of the minimal known reproducing beings should be taken into consideration.
As I have tried to explain many times (apparently without great success) computations of dFSCI are never made abstractly, they are always made with explicit reference to a System, a Time Span, and so on (see my detailed discussion in my previous thread, entirely pasted at TSZ).
To Shallit (at TSZ):
Yes, it's clear that Dembski and most ID advocates are quite confused about the relationship between Kolmogorov complexity and the bogus concept of CSI. In my paper with Elsberry we point out that Dembski associates CSI with low Kolmogorov complexity (highly compressible strings). But strings with low Kolmogorov complexity are precisely those that are "easy" to produce with simple algorithmic procedures (in other words, likely to occur from some simple natural algorithm).
You are absolutely right on this point.
By contrast, organismal DNA (for example) doesn’t seem that compressible; experiments show long strings of organismal DNA are often poorly compressible, say only by about 10% or so. This is, in fact, good evidence that organismal DNA arose through a largely random process.
Right again. That’s exactly what I have tried to say here.
R0bb,
Can you provide the alleged compression algorithm or not? What was compressed, exactly?
To Keiths (at TSZ):
It’s even worse than that. An object can conform to multiple specifications, in which case it simultaneously possesses multiple CSI values, all equally valid:
That is perfectly true, but there is nothing bad about it. I stated that point very clearly in my definition of dFSCI: functional complexity is always computed for an explicit functional specification.
I have also offered many times the example of a tablet computer: it can be specified as a paperweight (a perfectly valid function), but its functional complexity for that function will be very low. Or it can be specified as a computer capable of many explicitly defined functions, and its functional complexity for that specification will be extremely high.
I am glad that you understand correctly this point.
To Zachriel (at TSZ):
So your answer to Dembski’s rhetorical question, “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” is no.
I would simply say, again, that we have to define the System where the object arose, the Time Span, and we must know enough about those things to be able to reason about the object and its origin.
I don’t think that is in real contrast with Dembski, if we consider that Dembski is assuming the whole known universe and known physical laws as his System, and the time from Big Bang to now as his Time Span. IOWs, Dembski is trying to answer questions about the possible emergence of life in the universe.
My approach is different. I usually ask questions about the possible emergence of protein domains after OOL. Or, in alternative, about the possible emergence of the basic life system in LUCA. I usually prefer the first scenario, because we have more details about it.
All Dembski is asking is whether, even if we did not observe the thing arising, we can still determine that it was designed. And the answer is clearly YES.
I offered what cjyman said about CSI to support my claim pertaining to CSI being not algorithmically compressible.
Maybe to you.
It all depends on how you are defining “compressible”.
To Zachriel (at TSZ):
Well, that establishes that there are conflicting definitions.
And so? That is good evidence of intellectual vitality and non dogmatism in the ID field!
To Keiths (at TSZ):
About your thread regarding common descent. I must disagree with you.
Being clearly a member of your "third group", I must say that I don't see any of the difficulties you describe, which derive only from your preconceived assumptions about how the biological designer would act.
I have none of those preconceptions, and I judge from evidence. IMO, evidence is clearly in favour of a designer who acts with all the obvious constraints created by physical laws, and who is not acting as an omnipotent dictator.
Reuse of existing hardware and software is extremely common in human design, and rather obvious in biological design. That said, I would say that there are however many examples in natural history that are best explained as sudden design explosions, and where the reuse of existing design, while present, is overwhelmed by the sudden emergence of novelty: OOL and the Ediacara and Cambrian explosions are the best known examples of that.
So, even if you say, with your usual arrogance:
If you are still an IDer after reading, understanding, and digesting all of this, then it is safe to say that you are an IDer despite the evidence, not because of it. Your position is a matter of faith and is therefore a religious stance, not a scientific one.
I must answer that “digesting all of this” has not changed a comma in my scientific embrace of ID theory.
Mung:
Thank you!
To Shallit (at TSZ):
Just a correction. I agree with all that you say, except obviously the last phrase, that I included in the quote by mistake:
“This is, in fact, good evidence that organismal DNA arose through a largely random process.”
I obviously don't agree with that. Indeed, I believe quite the opposite: the fact that, as you say, "organismal DNA… doesn't seem that compressible", and that it is however highly functional, is good evidence that it did not arise through any random or algorithmic process, but through design.
Yup, a swiss army knife, almost anything from RONCO or Popeil- is keiths saying that these things are not designed?
How does that work- Seeing that anything designed can be used for more than one thing, they are not designed?
How can we get these guys to testify in the next trial?
To OM:
The equation is in the post R0bb made here
My values can be found here
Have fun…
Where do these guys come up with their nonsense:
By analogy, all of Lizzie's organisms have high dFSCI because they run inside a complex, designed computer, and are handled by a program that is more complex than they are
No, Lizzie's organisms don't have any dFSCI because they don't do anything- they don't have a function and possess no information.
So, gpuccio, now that you’re back.
The subject of recombination and protein domains was previously raised over at TSZ (Allan Miller, iirc).
Thought you might like to take a look at this paper:
PLOS ONE Are Protein Domains Modules of Lateral Genetic Transfer
Zachriel:
Not likely but if you were one of those two people then anything is possible.
petrushka with her lie of the day:
THAT despite everything we have said…
OMTWO on October 10, 2012 at 7:15 pm said:
lol
See here:
http://theskepticalzone.com/wp/?p=576
Code here:
http://theskepticalzone.com/wp.....mment-8121
Let us know when you find the CSI calculation.
Hi Mung,
I haven’t really been following the discussion, but since we’re talking about EA’s, I have an idea for one:
1) generate an initial config space
2) iterate and randomize the config space
3) at each iteration compile the config space
4) test whether compilation (produces an object file) fails or succeeds
The EA runs relative to function rather than a target string.
Let's see how far the function goes with respect to the given config space.
Let me know what you think.
To Zachriel (at TSZ):
It means people can be discussing CSI, but referring to different things entirely.
Or to slightly different aspects of the same thing. Or to different definitions of similar concepts. That is how cognition grows.
The basic concept of CSI is very simple and intuitive: how complex must an object, or a string in the digital case, be to express some objectively defined function, or property? And then how unlikely is it that such an object or string can arise by RV? Or can the complexity be only apparent, and the Kolmogorov complexity be really low?
These simple points are treated in different ways according to contexts and to different people. But the fundamental concept remains: complex functions require specific complexity to be implemented, and that specific complexity can be measured.
It’s mainly the obstinate resistance of people committed to materialistic reductionism that tries to confound the issue. They probably know all too well that CSI is deadly to their beliefs, and would argue any possible thing to evade the concept.
All the discussion about compressibility, indeed, although interesting, is completely irrelevant to the biological context. Biological strings are scarcely compressible. They are the kind of strings that formally appear "pseudorandom", except for the fact that they convey a specific function. Exactly the kind of object that allows, with the greatest safety, a design inference.
Indeed, even the algorithm issue is irrelevant, in the end. We all know very well that no algorithm can explain protein sequences. The only historical proposal is exactly classical neo darwinism, which derives some minimal power from the existence in the System of complex biological replicators competing for environmental resources, the one and only source of the effect usually called "natural selection". But that RV+NS algorithm completely fails to explain almost all biological functional complexity, for the reasons many times discussed.
Any debate about compression or other possible necessity algorithms is mere distraction, a real and fundamental strawman. Compression and other possible algorithms have no role in biological systems (with the only possible exception of adaptational algorithms already embedded in the genome).
In the end, only design can explain what we observe. The only logical attempt to deny that fact has been classical neodarwinism, and the RV+NS algorithm. If it fails (and it does fail!), what remains is either design or complete mystery.
You are free to choose to stick to mystery. As for me, I have my reasonable scientific explanation.
Hi computerist,
I’ve considered doing something like that with Ruby, which does not need to be compiled to object code.
You can generate a string and then attempt to execute it as Ruby code and see if it fails, from within a running program.
eval “some string of code”
Another idea would be to see if it could be executed as an operating system command.
exec “some command”
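A slightly fuller, hypothetical sketch of that idea (note that eval actually runs the candidate code, and that SyntaxError must be rescued explicitly because it is not a StandardError):
# Ruby: a rough "does this string execute?" test that an EA could use as a fitness check.
def executable?(candidate)
  eval(candidate)
  true
rescue SyntaxError, StandardError
  false
end

executable?("1 + 1")      # => true
executable?("puts (((")   # => false, the string does not even parse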
Shallit says:
gpuccio: Biological strings are scarcely compressible.
and now Zachriel:
You and Shallit need to get on the same page there Zacho
To Zachriel (at TSZ):
My point was, and is:
Biological strings are scarcely compressible.
I never said they are not compressible at all.
I need not remind you that a highly compressible sequence is something completely different.
Let’s consider, for example, a sequence of 10^9 1s. It would have a “natural” complexity of 10^9 bits (quite a value!). But I believe that it can be compressed by some very simple algorithm, of much lower complexity. That would be a highly compressible sequence.
[Set up counter to 0
Write 1
Increment counter and compare to 10^9
Loop till count is met.
Print string. KF]
Biological sequences are scarcely compressible, because of their intrinsic nature. They certainly have a few regularities, which can account for some compressibility, but they are certainly not in the range of "ordered" sequences that can be outputted by some simple computation.
As I commented about Hamlet, you can certainly compress the text somewhat, but you would still need the compressed sequence plus the decompressing algorithm to get Hamlet. Do you really believe that those entities could arise in some random system?
Onlookers:
Sometimes, we need to go back to basics to clear an atmosphere of the fog from burning strawmen, in order to get back on track.
Step 1: Config spaces, W:
The idea here is that a given set of components put together in a system (down to atoms if necessary) can be arranged or scattered in a large number of possible ways, W. This traces to ideas in statistical mechanics and to phase space, but we are more concerned with position and orientation and coupling than with momentum.
Next, think about an exploded diagram of a system — say, a Cardinal Spinning reel — and the requisites for putting it together right in order to work. Parts have to fit together and be arranged and coupled in a fairly restricted number of ways, if something is to function. We can define a particular arrangement as an event or occurrence E, and we can cluster those that work under the restrictions of requisites of function, T.
Thus we see a zone of function or island of function, T within a wider space of possible configs, W.
Step 2: Dembski’s first models, in NFL
As the IOSE notes here, in NFL pp. 148 and 144, Dembski discussed (in a work that was published by Oxford and previously was essentially his Doctoral work in the field, so we can be reasonably confident that it passed serious scrutiny by peers of scholarship twice):
What Dembski does here is to insert another common premise from the world of Stat mech, the idea that we take states as equiprobable in the absence of other info that biases the choice of config; on that premise, isolation of states E in zones T to 1 in 10^150 of W is sufficient to secure something in T from any reasonable chance of being found by chance and/or mechanical necessity.
He is also generalising from the context of functional specificity to the pure idea of being in a narrow, isolated Zone, T.
This is one gateway used to inject all sorts of confusing or dismissive distortions.
So, let us note that he is quite plain that in the biological world, specification pivots on function. Whether or not any particular way to set up this zone T succeeds in one’s estimation should be isolated from the point that there is such a reasonable concept as an isolated zone T in a field of possibilities W that is observable on some reasonable criterion such as, T is the cluster of possibilities that works in some definite way.
Similarly, I have seen huge debates on how to define and calculate probabilities exactly.
This is not needed. We know or should know that chance based, random sampling of a population, of reasonable scope, will normally capture the bulk, and miss special isolated zones. We even have a law of large numbers to that effect in statistics.
If we are searching by random processes or uncorrelated mechanical necessity without guidance on where T is, unless we have a sufficiently large sample, we are apt to miss such special, isolated zones. Indeed, there is a whole province of statistical testing that pivots on that tendency to be in the bulk not special zones such as tails. (The difference here is the tails or special zones are isolated to 1 part in 10^50.)
Step 3: 1 part in 10^150
Elsewhere, I have discussed how, on the gamut of our solar system's 10^57 atoms and 10^17 s, where the fastest chemical reactions take up about 10^30 Planck times, the space of possibilities for 500 bits is such that the number of possible observations or search steps by the solar system's atoms would sample the equivalent of pulling one straw-sized sample from a cubical haystack as thick as our galaxy, about 1,000 LY. Overwhelmingly, such a sample will reliably pick up straw, not anything else.
Going up to the scale of the observed cosmos, 1,000 bits more than suffices to isolate zones T to 1 in 10^150. That is, the number of Planck-time atomic states for the 10^80 or so atoms in the cosmos stands as roughly 1 in 10^150 of the space of possibilities for 1,000 bits.
Converting into bits, 10^150 is roughly 500 bits worth of possibilities, and 1,000 bits is roughly 10^301 possibilities. ASCII text strings use 7 bits per symbol, so 500 bits is just short of 72 ASCII characters, and 1,000 bits is about 143 characters (hence the limit on a Tweet).
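A quick check of the arithmetic (a small Ruby sketch, using the 7 bits per ASCII character figure above):
# Ruby: rough check of the figures quoted above.
(2**500).to_s.length    # => 151 digits, so 2^500 is on the order of 10^150
(2**1000).to_s.length   # => 302 digits, so 2^1000 is on the order of 10^301
500 / 7.0               # => ~71.4 ASCII characters in 500 bits
1000 / 7.0              # => ~142.9 ASCII characters in 1,000 bits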
We notice that we routinely produce text in English of 72 to 143 or more characters. We do so informationally and intelligently, not by blind chance and/or mechanical necessity. Indeed, we would dismiss as absurd the notion that text in this blog thread was produced by lucky noise on the machinery of the Internet. For obvious reasons.
Now, also, anything that can be described as a collection of nodes and arcs, can be reduced to a cluster of descriptive strings, which can be concatenated, so — as AutoCAD shows — discussion on strings is WLOG.
We have, of course, abundant evidence that functionally specific, complex organisation and/or information — FSCO/I — is routinely and only observed as the product of design. This is important, as it is an inductive generalisation on billions of cases in point, backed up by an analysis as above as to why this is so.
We must bear this in mind as we examine the tilting at windmills (confused, as they were, for giants), the strawman tactics and the objections.
Step 4: What about genetic algorithms and other forms of incremental climbing of Mt Improbable?
The key observation is that such things are based on intelligently designed algorithms. Were such a program to be constructed de novo from statistical noise captured by a computer, we might have something to boast of, but this is not the case, for reasons directly tied to the just above. I doubt that any GA program is less than 72 ASCII characters long.
Similarly, we observe that such programs depend on some form or another of incremental hill climbing off the performance of a well-behaved fitness function that leads up to a peak zone or one of a cluster of linked peak zones, so the step size can be small and the uphill trend keeps one pointed peak-wards on the whole.
But the above makes it plain that most of the field of possibilities for a multipart functional entity of reasonable scope will not be like that. For most of W, functionality = 0, and there are no trends that, on the whole, point one uphill. So, as soon as we are in a zone that has that uphill-pointing aspect, we are already within an island of function T, where all steps Ei –> Ej have some functional character on both legs and we can reward desirable increments, individually or on a population-of-samples basis, say best of 100 or 1,000 or the like.
The real problem, however, is not to move to a peak within T but, starting from a space W which overwhelms possible search resources, to find T without intelligent guidance.
In short, a big question is being begged, and the problem posed is being strawmannised by objectors. They are so used to working inside T that they do not see the problem of the much wider space W, and the challenge to find T.
BTW, that is exactly why I have insisted on a molecules to man frame for the 6,000 word blind watchmaker thesis essay challenge. At OOL, there is no existing von Neumann code based self replicating mechanism to appeal to; it too needs to be explained as an instance of FSCO/I which is patently irreducibly complex. (The resulting ducking, dodging, mischaracterisation, denigration, thread vandalism etc etc speak volumes on this challenge.)
Step 5: What about CSI?
Now of course Dembski generalised the zone T, and has sought to provide a generic model. The success or failure of such attempts should be understood relative to the above, not by twisting them into pretzels, as is altogether too common.
If you think Dembski has failed to capture the framework above, fine, show that, and suggest ways that he could better do so. Do not pretend that an extension can be criticised, so the underlying issue can be dismissed.
Instead of going to town on whether his mathematical model of 2005 is correct relative to the above, or can be twisted into pretzels, let us first show what it is trying to do, and then go about simplifying it for use. Years ago in response to a challenge by MF in his blog, I presented the following which is in the UD weak argument correctives, no 27:
We should not have to state the obvious, but given objections that have been raised, we do: the semiotic agents in view are constrained by available atomic resources and resulting limits on opportunities for observation. No more than 10^117 chemical time events for the 10^57 available atoms can happen in the history of the solar system to date, of perhaps 10^17 s.
Step 6: The log reduced, simplified Chi metric
CSI per the 2005 expression is intended to generalise, and it opens up a can of worms and side tracks, as we have seen.
It is, in my view, more useful to simplify, certainly more so than to try to disentangle the thicket of strawmannish objections erected in the hope of burying the CSI concept; which effort is itself a breach of the basic premise on which science is built, of seeking to improve.
This was done in response to MathGrrl/Patrick’s challenge of some time ago, as is presented in the IOSE and elsewhere. Clipping IOSE (accessible all along):
So, all along we have known how the matter could be addressed to the relevant context of function, and applied to biology.
All the huffing, puffing, erection of a forest of strawmen and setting them alight to cloud the issue, is pointless and willful.
KF
F/N: Since P’s track record is relevant, it should be noted that in his MG persona he tried to dismiss the above log reduction as a probability calculation, a quite severe blunder.
To Allan Miller (at TSZ):
In comments to me, having made a similar interpretation, GP denied that this is his argument. Once the replication system or translation or whatever is in place, we take that dFSCI-to-date as a given, and apply the metric to the ‘extra’ dFSCI within a particular Time Span.
That’s essentially correct. Obviously, as I have already stated, it is also possible to analyze the emergence of basic replication (OOL), and in that case the transcription and translation mechanism becomes part of what must be explained.
Computation of dFSI is always a highly empirical task, and it must always be referred to a specific scenario, and to a specific problem.
KF (275):
Do you happen to know how 635×10^9 was arrived at? I kind of figured, if we’re talking about the number of different ‘hands’ of size 13 that can be selected from a standard deck of cards that it should be 52C13 (52 choose 13). But when I evaluated that value I got something different. Just curious. It’s not going to change the eventual negative result but I’m just wanting to make sure I’m tracking the logic okay.
Petrushka (at TSZ):
I always revert back to what I believe to be true. That’s my usual behaviour, and there is no need to “pin me down” to get that 🙂
Joe Felsenstein (at TSZ):
… which seems to have nothing to do with the stuff about dFCSI. So why bother with dFCSI?
It has everything to do with dFSCI. Protein domains:
a) Have high functional complexity (therefore cannot arise in a purely random system)
AND
b) Are irreducible to simpler functional naturally selectable intermediates, and therefore cannot be explained by the only available necessity mechanism, NS.
Jerad:
52 choose 13 seems to be exactly 6.35*10^11, as KF said. What would be your result?
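For anyone who wants to check the figure directly, a one-line Ruby computation (52C13 = 52!/(13! * 39!)):
# Ruby: number of 13-card hands from a 52-card deck.
(40..52).inject(:*) / (1..13).inject(:*)   # => 635013559600, i.e. about 6.35 * 10^11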
gpuccio,
See, told you I was dopey. I just checked it thoroughly. I had eyeballed the factorials and estimated. Sigh. I need more tea obviously.
Never mind.
To Zachriel (at TSZ):
What is your definition?
You can find it here:
http://www.uncommondescent.com.....inference/
post #88.
Jerad:
No problem. I will not attack you because you have taken two mutually inconsistent positions in less than one hour 🙂
gpuccio,
I don’t get credit for admitting I made a mistake?
Tough crowd!!
Jerad:
It was only an ironic quote of what Keiths said about me at TSZ, for “taking three mutually inconsistent positions in less than 48 hours”.
I certainly did not get any credit from him for admitting, twice, that I had made a mistake.
Tough crowd!! 🙂
To Keiths (at TSZ):
Quite the opposite. I don’t make any assumptions about how the Designer would act. He has trillions of options open to him, and he could choose any one of them, regardless of whether it produced an objective nested hierarchy.
You are making the assumption that the designer “has trillions of options open to him” (why?), and that he “could choose any one of them” (how do you know that? are you an expert about the designer’s free will?), “regardless of whether it produced an objective nested hierarchy” (so you know how many of the options would produce that, and that the designer has no reason to prefer one kind of option to another one; again, how do you know that?).
Those are a lot of assumptions.
It’s the evidence that tells us that the objective nested hierarchy exists.
Fine.
1a) Out of the trillions of possibilities, unguided evolution predicts an objective nested hierarchy; we see an objective nested hierarchy; the prediction is successful, and unguided evolution fits the evidence extremely well.
It certainly fits the evidence of the hierarchy. But, unfortunately, it does not fit the evidence of the complex biological information. You are reasoning as though the hierarchy were the only evidence available.
ID predicts neither an objective nested hierarchy, or the lack thereof; we see an objective nested hierarchy;
It does not necessarily predict the hierarchy, but it is perfectly compatible with it. What ID does predict is the complex functional information in the designed objects.
ID proponents have to assume that the Designer chose to produce an objective nested hierarchy,
Either chose, or had to. Because of specific constraints.
which is exactly the same pattern that unguided evolution would have produced.
No. It is simply the same pattern as any form of evolution, guided or unguided, would have produced if it had to work by modification of the existing beings, instead of having to create new beings from scratch each time. It is very obvious that the first option can be the best, or the only one, available to a designer if specific constraints on how the designer can act are present in the system.
There is no successful prediction, and a completely unwarranted assumption.
There is no prediction here, but there is a much more powerful prediction about complex functional information. And there is no assumption at all: we observe the evidence, we infer design (from complex functional information), and we reasonably infer that the designer had specific, and definable, limitations in how to act.
Physical laws don’t require an objective nested hierarchy.
The designer has to modify matter from consciousness, through some interface. We don’t know how that interface works, and what its laws are. The real constraint is obviously how to implement the design in the material world. The simple explanation for the nested hierarchy is that it is easier for the designer to modify what already exists than to redo everything from scratch. Is that so difficult to understand?
That suggests that your embrace of ID is not scientific.
You are entitled to your opinion, however bizarre.
Keep thinking about this,
I think about many things, but I usually decide myself what to think about. Anyway, thank you for the kind suggestion.
but try to do so with the attitude that you want to discover the truth, whatever that may be — even if the truth turns out to be uncomfortable.
That is a very wise principle for thinking about anything, and I certainly can reciprocate the encouragement.
P.S The UD side of the discussion is happening on this thread, so you might want to repost your comment there.
I will copy this comment there too.
Zachriel with the daily equivocation:
We are interested in its blind watchmaker origin as ID is not antievolution and is OK with nylonase evolving by design.
IOW Zachriel and the TSZ ilk can only equivocate because they have nothing else.
Joe Felsenstein:
That is my argument and it follows from what was said in “No Free Lunch”. You need to explain reproduction, you cannot just use it as a given.
Also Lizzie did not have extra SI put into anything. She did not generate CSI.
But anyway it is very interesting that not one of you can come up with a biological example of natural selection adding specified information to any genome.
Joe Felsenstein:
Most likely
Lack of critical thinking skills. Or the inability to understand your opponents due to some limbic issue
Zachriel to gpuccio:
That is incorrect. Just because there aren’t any known deterministic explanations does not mean it is part of the definition.
A deterministic explanation for dFSCI would mean the presence of dFSCI is not a hallmark of design.
How many times do you have to be told that?
And petrushka goes for the personal shot:
Unlike petrushka I tend to NOT equivocate and understand that the paper needs to address the mechanisms proposed, namely accumulations of random mutations. You just don't get to declare gene duplication followed by function-changing mutations a random process just because you are too lazy to determine an actual cause.
IOW the question is evolved how-> By design or accumulations of random mutations?
gpuccio:
The argument by keiths where he assigns to "THE DESIGNER" trillions and trillions, possibly even infinitely unlimited options, was so lame I couldn't even bring myself to care about it, lol.
Thanks for addressing it.
Now you do bring up an excellent point. What is it that makes for the ability to identify this "objective nested hierarchy"?
If genomes were just random assemblages, what sort of objective nested hierarchy would that result in?
Joe:
Joe, I told you what algorithm I used — PAQ. I provided a link to a zip file containing the source and executable. What exactly are you looking for?
I provided links to the files that were compressed, told you how much they were compressed, and invited you to reproduce the results.
Everything in those files was compressed by about 80%, although 12.5% can admittedly be attributed to the unused bit in each 7-bit ASCII character.
Joe:
He clearly means “algorithmically compressible”. So you already answered the question when you said, “In order to qualify as CSI it cannot be algorithmically compressable.”
To maintain this position, you have had to claim a distinction between the terms “CSI” and “specified complexity”, while simultaneously maintaining that they’re synonymous. You do this with the following logic, which I’ll assume you’re saying in jest:
And you support your position with a quote from CJYman, but then later deny that CJYman said that incompressibility is a requirement for something to qualify as CSI, meaning that the quote doesn’t actually support your position. And, true to fashion, you cap off this denial with an insult:
Finally, your position requires you to ignore or spin clear statements by Dembski, like this one from page 144 of No Free Lunch:
gpuccio:
But this rendering of the basic concept of CSI seems to assume that CSI entails high Kolmogorov complexity, or at least apparently high Kolmogorov complexity. So the question of whether CSI, a term invented by Dembski, really does entail that, is basic to the concept.
Joe:
You should be complaining to gpuccio. He’s the one who said that, in order for a string to be said to exhibit dFSCI, “It is required also that no deterministic explanation for that string is known.”
R0bb,
You first converted the text and then compressed the conversion. Not the same thing.
Are the same number of words still used?
R0bb,
Stop misrepresenting gpuccio. What I said is what has been said since ID came around.
And you are also misrepresenting the Dembski quote- the CSI within the Chaitin-Kolmogorov-Solomonoff theory.
To Zachriel (at TSZ):
I am afraid you are seriously misunderstanding me here.
First, I will answer the small things.
It’s not important, but what is the function of Hamlet?
I will answer that briefly, but we could go into more detail if you want.
There are essentially two kinds of functional information (see also Abel):
a) descriptive information (like language) has mainly the purpose of conveying meaning
b) prescriptive information (like software, or a protein coding gene) has mainly the purpose of implementing a function.
It is possible to describe descriptive information (like Hamlet) in terms of an explicit function, such as: a text that can convey all the information about the story, the characters, the meaning, and if we want even the emotion and the beauty.
Prescriptive information is easily described by defining the function it implements.
Again, just as an aside, how many permutations of words have the same function as Hamlet? Keep in mind the many, many versions of Hamlet. Seems intractable, especially given the lack of a clear functional specification.
The problem is indeed tractable. I could show you that dFSI necessarily increases with the increasing length of a text. Therefore, we can be sure that, beyond some length, a non redundant text will certainly be beyond the threshold of, say, 500 bits.
Let’s grant that Hamlet has high functional complexity, per your definition.
It has.
So if we are ignorant, we are more likely to judge it to be design. This is nothing but a gap argument.
It's not a question of ignorance. A plausible explanation must be known, otherwise we deny all scientific principles. You cannot just say: maybe in the future we will find some necessity explanation for that, therefore why infer design even if it has all the properties of a designed thing? That is not science. Such a position can never be falsified. It is only wishful thinking, to defend one's pre-commitments to a specific ideology.
Moreover, while strings with some regularity can easily evoke the suspicion of a possible necessity origin, pseudo random strings which convey a meaning have never been explained that way.
More on your last comment later, I must stop now.
Joe, I didn’t convert the text to anything. I don’t know what you’re talking about. I gave you everything you need to reproduce the results — why not try it?
Are we seriously arguing over whether English text is compressible?
Folks:
Let’s keep things fairly simple.
Take a protein. How much can its string vary without disastrous loss of function? If not a lot, then it is specifically functional. (In short, we are in zones T when we have relatively narrow sets of possible configs in a much larger space, that will work.)
Similarly, for DNA that codes for the protein.
Next, for multipart systems in cells made up from proteins, etc.
Remember, there is a reason why we have a fear bordering on panic about radioactivity, which accelerates random mutation rates. (Way back, I learned the main mechanism was breaking up H2O, which then reacts aggressively with whatever is nearby. Breakdown of function is a very likely outcome.)
With CSI, the debates back and forth are on an attempted generalisation which exists in a context of a clear understanding that in life forms specificity is cashed out on config dependent function.
And way back, Abel and Trevors highlighted that a completely random string will have low compressibility in the algorithmic sense, functionally organised strings will have moderate compressibility, and ordered ones, strong compressibility. The point of K-compressibility is that if you can set up a simple way to get there, it is compressible. A truly random string simply has to be quoted in full; there is no shorter description. Functional strings tend to have some redundancy, so moderate compressibility is possible. Things which are simple and highly ordered, like a crystal or vortex etc, will have fairly short and simple descriptors.
In short, you can describe a wiring diagram or the result of it enough to force a specification of its config to within a fairly narrow scope, but because there is a minimum amount of complexity demanded by the requisites of function, you don't get the degree of compression by algorithmic description or statement of controlling law etc that happens in other cases.
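To see the three regimes roughly, here is a small Ruby sketch using the Deflate compressor from the standard library (only a crude stand-in for algorithmic compressibility, and "hamlet.txt" is a hypothetical file of plain English text):
# Ruby: compressed size / original size for ordered, English, and random strings.
require 'zlib'
require 'securerandom'

def ratio(s)
  Zlib::Deflate.deflate(s).bytesize.to_f / s.bytesize
end

ratio("1" * 10_000)                        # highly ordered: shrinks to a tiny fraction
ratio(File.read("hamlet.txt"))             # English text: typically compresses to roughly half
ratio(SecureRandom.random_bytes(10_000))   # random bytes: ratio close to (or just above) 1.0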
Meanwhile, much of the onward back-and-forth is on increasingly tangential side-tracks. I get the feeling that in some cases they are setting up red herrings led away to strawman caricatures, to be rhetorically punched up and dismissed.
Let's remember, whatever flaws one may find or think s/he finds in Dembski's models and statements, the fundamental issue is that we are looking at things complex enough to set up large spaces W, sufficiently large to exhaust the atomic resources of our solar system or observed cosmos. In these spaces, we have zones T that are apparently describable or observable, that are narrow zones of interest. The candidates for being at an E in T are blind chance and necessity, or design. On sampling space and sampling grounds as well as by direct observation, the best explanation under such circumstances is design.
Let us not blind ourselves by kicking up enough dust, smoke and fog to miss the main point.
Which was the point of my earlier comment.
KF
R0bb,
Are the same number of words still used? If not what words are missing? And if the same number of words are used what was compressed?
R0bb:
paq8l is an open source (GPL) file compressor and archiver.
So first you had to CONVERT the text to a file and then the FILE was compressed.
kf @301:
Well stated, including the clarification on compressibility. In terms of compressibility, CSI tends to (although does not necessarily in all cases) fall between the extremes. This is due to the fact that CSI is characterized both by complexity (i.e., not a simple highly ordered state) and by rules (e.g., all forms of information use some kind of vocabulary and syntax that follows certain rules of order).
The question of whether CSI is more or less compressible than this or that string misses the primary point. CSI is not primarily about compressibility. It is about syntax, semantics, pragmatics. The compressibility aspect (the simple statistical Shannon aspect of information) is interesting at some level, but must not be allowed to overshadow the real issues.
To Zachriel (at TSZ):
I believe that here you misunderstand. Points 2-4 are intended to explain how dFSCI is defined and measured.
Point 5 is a completely different thing. It just means that dFSCI, as previously defined, is empirically capable of distinguishing between human artifacts and non designed strings, with 100% specificity (and many false negatives).
This is an empirical fact. It has nothing to do with how dFSCI is defined.
To Petrushka (at TSZ):
That seems to have two unrelated problems
Only two? That’s really a compliment.
It violates the ID code of not discussing the motives and attributes of the Designer,
A code I have violated many times, and you of all should know that.
and it makes no sense
Why am I not surprised?
An omniscient being,
Did I speak of omniscient beings? When?
or one that can assemble long strings of functional DNA,
Ah, that's lowering the requirements a little bit, I believe.
anticipating its function within a changing ecosystem,
A smart designer, I would say. Maybe not omniscient or omnipotent, but certainly smart.
would not have the kind of limitations characteristic of mere mortal designers
But he could certainly have other kinds of limitations.
At any rate it makes no sense to assign attributes to invisible imaginary magicians.
That is, I believe, exactly what you have been doing here.
As for me, I prefer to infer from empirical facts the possible attributes of a very real designer who has left evidence of his existence everywhere.
To Joe Felsenstein (at TSZ):
Again, am I misunderstanding gpuccio’s argument?
Yes. Absolutely.
How?
Indeed, I can recognize practically nothing of my argument in your words.
You say:
Yet confronted by Elizabeth’s GA program, gpuccio was not willing to acknowledge that the amount of SI increased in that program.
I don’t understand what you mean. I said that the program is an algorithm that computes a solution to a well defined question. To do that, it obviously needs to have a lot of SI about the question and the possible solutions. Moreover, the program uses RV + IS to find possible solutions, and it succeeds, as many algorithms using RV + IS can do.
Regarding the computation of dFSI in a string that is a solution to the question, I said that there are two possibilities: the "natural" complexity of the string is an upper limit to its dFSI; but if an executable program with lower complexity can compute the solution, then the dFSI of the program becomes the dFSI of the string itself, because we have to consider anyway the lowest complexity that can generate the string (IOWs, the Kolmogorov complexity of the string).
I have also said that there is probably a much simpler top down algorithm that can compute a solution without using any RV, as KF has shown.
gpuccio’s argument was that the dFCSI was already there because Elizabeth had made the program’s organisms able to reproduce.
Where did I say that? I only pointed out that Elizabeth's program uses IS, with a probability of being positively selected at each round of 0.5 for each string. What has that to do with reproduction? What has that to do with NS? The answer is simple. Nothing.
What I said is that NS, in a biological context, is simply a byproduct of the existence of biological beings that:
a) can reproduce themselves
b) have to rely on environmental resources to exist, mainly because they are based on metabolism.
Reproduction and metabolism certainly imply a lot of functional information, and therefore, if we are analyzing a scenario about OOL, they become part of what has to be explained. In other, more limited scenarios, reproduction and metabolism (and therefore NS) can be taken as given in the system, because we are not trying to explain them in our context.
That's when we all started arguing about intelligently designed computer simulations of unintelligent natural processes.
Well, this at least is clear.
This seems to me to be a big contradiction.
???
When an organism has dFCSI and can reproduce, gpuccio says that we can count the “extra” SI put into the genome by an adaptation.
I can’t even understand what you are saying here, and yet you state that it is something I have said. That seems to me a big contradiction!
I spoke of adaptation as a possible mechanism by which an existing genome takes advantage of environmental changes through intelligent algorithms already embedded in the genome.
I have clearly offered the example of antibody maturation as a model of intelligent adaptation based on RV + IS.
I have mentioned that many believe that active adaptational algorithms do exist in bacteria, and possibly in other living beings. That’s all.
I have not said that any new biological information that arises is explained by adaptation. I don’t believe that. New protein domains, IMO, are designed, and are not the product of adaptation (and, obviously, not even of RV + NS).
But when the genomes are in a GA, gpuccio refused to count the extra SI that was put into those genomes.
I really can’t see what you mean here. An example, please, of when and where I would have done something like that.
There all the SI was said to be coming from the original SI put in when the GA was set up.
Again, what I said is that if an algorithm (whether it uses RV and IS or not) computes a solution, the complexity of that solution is the lower of the two: the natural complexity of the string, and the complexity of the algorithm that computes the string. There is no doubt that, in extreme cases, the apparent complexity of an ordered string can be much higher than the complexity of the algorithm that can output it. See for example the case of the string made of 10^9 1s, as described in my post #274 here. In that case, the dFSI is the dFSI of the algorithm, which is obviously the Kolmogorov complexity of the string.
And zacho proves its agenda is obfuscation:
That is still incorrect. Your inability to deal with what has been posted proves that you are still an insipid troll, just as telic thoughts says.
I missed this stupidity by Mike Elzinga:
It’s the same mysterious information that allows for communication and information technology. The same mysterious information that people use every day.
For some reason when evos hear “information” their entire being goes into convulsions- hey Mike, call 411 and ask them what is their purpose.
Zachriel:
Umm replication is STILL the thing you need to explain. By just using replication you expose your desperation.
R0bb, nothing to say in response to my posts?
Joe, please. Comments like this don’t help.
What do you think the FILE consists of, if not strings (of text)? What do you think the program does, if not read in the strings from the file, process them, and write them back out to a different file?
Do you really think you can’t compress a string of text without having first saved it to file?
# Ruby code
require 'zlib'
# A short piece of English text to compress.
hd = "Humpty Dumpty sat on a wall. Humpty Dumpty had a great fall. All the king's horses and all the king's men, couldn't put Humpty back together again."
hd.length     # length of the original text
hdz = Zlib::Deflate.deflate(hd, Zlib::DEFAULT_COMPRESSION)   # lossless compression
hd.length     # the original is untouched
hdz.length    # the compressed version is shorter
C:\projects>irb
irb(main):001:0> require 'zlib'
=> true
"Humpty Dumpty sat on a wall. Humpty Dumpty had a great fall. All the king's
horses and all the king's men, \n\ncouldn't put Humpty back together again."
irb(main):005:0> hd.length
=> 149
irb(main):006:0> hdz = Zlib::Deflate.deflate(hd, Zlib::DEFAULT_COMPRESSION)
=> “x\x9CU\xCC1\n\x800\x10D\xD1\xDESLg#\xDEA\xB0\xF0\x1A\xAB\xAE\x89\xB8n$\xD9 \
xDE\xDE\xA0XXM\xF1>3\xE4\xFD\xB0\v\xFD;\x89\fAA8I\xA4\xC5\xF0COs\x11\x17\xB9D\xC
B\xE3\x9D\b\xCC3\xB6U]\x9D\xE0CL\x9C@Z\xBA\xBF\xEC\xAC\r\xAAj\nYf\xAD\rG\xB6\xEF
|\xA4i\x83\x05\xC7%\x8F G\xAB\xB67D\xC23\xE9”
irb(main):007:0> hd.length
=> 149
irb(main):008:0> hdz.length
=> 108
irb(main):009:0> exit
p.s. Water has three states.
I just love it how the experts at TSZ weigh in when it’s convenient to make a point against ID, but remain strangely silent when one of their own goes about making a fool of himself/herself.
gpuccio,
Without having read Felsenstein’s comments yet in context, I suspect that by SI he means Shannon Information.
Elsewhere, if need be I think I can find this, Elizabeth has argued that a maximally random string has the most Shannon Information (or maybe it was some ID’er =p).
In any event, where has she or anyone else over there shown an increase in Shannon Information in her programmatically generated strings? I sure wouldn’t take JF’s word for it.
OK Mung,
You had to first convert the text to some compressible code and then compress that. Better?
I called it a “file” because that is what R0bb did. So my mistake was saying that he had to first convert it to a file.
ps accumulations of H2O have 3 states
Zachriel:
Maybe YOU are. To me it has been well defined and we measure it in bits. IOW you appear to have some personal issues.
Also all of that would be moot if you could just support your position. No need to worry about ID. Even without ID you still don’t have anything- even less because without ID you wouldn’t even have that to misrepresent.
Zachriel:
And the connection to biological reality is?
Mung:
More importantly, the unfortunately-named “Shannon information” is uninteresting. Indeed Shannon “information” is not even true information in any meaningful sense of the word; certainly not in the CSI sense we are interested in for technology, communications, bioinformatics, etc. It is, rather, a simple statistical measure of information carrying capacity. If we examine a string and see that it has a reasonably high information carrying capacity we might have a first clue that it could contain CSI. But that is all it gives us, some kind of initial hint of potential capacity of the string in question. Whether it in fact contains CSI depends on layers of information (syntax, semantics, pragmatics) that go well beyond the so-called Shannon “information.”
I don’t doubt that a computer program could generate a bunch of random strings and that some of them could end up with higher Shannon “information” than we started with. Big deal. Anyone who thinks this demonstrates anything about CSI has no idea what they are talking about.
Joe:
You seem to want me to pretend that you’re ignorant enough to ask this question sincerely, even though I know you’re not. So I’ll play along.
After compression, it is no longer recognizable English text with spatially separated words or letters. But since the compression was lossless, no information was lost. The animals in Shakespeare’s plays weren’t harmed either.
The compression engine produces a sequence from which the original text can be regenerated. That’s what it means to algorithmically compress something.
The letters that make up words that make up English text are always encoded somehow. Shakespeare wrote glyphs on paper with ink, which were subsequently transcribed to similar glyphs on printing presses, which were eventually transcribed to ASCII on computers, which is what I compressed.
English text is compressible in all of those encodings because, like most real-world languages, it’s extremely inefficient. In formal language terms, the vast majority of strings in Σ* are ungrammatical.
If you think I’m cheating by using ASCII-encoded text, please tell me how you would go about testing the compressibility of text.
To Zachriel (at TSZ):
That’s right. #5 is a conclusion.
Are you kidding?
#5 is not a conclusion. It is an independent empirical observation.
Just to be more clear. We define a property (dFSCI) and how to assess it in objects.
Then we assess that property blindly in any number of strings of which we may know the true origin. For instance, we mix any number of meaningful strings designed by humans with any number of randomly generated strings, all of them long enough to be beyond the threshold of 500 bits. And then we ask independent observers to tell us which are the meaningful strings designed by humans and which are the ones that do not allow a design inference.
IOWs we are empirically testing the specificity of the dFSCI property when it is used to infer design in a set of objects where the true origin can be known for certain.
It is empirical testing, and an empirical observation. Not "a conclusion".
Is it clear now?
To Zachriel (at TSZ):
Sure, let’s take a protein, say a random sequence that weakly binds to ATP. The specified complexity would be low as these proteins are relatively common in sequence space. Now, let’s replicate and mutagenate the sequences, and select those with the most binding function. The specified complexity has increased. After repeated generations, CSI.
Exactly. You are obviously referring to the shameful Szostak paper.
That paper is good evidence that Intelligent Selection can increase the complexity of a string in relation to a known function.
That is not surprising at all. As I have said many times, RV + IS is a very powerful form of design.
I have also offered the example of an algorithm that computes the first “n” decimal digits of pi. As “n” becomes greater, the complexity of the output will, at some time, become greater than the complexity of the algorithm that computes it.
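A concrete sketch of that pi example in Ruby (using the standard BigMath library; the program text stays a few lines long no matter how large n gets, while its output keeps growing):
# Ruby: a short, fixed-size program whose output grows with n.
require 'bigdecimal/math'

def pi_digits(n)
  BigMath.PI(n).to_s   # pi to roughly n significant decimal digits (plus formatting)
end

pi_digits(10).length     # short output
pi_digits(1000).length   # far longer output from the very same short program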
A protein could also be computed in a top down way. We cannot really do that at present, but we will be able to do it, some time in the future.
And so? Design can create dFSCI. We know that. An algorithm can output dFSCI, but according to my definitions, we should anyway consider the complexity of the algorithm as the true complexity of the outputted string. In practically all cases, however, we will still infer design, because the algorithm is complex enough to infer design.
The important point, however, is that no algorithm can create new dFSCI in relation to any new function that is not already described, or in some way implied, in the algorithm itself.
The reason is simple: algorithms are not conscious. They have no experience of purpose. Therefore, they cannot recognize function, unless in their code something has already been defined as "functional".
So, Lizzie’s algorithm can compute answers to the question that is already embedded in it: it can do nothing else. My pi computing algorithm can compute pi: it can do nothing else.
In some algorithms, the function can be defined more generically, so that they will be more flexible in their performance. But a new function, that is not covered by the definitions embedded in the algorithm, will never be recognized by the algorithm, and therefore no dFSCI related to that new function will ever be computed by the algorithm, because the algorithm cannot recognize that function.
So, Szostak could easily engineer a protein with a strong binding to ATP (however useless in any biological context) because he knew what he wanted (an ATP binding protein), he measured and selected that function at very trivial levels in random sequences, and he amplified, mutated, and intelligently selected the resulting sequences for that function. Good design, and very bad interpretation of the results, still echoed by yourself for bad reasoning.
The only algorithm present in biological contexts is NS. It can generate some new information (not much of it) related to the function that is already embedded in the algorithm itself: reproduction by use of environmental resources. That easily explains many microevolutionary events.
But it cannot do anything else.
To Zachriel (at TSZ):
It is. That’s how it can be done.
a) We define the function as the ability to convey the full set of meanings in the original text (we can refer to a standard version, for objectivity).
b) We prepare 1000 detailed questions about various parts of the text.
c) We define the following procedure to measure our function: the function will be considered as present if, and only if, an independent observer, given the text, is able to answer correctly all the questions.
OK, that would not easily include the emotion and the beauty, but I had mentioned them just as a bonus (and a homage to S)!
Thank you Robb,
So you did NOT compress the text, but a digital representation of the text. Got it. Not the same thing and your bait-n-switch is more than a tad dishonest.
To Zachriel (at TSZ):
Gpuccio provided a definition of what he calls “dFSCI”, which, unfortunately, includes design in its definition, so can’t be used to argue for design.
???? What do you mean? Please, refer to post #320.
Eric Anderson: Indeed Shannon “information” is not even true information in any meaningful sense of the word; certainly not in the CSI sense we are interested in for technology, communications, bioinformatics, etc.
Zachriel:
Ummm non-sequitur. Eric was posting about the “information” part, which is not information in the ordinary usage.
And it will be until you get off of your lazy butt and demonstrate that blind and undirected processes can account for it. Your continued whining and misrepresentations sure as heck ain’t going to change anything.
Joe:
LOL. And how do you think text should be represented when we’re testing its compressibility? Ink glyphs on paper?
You claimed that Shakespeare and encyclopedias are not compressible. How did you come to that conclusion? How do you go about testing for compressibility?
R0bb,
Compress the TEXT. Did Shakespeare know about ASCII? No- compress the TEXT R0bb or admit you cannot.
Thank you Robb,
So you did NOT compress the text, but a digital representation of the text. Got it. Not the same thing and your bait-n-switch is more than a tad dishonest.”
tonto:
No effect- two different topics.
Joe,
a discussion of a word based text compression scheme:
http://reference.kfupm.edu.sa/....._71379.pdf
Great, use it to compress the works of Shakespeare. Let us see the results.
Zachriel chokes:
Nature does NOT select- there isn’t any selection taking place.
So Zach doesn’t understand natural selection and there is no way it will ever understand CSI.
Joe (330)
I’m not that interested actually. I was just pointing out that text compression algorithms exist. I was interested in a previous comment and looked it up.
Zachriel apparently wrote (I don’t check the other thread):
Um, in what sense? Communication systems are interested in information carrying capacity. In addition, there are much more important aspects of information beyond this mere statistical measure of carrying capacity. Syntax, semantics, pragmatics. The Shannon so-called and unfortunately-called “information” says exactly nothing about these aspects.
Unfortunately the term Shannon “information” confuses people who aren’t able or aren’t willing to understand that it is not the be-all-and-end-all of information. Anyone who thinks that Shannon information can ever fully describe, account for, or measure CSI has no idea what they are talking about.
Eric,
Zach is referring to the fact that Shannon first defined the bit and his work was concerned with the transmission and storage of data.
Zach is unconcerned over the fact that Shannon information isn’t really information in the ordinary sense. That way he can conflate the two with no worries.
WRT being algorithmically compressible and Shakespeare: their being incompressible would mean we cannot write a short algorithm to produce them.
For 500 1s, we could do so.
By what criteria?
Zachriel:
Nope, we do not construct nested hierarchies based on the history. If you think otherwise please provide a valid reference. But we know you won’t…
How very teleological.
Eric, Joe:
Well, I disagree that Shannon Information is not somehow “true” information. The problem is that people often do not understand what Shannon Information is about. We just need to ask: what is Shannon Information about?
That’s why I was asking earlier in the thread about the assumptions or pre-requisites for measuring the amount of information in a string of bits. Zachriel certainly seems to understand.
You must know or assume a set of symbols, an alphabet as it were. You must know or assume the distribution or likelihood of a particular symbol or letter.
So my example was: how much Shannon Information is in the following: 00101
And the correct answer is, we can’t answer the question (without making some perhaps invalid assumptions), because there are things we just don’t know.
Are 0 and 1 the only symbols? Suppose the next character was 2: 001012
But say the next character was another 0: 001010
We still can’t really say, because we don’t know if each symbol is equally likely. Perhaps we’re looking at only the first part of the following sequence: 001010011100101110111
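To make the dependence on assumptions explicit, here is a small Python sketch: the same five characters get a different Shannon measure under each assumed source model (the models themselves are, of course, just assumptions):

import math

def self_information_bits(s, probs):
    # -log2 of the probability of the whole string, under an assumed memoryless source
    return sum(-math.log2(probs[ch]) for ch in s)

s = "00101"

print(self_information_bits(s, {"0": 1/2, "1": 1/2}))            # 5.0 bits: binary, equiprobable
print(self_information_bits(s, {"0": 1/3, "1": 1/3, "2": 1/3}))  # about 7.9 bits: a '2' can also occur
print(self_information_bits(s, {"0": 0.9, "1": 0.1}))            # about 7.1 bits: binary but biased

Until the alphabet and the symbol probabilities are fixed, the question has no single answer.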
in an article on Data
Again, what criteria, i.e., what traits? What is the nested hierarchy? Define the levels and sets, please.
Zachriel:
No, you did not provide any criteria. You did not say what the nested hierarchy was and you sure as hell did NOT define the levels and sets.
Why do you insist on lying all the time?
To Zachriel (at TSZ):
I really have difficulties in understanding what you mean. Let’s see:
You have defined dFSCI as follows: dFSCI: A boolean indicator of the existence of functional complexity of more than 150 bits for which no deterministic explanation is known.
OK
You have also stated that the mechanisms of the modern synthesis are a “deterministic explanation” under your definitions.
They are an RV + NS explanation (where NS is the deterministic part of the algorithm), one that cannot explain what it purports to explain.
You therefore cannot claim that your #5 is an empirical observation when there is no possible empirical observation that could lead to a conclusion that dFSCI is present in an artifact known to have evolved.
I am afraid that here I have lost you completely. Let’s see again my #5:
“#5) Any object whose origin is known that exhibits dFSCI is designed (without exception).”
And my explanation of #5:
“Just to be more clear. We define a property (dFSCI) and how to assess it in objects.
Then we assess that property blindly in any number of strings of which we may know the true origin. For instance, we mix any number of meaningful strings designed by humans with any number of randomly generated strings, all of them long enough to be beyond the threshold of 500 bits. Then we ask independent observers to tell us which are the meaningful strings designed by humans and which are the strings that do not allow a design inference.
IOWs we are empirically testing the specificity of the dFSCI property when it is used to infer design in a set of objects where the true origin can be known for certain.
It is an empirical test, and an empirical observation. Not “a conclusion”.”
What has that to do with what you say?
there is no possible empirical observation that could lead to a conclusion that dFSCI is present in an artifact known to have evolved.
Again: we test dFSCI with a set of long enough strings. Some of them are designed and meaningful, some of them are generated randomly. We know the origin of each string (whether it was designed or randomly originated) because we have direct knowledge of how they were produced. Then we take some independent observer, who knows nothing about the origin of the strings, and ask him to infer design, or not, using the evaluation of dFSCI for those strings. He will recognize the designed strings, with 100% specificity. This is the very simple meaning of my #5: an empirical test where dFSCI can easily distinguish designed strings from non designed strings. Empirical test, nothing more.
If an artifact is known to have “evolved” (whatever it means) by an explicit deterministic mechanism that is already present in the system, we will conclude that it does not exhibit dFSCI (in that system), and that there is no reason to infer design for it in that system.
So, let’s take a protein domain in the system which already includes NS (after OOL). We want to decide if we can infer design for it, or not.
So, we ask two questions:
a) Is the string functionally complex in itself, beyond 150 bits (or whatever threshold we have chosen)? Let’s say it is.
b) Is any necessity mechanism explicitly known that can explain the emergence of that string in that system? IOWs, can any algorithm already present in the system lower the improbability of the emergence of that string?
Now, the only deterministic mechanism proposed for biological systems and biological information is NS. So, our question becomes: can NS explicitly intervene to explain this string?
If we know functional, naturally selectable intermediates for that string, then our answer is yes, and we have to re-evaluate dFSI for the RV parts of the process. For instance, as I have explained in detail elsewhere, a “perfect” intermediate, fully functional and fully selectable, can significantly lower the improbability of the emergence of the final string. According to our new calculations, we will decide if a design inference is still warranted.
But if nothing of that kind is known, we will assume the total dFSI of the string as unexplained, and infer design.
The concept is very simple: dFSCI that cannot be explained by any known mechanism warrants a design inference. Why? Because dFSCI is a very good indicator of design (100% specificity in empirical tests). The clause about possible necessity explanations is only a safeguard against cases of apparent functionality that are indeed the result of some known mechanism.
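As a reading aid only, the two-question procedure described above can be sketched in a few lines of Python; the threshold value and the two inputs are placeholders of mine, not anything gpuccio has published as code:

THRESHOLD_BITS = 150   # the functional-complexity threshold used in this discussion

def infer_design(functional_bits, known_necessity_mechanism):
    # question (a): is the string functionally complex beyond the threshold?
    if functional_bits <= THRESHOLD_BITS:
        return False            # not complex enough: RV remains a live explanation
    # question (b): is a necessity mechanism explicitly known for it in this system?
    if known_necessity_mechanism:
        return False            # the apparent complexity is explained deterministically
    return True                 # high functional complexity, no known mechanism: infer design

# e.g. a hypothetical string assessed at 300 functional bits, no selectable intermediates known
print(infer_design(300, known_necessity_mechanism=False))   # True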
The lack of dFSCI is a direct consequence of your definition, nothing else.
This is simple folly. My definition has the purpose of distinguishing designed things from non designed things. And it succeeds empirically in that task. That is not a consequence of the definition. It could certainly fail in its task. For instance, if the following phrase:
“Shannon was born in Petoskey, Michigan. His father, Claude, Sr. (1862 – 1934), a descendant of early settlers of New Jersey, was a self-made businessman, and for a while, a Judge of Probate. Shannon’s mother, Mabel Wolf Shannon (1890 – 1945), the daughter of German immigrants, was a language teacher, and for a number of years she was the principal of Gaylord High School. Most of the first 16 years of Shannon’s life were spent in Gaylord, Michigan, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things. His best subjects were science and mathematics, and at home he constructed such devices as models of planes, a radio-controlled model boat and a wireless telegraph system to a friend’s house a half-mile away. While growing up, he also worked as a messenger for the Western Union company.”
were presented to our observer, he would certainly infer design for it using the dFSCI procedure. Would he be right? That is not a logical necessity. If that phrase had been randomly generated, then you would have a case of a false positive. That is perfectly possible. It will never be empirically observed, but it is possible.
So, when I say that dFSCI has 100% specificity, I am stating an empirical fact derived from observation, and not a logical consequence of my definition.
A more interesting question is whether or not evolution can generate functional complexity, by your definition, in excess of 150 bits. If it can, as numerous examples in these threads suggest, then whether you call it dFSCI or not is immaterial — evolution will have been shown to be a sufficient explanation for our actual empirical observations.
Do you really believe what you are saying? Nothing in this thread suggests anything like that. You are purposefully using ambiguous words such as “evolution” to disguise your lack of arguments.
So I ask: what, in this thread or elsewhere, “suggests” that RV + true NS can generate functional complexity in excess of 150 bits?
No, because you have defined dFSCI as something without a known deterministic explanation, hence any object with dFSCI whose origin is known can’t have a deterministic explanation — by definition.
Again! No. The origin can be known and yet no deterministic explanation could be there. If, as already said, a phrase like the one I quoted above were generated in a random system, we would know the origin (we know that it was generated in that system, and that no operator wrote it), and yet we would have no deterministic explanation for it. In that case, and only in that case, we would attribute dFSCI (correctly) to the phrase, and we would (incorrectly) infer design (a false positive). Is that so difficult to understand, even for intelligent people like you?
Shameful? Seriously?!
Absolutely!
And natural selection can often select for very specific functions, just like in Szostak’s experiment.
It was intelligent engineering, in Szostak’s experiment.
A simple example is the evolution of antibiotic resistance which is often seen in natural settings.
A typical case of microevolution for minimal loss of information. That’s exactly what NS can do. And we all know that. Do you really believe that this is an argument?
Only as a thought-experiment is it possible to count them.
Mine was exactly that: a thought experiment. Its purpose was to show that, in principle, descriptive information can be defined as function and measured. Do you agree with that?
Mike Elzinga:
No Mike, what is abundantly clear is that you are a liar- and perhaps senile.
With the clear consequence that there is no “meaningless” information. Something I have been saying for a long time.
With Shannon there can be meaningless information. That is why I define CSI as Shannon information that has meaning/function and is also complex (see NFL).
Mung @338:
We have to distinguish the measurement from the thing measured. If what you are saying is that once we run an analysis on a string and come up with a measurement of the information carrying capacity of the string, then that measurement itself is “true” information, then sure, that measurement is new information we now have. And what does that information tell us? Well, it gives us (within certain parameters) an idea of the information carrying capacity of the string. It tells us nothing about the underlying information content of the string itself. That cannot be measured by Shannon methodology.
Look at it this way. I have a book on my desk. Now I can measure the book, its size, weight, number of pages, even number of words per page. Wonderful. Now I have described certain aspects of the book, and, yes, that description is real information. But it tells us precisely nothing about the quantity, quality, functionality, etc. of the underlying information contained in the book.
The problem is that so many people are trying to use a Shannon calculation in an attempt to ascertain something about the quantity or quality of the underlying information contained in the string. Beyond a simple statistical description of the string’s carrying capacity, it is impossible. Shannon information is useless for this. It is not a question of trying harder or getting more clever with our calculations or defining ourselves into rhetorical knots; it simply can’t be done.
It is very unfortunate that the term “Shannon Information” has become current use. A much less confusing and more accurate term would be “Shannon Measurement” or “Shannon Quotient” or something like that. Then maybe people wouldn’t be so confused into thinking that they can use a descriptive measurement of a string’s carrying capacity (Shannon Information) as a surrogate for the underlying content (information).
This is why I feel it is critical in these discussions to keep in mind that the Shannon measurement (so-called “information”) is not really about the information in a string at all. It is simply a very basic first-order description of the string. Running a Shannon calculation on a digital string to determine how much information it contains is equivalent to weighing a book on a scale to determine how much information the book contains.
—–
(BTW, this is a somewhat different issue, but slightly similar to what we were discussing elsewhere — information “contained” in objects/events vs. information created in describing the objects/events.)
Eric:
I agree. The meaning of Shannon Information is independent of the meaning of the message being analyzed. It does not follow that Shannon Information is meaningless.
Shannon Information is information about something else. It is still information. That’s my point.
Most likely because they do not understand the nature of Shannon’s measure of the amount of information.
But really I think it’s worse than that, or maybe it is the same thing and you are seeing it from a perspective I haven’t grasped yet. They think they can generate Shannon Information. Why do they think that?
And then they think that if they can just generate enough Shannon Information, it qualifies as CSI. lol
I’ve seen critics here argue that because they can measure “the information content” of a meaningless string of characters using “Shannon Information” that they have demonstrated that information can be without meaning. It’s true!
I think we are in essential agreement on all the points in your post. Thanks for your comments.
Mike Elzinga on October 12, 2012 at 7:02 pm said:
I have a bit string 8 bits in length (possible values are ‘0’ or ‘1’). One bit is set to a 1 by some method of random selection, all others are set to a 0.
Your mission, should you choose to accept it, is to discover which bit is set to a ‘1’ by asking questions. In response to each question I will respond with a yes or a no answer.
Are you confused about what the information you will be getting is about? What is the total amount of information you will need to receive in order to ascertain the location of the ‘1’?
If you choose the following strategy, how many questions, on average, will you need to ask to discover the location of the bit which is set to a ‘1’?
Is bit 0 set to ‘1’?
Is bit 1 set to ‘1’?
Is bit 2 set to ‘1’?
…
Can you calculate the amount of information per query?
Can you think of a better strategy?
If you want to maximize the amount of information obtained by each question consider the following:
log2 8
Does that describe an upper limit upon the amount of information you can get per query?
Can we get the total amount of information without adding together the amount of information from each query? You think maybe using log base 2 has something to do with our ability to add each amount to come up with a total amount?
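For anyone who wants to check the arithmetic in Elzinga’s quiz, here is a short Python sketch of the two strategies (the simulation set-up is mine, not his):

import random

def linear_questions(target, n_bits=8):
    # ask "is bit 0 set?", "is bit 1 set?", ... and count questions until the "yes"
    for i in range(n_bits):
        if i == target:
            return i + 1
    return n_bits

def halving_questions(target, n_bits=8):
    # each question asks whether the '1' lies in the lower half of the remaining range
    lo, hi, asked = 0, n_bits, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        asked += 1
        if target < mid:
            hi = mid
        else:
            lo = mid
    return asked

targets = [random.randrange(8) for _ in range(10000)]
print(sum(linear_questions(t) for t in targets) / len(targets))   # about 4.5 questions on average
print(sum(halving_questions(t) for t in targets) / len(targets))  # always 3 = log2(8)

Each yes/no answer carries at most one bit, and log2(8) = 3 bits are needed to locate the ‘1’, so no strategy can beat an average of three questions.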
Seriously. What a dolt.
Zachriel:
Please show us how his definition of dFSCI includes design in the definition.
To Zachriel and onlooker (at TSZ):
I realize only now that, by mistake, in my answer #341 to Zachriel I conflated comments made by Zachriel with comments made by onlooker. I humbly apologize to both for that.
Zachriel: And natural selection can often select for very specific functions.
Mung: How very teleological.
Don’t blame your sloppy use of language on language.
So is it the environment that is doing the selecting for or natural selection?
Neither the environment nor natural selection selects for any specific function.
Mung @346:
Thanks for your thoughts. I know I’m preaching to the choir on the substance, but perhaps you’ll indulge me a couple of clarifications on the terminology:
Agreed. Once I take a measurement of a string I now have information about the particular characteristic of the string that I measured. Yet that information is separate from the underlying information in the string and teaches us essentially nothing about the information in the string. I think we agree on this.
We need to focus on this for a moment. A key point is that Shannon information isn’t even a “measure of the amount of information.” This is part of where they are getting off track. It is only a measure of the information carrying capacity. Again, I can weigh a book or even count the number of words in the book, but in doing so I have not measured the amount of information. At most what I have done is determine the potential amount of information the medium can contain. I have not measured the actual amount of information; and I certainly haven’t ascertained anything meaningful about the content of the information.
—–
I don’t doubt someone has a simple computer program that can generate “Shannon information,” because when we look under the hood we find that it isn’t really generating any information at all. Think of it this way: We can easily take a string and make random changes to it and end up with various strings that have more or less information carrying capacity. However, and this is the key, in doing so we haven’t generated any information. All we have done is generate random pipes. Then, as a separate exercise after the fact we measure the pipes and, lo and behold, some pipes are bigger than others (surprise, surprise). There is no information in the pipes. The so-called “Shannon Information” that we think we have generated is not information in the pipe at all; it is simply an after-the-fact measurement of the size of the pipes.
Again, people have to understand that they are not measuring the amount of information in the string. Shannon calculations cannot and never will be able to do that — it is a fool’s errand.
I haven’t been following the other thread at all (or even this one too closely), so I’m not exactly sure what the TSZ folks are claiming. If Lizzie’s or anyone else’s program generates random strings, some of which have greater information carrying capacity (i.e., have a higher Shannon measurement) than other strings, big deal. There are two important takeaways: (i) it is an exercise in irrelevance, (ii) if she (they) think it has anything to do with CSI, then they have no idea what they are talking about.
keiths:
Yes, I see you do the same thing as Lizzie. You don’t actually calculate CSI.
// program stops when this fitness threshold is exceeded
#define FITNESS_THRESHOLD 1.0e60
while (genome_array[0].fitness < FITNESS_THRESHOLD) …
I've only taken a brief look, but it looks like you dispensed with any phenotype. Not saying that's bad. I didn't feel a need to add that extra layer myself.
I don't suppose your fitness function smuggles in any information either. Doesn't it help favor strings with a higher product?
Mung- helloooo- the threshold holds the PRIZE and getting the prize means you have CSI!
Don’t you know nuthin’? 🙂
“There are reasons certain people are no longer allowed to post here.
tonto:
Nope, not even close. Keep trying though you may get it yet.
R0bb, Jerad, Mung-
Are we clear what is meant by compressibility wrt CSI?
WRT being algorithmically compressible and the works of Shakespeare: their being incompressible would mean we cannot write a short algorithm to produce them.
For 500 1s, we could do so.
My reference is the very paper that has been the focus of the TSZ ilk- pages 9-11
oh. you mean they don’t have to do any calculation?
Calculation is only required when they want us to do it?
Zachriel:
That’s false. Given how long you’ve been debating against ID on the net you have to know this is wrong. You’re just another liar who has found a comfortable home at TSZ.
gpuccio,
You need to start resorting to copy and paste responses, since they just keep repeating the same old canards. What an intellectually bankrupt group.
Mike Elzinga:
Emergence. Nice to know.
So maybe species don’t evolve at all, maybe new species just “emerge”. I wonder how predictable that is.
Joe (355),
Are you sure pages 9-11 are the section you want? I could only find one place where compressibility was discussed, and that was on page 12:
Dr Dembski seems to be saying that the non-random sequences are algorithmically compressible. He’s not talking about an algorithm to produce such sequences.
Mung:
I am, indeed, very disappointed. When darwinists resort to the pseudo-argument of dFSCI circularity, that really means that they are desperate.
Now, I must say that I fully expected such an attitude from some of them (just not to make names, Keiths), given their usual level of intellectual correctness (I was going to say honesty, but let’s keep it civil, at least this time).
But I really did not expect it from others (just not to make names, Zachriel), who are usually intelligent and correct in their discussions.
If even Zachriel can’t see that there is no circularity in the dFSCI procedure, after I have given him explicit examples of how it is empirically capable of distinguishing designed strings from non designed strings with 100% specificity, then there is really no hope. There must really be something wrong in how these people reason.
I knew that cognitive bias is strong and powerful in humans, but I really believed that it can be partially controlled in intelligent and goodwilled people. Evidently, that is not always the case.
Keiths (at TSZ):
Thank you for giving me a precious example of your cognitive bias:
I’m not aware of any argument that succeeds in showing that unguided evolution cannot generate biological complexity.
You see, the correct statement is:
“I’m not aware of any argument that succeeds in showing that unguided evolution can generate biological complexity.”
But obviously, for you ideologically committed guys, a non design non explanation is anyway the default (indeed, the only admissible truth).
To Zachriel (at TSZ):
It’s always getting worse:
Heh. You couldn’t have stated the God of the Gaps more explicitly. Per your own statements, there are some sequences with “functional complexity”, and some of these sequences have known causes! But you still conclude that those that don’t must be designed.
Complete nonsense.
I don’t understand your reference to known causes. Either you misunderstand, or you don’t even read with a minimal attention what I write.
The “known causes” have nothing to do with the assessment of dFSCI. The requisites to assess dFSCI are two (as I have said millions of times):
a) High functional information in the string (excludes RV as an explanation)
b) No known necessity mechanism that can explain the string (excludes necessity explanation)
The “known causes” enter the scene only when we want to test the procedure against real examples. So, someone takes n strings of sufficient length whose origin he knows because he was responsible for their collection. Let’s say that 5 strings are taken from books, of which we know the author. 5 strings are generated by a random generator.
Then another person, who does not know the origin of the 10 strings, evaluates dFSCI in them. He will correctly attribute dFSCI to the first 5, and infer design. Take for example the paragraph about Shannon’s biography from Wikipedia. The questions are:
a) Is the dFSI of the string high? Answer: Yes.
b) Do we know a necessity mechanism that can output that paragraph? Answer: No.
So, we infer that the piece was written by a designer. And we are right. The first person, who collected the strings, knows that it was written by someone, and can confirm that the inference is correct.
For the 5 randomly generated strings, I will not be able to recognize any function (meaning) in them, and I will not infer design. Correctly. The first person will confirm that they were generated randomly, without any intelligent design.
So, where does the necessity mechanism come into action?
Suppose that one of the strings is a series of aaaaaa, of the same length as the Shannon biography. Will I infer design? No. Because such a string could be generated by a mechanism, such as the tossing of a coin which has the “a” symbol on both sides. Even if I did consider the string specified (for example, because it is compressible), I would not consider it complex (for the same reason: because it is highly compressible, its Kolmogorov complexity is very low). Even if the string was designed, that would be a false negative.
Three different kinds of strings. Three different empirical assessments of dFSCI. Three independent confirmations from the person who knows the origin of the strings. No false positives. Maybe a false negative.
100% specificity.
It’s simple, but you will probably not understand, or pretend that you don’t understand. I really don’t know, I have lost any hope to have a constructive discussion with you all.
To Zachriel (at TSZ):
Actually, that’s precisely how we read gpuccio’s statements. He defines functional complexity, excludes those with known causes, then concludes the remaining sequences are designed. Keiths summarized it above.
Your “reading” is terrible, and completely wrong.
To Zachriel (at TSZ):
So evolutionary algorithms can generate dFSCI, per your definition #2-4.
Sure, why not? Dawkins’ Weasel (can we consider it an EA?) can generate the Weasel phrase. Not enough? Well, I suppose that a “big weasel EA”, which has the whole text of Hamlet, could generate the whole text of Hamlet through RV and IS in a reasonable time. That would certainly be dFSCI from an EA. What a pity that the algorithm would be much more complex than the solution! OK, it could also print the text directly, but then it would probably no longer be an EA, just an algorithm, and where is the fun?
Your software can generate words. That’s fun again. What a pity that it has to have a whole dictionary inside to do that! But that’s not a problem, let’s just call the dictionary “a landscape”, and not an oracle that is part of the algorithm, and the fun starts again.
So yes, an EA can certainly generate dFSCI. I have offered myself an example, maybe more interesting, of an algorithm that can generate a specified string more complex than the algorithm itself: the algorithm that computes the first “n” decimal digits of pi, for values of “n” big enough. In that case, and only in that case, the dFSI of the solution would be the dFSCI of the algorithm itself. The very big limitation here is that such an algorithm can only increase the FSI for one given function: as “n” grows, the dFSI in the string grows too, but the specification remains the same. No algorithm, of any kind, can ever generate dFSCI for a function about which it has no direct or indirect information.
So, to sum up, if I see a copy of Hamlet, I will infer design. The fact that an EA that knows the text of Hamlet can output it is of no relevance. The text of Hamlet in the algorithm would be designed just the same, and its dFSI would be the same. It’s the same reason why copying a string of DNA is not creating new dFSCI. But I am afraid that you guys cannot even understand that simple concept.
To Zachriel:
Just to avoid silly criticisms, let’s clarify that when I say:
“as “n” grows, the dFSI in the string grows too, but the specification remains the same.”
What I mean is that the apparent dFSI in the string grows with its length. But the Kolmogorov complexity, which is in effect the true dFSI, remains the same (the complexity of the algorithm, if lower than the apparent complexity of the string).
Just to avoid silly criticisms.
Jerad:
It starts on page 9, Jerad. Pages 10 and 11 cover exactly what I am talking about.
Joe,
Well, I looked through pages 9 – 11 . . . perhaps you could be more specific. I found compression only mentioned twice in that section, on page 11:
and on page 12:
And both those quotes assert that nonrandom sequences are compressible and random ones are not. Is that your view?
Zachriel relies on the dictionary to define natural selection:
Natural selection is a result and does not result in the best of anything.
Whatever is good enough survives and reproduces.
Jerad,
Do you see the short descriptions for the strings on pages 10 and 11?
Joe (369)
Yup, after which Dr Dembski writes:
In other words, random sequences are less compressible (if at all) compared to non-random sequences. Just like in the other two quotes.
gpuccio:
Yes, it can be controlled, among people of good will.
Maybe you just caught them on a bad day.
To Zachriel (at TSZ):
Previously, you said “no deterministic explanation for the string is known”. Now you use “necessity mechanism”. We suggested there was confusion with your terminology.
Why? “Deterministic explanation” and “necessity mechanism” mean the same thing for me. What is the problem?
Is evolution a necessity mechanism?
“Evolution”, as I have said many times, does not mean anything if it is not better detailed.
If you mean the neo darwinian explanation for biological information, it is obviously an explanation based on RV + NS acting sequentially. The RV part is a probabilistic explanation of the origin of new arrangements; the NS part is a deterministic effect that intervenes after RV, modifying the scenario through differential reproduction. That’s why, as I have written so many times, and as I have also modeled, the effects of RV and the effects of NS must be considered separately for any proposed neo darwinist scenario. But the effects of NS can be taken into account only if and when NS is demonstrated (that is, when naturally selectable intermediates are shown to exist).
I really can’t see where the confusion is.
You seem to imply so when you exclude protein relatives from the set of dFSCI.
I am not sure what you mean. A transition from a protein to another similar one, that implies only a few bits of modification, is not a transition that exhibits dFSCI, because it is not complex enough. It can be considered a microevolutionary event, of low functional complexity. Is that what you mean?
Just a friendly piece of advice: if you think there is “confusion” in my terminology, you could just ask for clarification, instead of attacking me for things I have never said. I am always willing to clarify my thought. I believe that if you read what I write with a minimum of attention and respect, you will probably understand what I mean.
I am always respectful of different motivated opinions, like yours about the possibility of traversing the protein landscape, but I definitely don’t like having to answer repeated accusations of “circularity” which have absolutely no logical consistency or justification, if not in misunderstanding or (that’s not for you, I hope) bad faith.
To Zachriel (at TSZ):
Word Mutagenation can’t address biological evolution specifically, but it can address general statements about evolutionary processes, such as “isolated islands of function in vast seas of non function”.
The fact remains that Word Mutagenation includes a dictionary as an oracle, and the dictionary is part of the algorithm, and should be included in the computation of its complexity.
Your point is obviously that the same role which the dictionary plays in your software is performed by NS, as an estimator of protein function, in the biological context. I understand that point, but I also understand that it is not based on any evidence. NS is not a library of sequences, while the dictionary is exactly that. You may believe that functional sequences that are naturally selectable are so connected that NS can act as a dictionary acts for words. I can find no support for such a strange assumption in all that we know about proteins, but I am happy to accept that point as “controversial”.
But if you didn’t know the evolutionary origin of nylonase, you would conclude design, a false positive. Worse, you would know it with certainty!
No. First of all, I never “know things with certainty” in science, and I believe that this should be true of all serious scientists.
I would definitely make a design inference for nylonase, and I would be right. The protein, indeed, does exhibit dFSCI. The fact that it is derived from penicillinase does not change the fact. The penicillinase-nylonase group of proteins clearly exhibits dFSCI. Natural history can explain that penicillinase is the older form, and that nylonase is a recent variation, implying only one or two mutations at the active esterase site, with a shift in the affinity for specific substrates of the same kind.
That’s why I always speak about “basic protein domains”, and not about a single protein. Similarly, Durston computes functional information for protein families. Similarly, Axe is interested in the evolution of protein domains.
I have always admitted that if you can show a real ladder of intermediates that can build up protein domains through microevolutionary events, you win. But you have to do exactly that, not just invoke that “it could be possible in principle to do that, but unfortunately any trace of the intermediates has been cancelled, according to our theory, and unfortunately it is impossible to find those intermediates in the lab, according to our theory, but our theory is so beautiful, why should we give evidence for it?”
Frankly, I have no respect for theories like that.
Moreover, if I remember well, it was you who, a short time ago, were so sure, with Ohno, that nylonase had originated as the result of a sudden frameshift mutation. Maybe you knew it with certainty! 🙂
“I really can’t see where the confusion is.”
I can. They don’t understand ID. They don’t understand their own theory of evolution. So of course they can’t understand what you’re saying. They lack the necessary mental concepts and categories.
gpuccio:
Does it seem to you like they are doing everything they can to avoid addressing the missing functional intermediates?
Surely they must have once existed? Do they have an explanation for why they were lost? At least Darwin could appeal to a spotty fossil record to “explain” the absence of intermediates.
Jerad:
And the works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description
I would love to hear in what way saying nylonase is the product of design is a false positive.
Zachriel:
But it canNOT address general statements about blind and undirected chemical process. And that is all that matters.
Zachriel:
Especially knowing the evolutionary origin of nylonase I say it evolved by design.
Zachriel on October 13, 2012 at 2:20 pm said:
I see they are still confused about fitness landscapes over at TSZ. And Joe Felsenstein thinks it’s irrelevant.
You’re confused. Lateral is differences in genomes. Vertical is rates of reproduction.
And that’s why it’s neither a model of evolution nor a model of any evolutionary process.
You could. But then fitness would take on a different meaning. So, you’re equivocating.
You seem to be confusing the thing being modeled with the model.
No, it can’t.
Zachriel finally gets something right:
Then why do you do it constantly? If it weren’t for your blatant misrepresentations, handwaving would be all you have.
So apparently some members over at TSZ prefer “fitness wells.” One has to wonder why.
For those following along, the population in a GA is under constant selection.
Fitness
Joe (376):
I guess you’ll have to argue with Dr Dembski on that since he clearly states at least three times in the paper you referenced that non-random sequences are more compressible than random ones.
Unless you think that the works of Shakespeare are random sequences . . .
Mung: “For those following along, the population in a GA is under constant selection.”
Zachriel:
Yes, yes. Of course they can. They can include pink elephants for all I care. But it does not follow that they actually do.
Mung: For those following along, the population in a GA isn’t made up of pink elephants.
Zachriel: That’s not correct. Genetic algorithms can include pink elephants.
sigh
I could have chosen better wording. Lizzie’s program. keiths’ program. Probably even in your Word Mutagenation program. Constant selection.
You didn’t put forth an argument, you put forth an assertion. I replied in kind. Apparently handwaving is good enough if you’re the one doing it.
Zachriel:
Wikipedia:
Zachriel:
Well, let’s just throw out all of theoretical population genetics then.
To Zachriel (at TSZ):
You ask, again:
Which emphasizes that you are excluding known evolutionary transitions per #4 of your definition. Is that correct? Is your “deterministic explanation” dichotomous with design?
I am not sure what your problem is. I have said that the neo darwinian mechanism is mixed, RV + NS. The functional complexity of a string, or of a transition, limits what RV can obtain. If evolutionary transitions that include a NS deterministic effect are documented, we have to take them into account. They do not exclude a design inference if there are still transitions that depend exclusively on RV and are beyond the threshold.
If a string can be entirely explained by a necessity mechanism already included in the system, then no dFSI can be attributed to it. But that is never the case with RV+NS, because the new arrangements are always generated by RV, and NS can only act on what has already been generated.
Therefore, in any “evolutionary transition”, there will always be an RV part, or parts, that must be evaluated in terms of dFSI.
Let’s take the case of nylonase. We can split the evolution of nylonase into two separate steps:
a) The emergence of the penicillinase structure, which could be identified with the emergence of the beta-lactamase/transpeptidase-like fold/superfamily.
b) The recent emergence of nylonase from penicillinase.
Assuming that b) implies one or two mutations as its RV part, and that the variant was naturally selectable because of its ability to degrade nylon, we can say that the second transition has very low dFSI, and does not warrant a design inference. It is probably a microevolutionary event, compatible with pure RV + NS, even if other alternatives (for instance, active adaptation) could be considered.
For the emergence of the penicillinase structure, instead, a design inference is warranted. Indeed, the structure is extremely complex (an E. coli penicillinase is almost 300 AAs long), and no credible evolutionary path with selectable intermediates is available.
So, I hope it is clear that there is nothing “dichotomous” in my definition. All my definitions are empirical, and not purely logical.
The idea is: we need an explanation for the functional complexity we observe. Both RV and deterministic effects such as NS can contribute to an explanation. While functional complexity is empirically a marker of design, we can still accept that some functional complexity may emerge from RV or from the interaction of RV and NS. But we have to verify what these things can do, and what they cannot do.
For pure RV, the limit is essentially probabilistic: RV alone cannot achieve extremely improbable functional results. The evaluation of this limit relies essentially on the calculation of the dFSI of the observed string.
For NS, we must have a real scenario, with real proposals that can be analyzed. Then, we can integrate the possible deterministic effects of the proposed, realistic scenario on the RV components of the event, and calculate the final probability of the whole explanation (IOWs, calculate how the deterministic effect of NS changes the probabilistic scenario due to pure RV).
I have given an example of how that can be done here:
http://www.uncommondescent.com.....selection/
starting more or less at post 62 and going on to the end (especially the last posts).
Jerad:
English text is not random.
Over in the other thread CentralScrutinizer posted results from a simple Huffman encoding. In your opinion, is this the same sort of algorithmic complexity Dembski has in mind in his paper? Is that what Dembski means by algorithmically compressible?
olegt:
Can’t say I blame you if you’re not keeping up with the conversations over here.
The description of the program is the problem. We need a description of the pattern. Describing the algorithm that produced the pattern is not a description of the pattern.
Zachriel on October 13, 2012 at 2:34 am said:
Information about what?
If I take a 504 bit string and “randomize” it, I’ve generated Shannon Information?
How much?
How you read his statement doesn’t turn your statement from false to true.
Zachriel:
Ah, progress! Post some examples (fitness values) from your program. Here are some examples from OMTWO:
http://complexspecifiedinformation.appspot.com/
gpuccio:
Seriously, I think folks over there at TSZ confuse the pattern with the process.
NS cannot, even in principle, explain the origin of new traits.
At best, it can explain why they persisted and/or spread through the population.
In the end they are left with, it just happened, that’s all.
I still want to know, how does the nylon make it through the cell membrane?
It’s all they have.
NS is not a creator. At best it’s a spreader.
RV has to throw up something functional for NS to even take notice.
Say that RV tosses up some functional selectable element _A_.
How does the existence of _A_ change the probability that RV will throw up another functional element _B_?
If it doesn’t, then aren’t the probabilities independent and therefore multiplicative?
And the works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description
Jerad:
I am pretty sure I just said that:
And the works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description
Earth to toronto- Lizzie’s example does not produce CSI. Not by Dembski’s definition and definitely not by any definition I have read from an ID proponent.
You are confused.
So, as an example of someone over at TSZ who thinks mere ‘compressibility’ is enough to identify a specification I offer the following:
madbat089 on March 17, 2012 at 4:51 pm said:
This person also argues that Lizzie’s program follows the exact same logic.
Forget for now whether or not Lizzie’s program follows the same logic. Is that even what Dembski says?
Dembski:
toronto:
All that is true. However Lizzie did not generate dFSCI- there isn’t any function, no meaning, nothing.
Zachriel:
Then what makes it an evolutionary algorithm?
Doubtful. And you still haven’t demonstrated any understanding of nested hierarchies
Patrick (aka MathGrrl) on March 18, 2012 at 12:41 am said:
For certain we appreciate how intelligent tweaking can lead to results not otherwise simply achievable by random changes.
I just love how they brag about how their intelligent actions can lead to unguided results of low probability.
I’ll be looking at his/her code to see if I can find where he/she calculates the amount of CSI generated.
The web page has a title: Evolving CSI
haha.
Of course you don’t ask. You know where it came from.
That’s why we think it’s trivial. doh.
Patrick: “Run the GA engine against the PROBLEM until the fitness is maximized.”
heh
No design here. Move along.
Patrick: “Return a fitness comparator function that takes two genomes and returns T if the first is more fit according to the characteristics of the PROBLEM.”
heh.
No design here. Move along.
Patrick: “Determine the number of bits required for a genome to solve the specified coin product problem.”
heh.
No design here. Move along.
Imagine a huckster who only pretends to toss a fair coin 500 times each run.
“All that is true. However Lizzie did not generate dFSCI- there isn’t any function, no meaning, nothing.”
Toronto:
Umm that is not functionality…
And the works of Shakespeare would appear, to any algorithm, to be random, as they haz no short and neat description
Zachriel:
You are wrong.
Then please write an algorithm that can generate the works of Shakespeare. Oh, shut up, because you are talking about the wrong type of compression.
And you still haven’t demonstrated any understanding of nested hierarchies
Zachriel:
That’s you, in a nutshell-> afraid to define your sets and very afraid to define your nested hierarchy. OTOH I have presented you with examples in which everything was well defined.
Mung @389 re: Zachriel:
Again, this is the key (reiterating #351): a Shannon calculation does not measure the amount of information in a string. It is simply a statistical measure of potential carrying capacity (or, on the other side of the coin, if we have a pre-existing string of information, compressibility (i.e., how much capacity is required for a given string)).
People can randomize all they want and can no doubt come up with some increasing amount of pipeline capacity (based on some “fitness” function) and it is entirely irrelevant to the generation of CSI. The whole ‘GA-generates-Shannon-Information’ discussion as it relates to CSI is a red herring, a rabbit hole, a dead end, a distraction, an irrelevancy.
Eric:
Sure it does! People are just confused about what that information is about. They are confused about the meaning of Shannon Information.
Zachriel:
I missed it. Where was it posted?
A no selection model would not favor the preservation of any particular trait. Agreed?
So we can toss the dictionary and re-run your program?
No, you didn’t say that.
How close together are these separate peaks?
Why isn’t the population evolving together?
Show us the runs from your program along with the mean fitness.
Joe:
Toronto:
sigh. and just when I was starting to like you.
Lizzie’s program is written in MatLab. There is no #define FITNESS_THRESHOLD 1.0e60.
Here is Lizzie’s code:
while MaxProducts<1.00e+58
Zachriel on October 14, 2012 at 2:32 am said:
Digital communication was taking place long before Shannon.
I’m asking you.
I have a randomly generated string. I ‘randomize’ my randomly generated string. According to you, I’ve generated “Shannon Information.”
How and why?
Mung: If I take a 504 bit string and “randomize” it, I’ve generated Shannon Information?
Zachriel:
No. I don’t.
Say my string has a method that calculates the amount of Shannon Information:
v1 = str.si
puts v1
Now I “randomize” it:
str.randomize!
Now I recalculate the amount of Shannon Information:
v2 = str.si
puts v2
You say that the second value will always be greater than the first. Is that what you are saying?
__________
nb: Real codes have some degree of redundancy, so they do not have the calculated maximum of capacity that is worked out for a flat random distribution. That ideal value is irrelevant to the real world absent a demonstration of a code that conveys meaningful, specifically functional, linguistic coded messages and has that distribution. KF
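To see what a frequency-based Shannon measure does and does not capture here, a small Python sketch (treating “randomize” as “replace each character with a uniformly drawn symbol”, which is my assumption; the exchange above never pins it down):

import math, random, string
from collections import Counter

def entropy_per_symbol(s):
    # Shannon entropy in bits per symbol, estimated from the observed symbol frequencies
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

text = ("to be or not to be that is the question whether tis nobler "
        "in the mind to suffer the slings and arrows of outrageous fortune")

alphabet = string.ascii_lowercase + " "
randomized = "".join(random.choice(alphabet) for _ in text)   # the assumed 'randomize' step

print(entropy_per_symbol(text))        # around 4 bits/symbol for English-like text
print(entropy_per_symbol(randomized))  # approaches log2(27), about 4.75 bits/symbol, for longer samples
print(math.log2(len(alphabet)))        # the capacity ceiling for this alphabet

The number goes up as the symbol frequencies become more uniform, but it says nothing about whether the string means anything, which is exactly the point being argued here.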
Mung @406:
OK, sure, whatever. Just like counting the number of pages in a book measures the amount of information in the book. Right.
I’m a drive-by commenter on this thread anyway, so I’ll go back to lurking. Was just trying to inject some reason into the discussion. Sigh . . .
Well, carry on with the discussion. It is quite obvious that it is possible to randomly generate strings with more or less carrying capacity (SI). So good luck talking sense into anyone as long as you are willing to grant that they are measuring the amount of actual, real “information” that is contained in the string . . .
Zachriel:
Random sequences of what?
I propose a test:
Identify a text of Shakespeare. Randomly select n characters from that text. Randomly select a portion of contiguous text with length equal to n.
Compress each using the same compression algorithm.
Decompress and compare to the original to validate that the compression was lossless.
Run multiple times and store averages.
Display results
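A minimal Python sketch of that proposed test, assuming a local plain-text copy of a play (the filename hamlet.txt is just a placeholder) and using zlib for the lossless compression:

import random, zlib

def avg_compressed_size(samples):
    return sum(len(zlib.compress(s.encode("utf-8"))) for s in samples) / len(samples)

# hamlet.txt stands in for any plain-text Shakespeare file you have locally
with open("hamlet.txt", encoding="utf-8") as f:
    text = f.read()

n, runs = 2000, 20
scrambled, contiguous = [], []
for _ in range(runs):
    scrambled.append("".join(random.choice(text) for _ in range(n)))   # n characters drawn at random
    start = random.randrange(len(text) - n)
    contiguous.append(text[start:start + n])                           # n contiguous characters

# lossless check: decompress-and-compare round-trips exactly
assert all(zlib.decompress(zlib.compress(s.encode("utf-8"))).decode("utf-8") == s for s in contiguous)

print("randomly selected characters:", avg_compressed_size(scrambled))
print("contiguous text:             ", avg_compressed_size(contiguous))

The expectation, worth checking rather than asserting, is that the contiguous passage compresses further, because it has word- and phrase-level regularities beyond bare letter frequencies.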
Eric,
Please don’t assume your input isn’t appreciated, it is. I am truly interested in what you have to say.
No! You know better than this.
Counting the pages in a book has no relationship to information carrying capacity.
That’s a seriously flawed analogy.
Honestly, this makes no sense to me. What’s the encoding?
If the information source can only generate two symbols, 0 and 1, and either symbol can be generated with equal probability, 1/2, how does one string of length 504 contain more Shannon Information than any other string of length 504 from the same information source?
________
Real codes are not flat random in distribution of symbols, where meaningful, coded messages are conveyed. This implies that the info carrying bit bucket cannot in practice be filled to the brim. KF
Toronto:
How do you calculate dFSCI and what is the threshold you compare it to?
Toronto:
Help me out here. What’s in the bucket and why is whatever is in the bucket in the bucket?
So when the IDist finds out that the object really is designed, he takes it out of the bucket?
ok. that makes sense, sort of.
Stuff for which we have a reason to infer design?
Well, no. What’s in the bucket and why is whatever is in the bucket in the bucket?
Who cares? Design is objective.
The stuff still in the bucket, you mean?
Why is it in the bucket?
Please tell me you’re over 18. For some reason I feel like I’m beating up on children.
Toronto:
Assume that I know that a 504 bit string does not exhibit dFSCI merely because it is a 504 bit string.
When would such a string exhibit dFSCI?
…the information that results in…
Where is this information? In the string?
population size of 2? really? why?
Mung:
How does the existence of _A_ change the probability that RV will throw up another functional element _B_? If it doesn’t, then aren’t the probabilities independent and therefore multiplicative?
No, they are not. This is not an easy point, but it is important.
Probabilities are independent and multiplicative as long as the two events have to happen independently in the same individual or clone of the original population.
So, let’s say that in a population of 10^15 bacteria, and in a certain Time Span, event A has a probability of, say, 10^-9 (a complexity of about 30 bits), and so does event B. The total probability of having both A and B in any individual clone of the population is then multiplicative: 10^-18, about 60 bits.
But if A (or B), after one of them happens, expands to the whole population in a short time, through a deterministic effect like NS, then the scenario changes. The probabilistic resources for the second event are multiplied by 10^9.
I have wondered how such a scenario could be evaluated probabilistically, and I have offered what I believe is a good approximation to the problem in the posts linked many times. Making extreme assumptions in favour of the NS mechanism (a perfect intermediate, with perfect and quick expansion to the whole population), it uses the binomial distribution to compute the probability of having two events of similar probability in a certain time span.
The results show clearly that NS can indeed lower the probabilistic barrier, in a significant degree.
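As a toy numerical illustration only (this is not the binomial calculation from the linked posts, just the same qualitative point, using the illustrative figures from the comment above), a few lines of Python:

import math

def p_at_least_one(p_single, n_trials):
    # P(at least one success in n independent trials), computed safely for tiny p
    return -math.expm1(n_trials * math.log1p(-p_single))

N = 10 ** 15        # bacterial population over the time span (the figure used above)
pA = pB = 1e-9      # probability that a given lineage acquires variation A (or B)

# no selection: A and B must arise in the same lineage, so the per-lineage probability is pA*pB
without_ns = p_at_least_one(pA * pB, N)

# "perfect" intermediate: A arises somewhere, sweeps the whole population,
# then B only has to arise once anywhere (the extreme pro-NS assumption described above)
with_ns = p_at_least_one(pA, N) * p_at_least_one(pB, N)

print(without_ns)   # about 1e-3
print(with_ns)      # essentially 1

Under these extreme pro-NS assumptions the fixation of A changes the outcome dramatically, which is the sense in which NS can lower the probabilistic barrier.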
That’s why I have always admitted that NS can in principle help explain biological information. The problem is not: it can’t. The problem is: how much can it help?
The real reason why NS completely fails is that complex functions are not deconstructible into simpler intermediates, each of them naturally selectable. We have to stick to real reasons, and not to imagination.
NS can do really very little, because really very few new arrangements generated by RV are naturally selectable, and those that are are simply variations of the existing information and of the existing functions, and in no way constitute steps towards new, not yet existing functions. Indeed, all cases of NS observed are cases of microevolution, one or two bits, with function conserved or slightly changed.
The most classical examples of NS (antibiotic resistance, expansion of Hb S due to malaria) are indeed examples of protection from extreme environmental attacks by means of a minimal loss of existing information, as Behe explains very well (the “burning the bridges” argument). In those cases, not even a true new biochemical function is created, and the survival advantage is merely due to a loss of functions (or structures) that already existed.
None of that helps in generating new complex sequences for new biochemical functions that did not exist before. Therefore, NS is a myth where macroevolution is concerned.
Neo darwinists have been dreaming for decades that macroevolution is a sum of naturally selectable microevolutionary events. That is simply not true. They don’t find the intermediates they are looking for, not because they have been cancelled by their theory, but because they simply do not exist.
That’s also the reason why all neodarwinist arguments are made in terms of generic traits that would confer reproductive advantage. They hate to reason in terms of what I call “local functions”. A local function is the true biochemical function that makes a protein functional. The local function of an enzyme is to accelerate a biochemical reaction. That, in itself, has nothing to do with survival or reproduction. Darwinists never ask themselves: how did this local function come into existence? They reason in terms of abstractions, because in any other way their reasonings would appear for what they are: wishful thinking.
One of the best papers that IMO support the ID views is the famous “rugged landscape” paper. In a context extremely favourable to true NS (an existing function, altered artificially, that must be retrieved, and a viral setting) the authors conclude:
In practice, the maximum library size that can be prepared is about 10^13. Even with a huge library size, adaptive walking could increase the fitness, ~W, up to only 0.55. The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness. Such a huge search is impractical and implies that evolution of the wildtype phage must have involved not only random substitutions but also other mechanisms.
Darwinists should seriously reflect on this empirical evidence, before fantasizing about what true NS can really do.
F/N: It seems the objectors, again, need to bone up on the design inference explanatory filter (also cf here in the ID foundations series at UD, and nos. 29 & 30 in the weak argument correctives). They have made so many strawman caricatures that they are confusing themselves. In particular, design is only inferred on tested, empirically reliable signs, e.g. digitally coded functionally specific complex info such as text strings in this thread or strings in functional programs. The D/RNA strings that produce proteins are coded, are specifically functional, are known to come in deeply isolated fold domains in the potential AA chaining space, and are known to be complex, quite often well beyond any reasonable threshold for exhausting blind search resources. Step one: the relevant aspect is examined and, if it can be explained on observed mechanical necessity, is assigned to law; proteins are highly contingent. Step two: can chance-based statistical distributions explain it? No, as we are not drawing from the bulk, but from special, functional zones isolated to 1 in 10^60 or more of the space. The reasonable explanation, then, is design. KF
PS: GP, a decimal digit has 10 possibilities and can store up to 3.32 bits of info on avg. (And, that is a Shannon, info capacity metric.)
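A quick worked conversion (my own arithmetic) makes the scale of the correction concrete:

$$\log_2 10 = \frac{\ln 10}{\ln 2} \approx 3.32 \ \text{bits per decimal digit}, \qquad 10^{-9} = 2^{-9\log_2 10} \approx 2^{-29.9},$$

so an event of probability 10^-9 corresponds to roughly 30 bits, and 10^-18 to roughly 60 bits, rather than 9 and 18.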
Mung (387):
Joe (392):
Joe, if the works of Shakespeare are not random then, as Dr Dembski says, they are more compressible than random text strings because, to some extent, they are predictable. For example, in English ‘q’ is generally followed by a ‘u’. So that two-letter combination can be compressed. The paper I linked to some ways back discussed such a scheme. Common letter combinations and words can be compressed. N fct, jst lvng t th vwls s knd f cmprssn. Not a good one but a compression nonetheless.
________
Correct, as I noted earlier. What that means is that, paradoxically, nonsense random strings score higher than actual code-bearing ones on the Shannon info capacity metric. The bit bucket cannot be filled to the brim in practical cases. The import is that ORDERED patterns, where a unit cell replicates say n times, are most compressible, but cannot carry real messages beyond what may be in the unit. Flat random strings likewise cannot carry info of any general practical use. Real message-carrying strings lie in the middle, and have organisation based on functionality requisites. They are neither simply ordered nor flat random. But then, Wicken and Orgel were talking about such, and defining functionally specific complex information and organisation by direct implication, as an upshot of OOL studies in the 1970’s, as the IOSE discusses here — try the Midori browser BTW as a nice, tight, fast-running secondary. KF
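A quick, hedged illustration of the ordered / English / random contrast, using an off-the-shelf compressor as a crude stand-in for compressibility (the sample strings are my own choices, not anything from the thread):

```python
# Rough illustration: compressed size / original size under a general-purpose
# compressor, as a crude proxy for how much redundancy a string carries.
import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed length divided by original length (smaller = more compressible)."""
    return len(zlib.compress(data, 9)) / len(data)

ordered = b"AB" * 500                      # a unit cell repeated: highly ordered
random_ = os.urandom(1000)                 # flat random bytes: ~incompressible
english = (b"It was the best of times, it was the worst of times, it was the "
           b"age of wisdom, it was the age of foolishness, it was the epoch of "
           b"belief, it was the epoch of incredulity, it was the season of "
           b"Light, it was the season of Darkness, it was the spring of hope, "
           b"it was the winter of despair, we had everything before us, we had "
           b"nothing before us, we were all going direct to Heaven, we were all "
           b"going direct the other way.")

for name, data in [("ordered", ordered), ("english", english), ("random", random_)]:
    print(f"{name:8s} {ratio(data):.2f}")
# Exact numbers vary with sample length and compressor, but the ordering
# ordered << English prose < flat random is robust.
```

Note the random sample here is drawn from the full byte range; a random string over only 26 letters would compress somewhat (toward log2(26)/8 of its size) but still far less than ordered text, so the middle position of real message-carrying text holds either way.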
Mung (387):
You read Dr Dembski’s paper, what do you think? Did he discuss Huffman encoding? What other things in the field have you read? Do the research, figure it out!!
Joe (404):
No one is talking about generating Shakespeare. Compressing is like .jpg files as opposed to .bmp files. The information is all there (for lossless compression anyway) but in a condensed form. Or .zip files. You need the code to ‘uncompress’; the compressed version is not necessarily ‘readable’ on its own.
What type of compression are you talking about?
KF (419):
I’ll check out your IOSE discussion when I’ve got some time, promise. You’ve pointed out why Shannon information is not really the point of trying to find complex, functional, specified information. Which is why Dr Dembski came up with a similar but different definition.
Midori doesn’t run on Macs. I’ll try Chrome for a bit, see if that is better. This forum looks different . . . different type-face for one. And some different formatting. Browsers, you’d think they’d all work the same. If Chrome doesn’t do it I’ll try Opera and then Firefox. I don’t generally like Firefox but Opera is pretty fast.
________
I’ll vouch for Opera. Firefox too. Chrome and derivatives have a problem of the missing menu bar for me, and no the wrench plus stories about saved screen real estate do not hack it, I want those push-buttons where I can reach them pronto. Deal breaker. BTW, I also outright despise the MS Office 2007 Ribbon. KF
KF:
Thank you for the correction. You are obviously right. I was just writing in a hurry!
______
Hear you, and you obviously underestimated bit capacity by about 2/3, but we deal with those who would pounce on anything to make a counter talking point with intent to deride and dismiss. KF
Jerad: Algorithmic compressibility speaks to the possibility of squeezing out redundancy. As I noted, order is highly compressible, and that is one way of looking at laws of necessity such as F = m*a or the like or specify unit cell, repeat n times etc. Truly random sequences basically have to be quoted outright. Functionally organised coded ones will be somewhat compressible but nowhere near as much as simple order. WmAD’s discussion was general. This begins to be a side track from the pivotal issues and the challenge in this thread. KF
F/N: Folks, day 19, and still those crickets are chirping; no offers to submit a 6,000 word essay that warrants the blind watchmaker thesis, molecules to Mozart, per empirically grounded argument. KF
Jerad:
I am- that is what I have been talking about. So please do TRY to follow along.
Toronto:
Joe: “Umm that is not functionality…”
No, not even close.
Just cuz YOU say so?
BWAAAAAAAAAAHAAAAAAAAAAHAAAAAAA
Toronto:
keiths just erects strawman after strawman. Just because you are too clueless to recognize that doesn’t mean anything to us.
Then what makes it an evolutionary algorithm?
Zachriel:
That is not all it takes to be an EA, Zachriel.
What is the problem it is trying to solve?
I see we are back to talking about latching-
hey keiths, if there isn’t any latching then there wouldn’t be any nested hierarchy as a result.
Joe Felsenstein is confused:
Artificial selection towards a goal, Joe. Evolutionism doesn’t have such a mechanism.
Joe (426):
The title of the section in Dr Dembski’s paper (on pages 9 – 12) is “Specifications via Compressibility” so I thought that’s what we were talking about.
Shakespeare’s work was not randomly generated and does not appear random, therefore it’s more compressible than a random text string.
Jerad,
Obviously you have reading comprehension issues, as Dembski says compressibility = description. He makes it quite clear.
As for Dawkins’ “weasel” and latching:
The program was supposed to demonstrate CUMULATIVE selection. And you cannot have cumulative selection if the proper mutations do not latch. Otherwise it would be called back and forth and sometimes cumulative selection.
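Since “latching” keeps coming up, a minimal Weasel-style sketch may make the term concrete. Everything here (target phrase aside, which is Dawkins’ own, the mutation rate and population size are my own assumptions) is illustrative rather than Dawkins’ unpublished original; in this sketch no position is explicitly latched, every character remains free to mutate each generation:

```python
# Minimal Weasel-style cumulative-selection sketch (my own parameter choices).
# No position is explicitly "latched": every character of every copy can mutate;
# correct characters merely tend to persist because the best copy is retained.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 100          # copies per generation (assumed)
MUT_RATE = 0.05         # per-character mutation probability (assumed)

def score(s: str) -> int:
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str) -> str:
    """Each character independently has a small chance of being replaced."""
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    children = [mutate(parent) for _ in range(POP_SIZE)]
    parent = max(children, key=score)   # cumulative step: keep the closest copy

print(f"reached target in {generation} generations")
```

Whether keeping the closest copy each round counts as “cumulative selection without latching” is exactly the point under dispute; the sketch only shows what the two terms would mean in code.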
Being algorithmically compressible means that you can produce it with an algorithm.
Zachriel:
Reference please. And what is the testable hypothesis that accumulations of random mutations didit?
However it is not established that recombination is a blind watchmaker mechanism.
And it is very noticeable you didn’t provide any evidence to support your bald assertion.
“Artificial selection towards a goal, Joe. Evolutionism doesn’t have such a mechanism. “
toronto:
In what way is that a mechanism?
and petrushka’s daily nonsense:
Strange, I never said, thought nor implied such a thing.
Umm natural selection is BLIND, so it doesn’t see anything, meaning nothing can be seen by natural selection.
Joe: And what is the testable hypothesis that accumulations of random mutations didit?
In the past I have explained to you why random wrt fitness is meaningless gibberish because it does not mean that the mutations were not directed by an internal algorithm. Also the mutations allow for fitness- ie successful reproduction, so it would appear to be an example of built-in responses to environmental cues.
So why do you insist on being so obtuse?
Not to be outdone, toronto shares its nonsense-
“Umm natural selection is BLIND, so it doesn’t see anything, meaning nothing can be seen by natural selection.”
toronto:
1- Jonsey still isn’t in any position to say what is and isn’t science as he is still clueless on the subject.
2- What I said has nothing to do with ID
3- natural selection still doesn’t see anything.
Just because you can say so that doesn’t make it so. And natural selection still doesn’t see anything. It is still a RESULT that doesn’t do anything.
OMTWO:
OMTWO I have constantly asked you to support the claims of your position and like the coward you are you have always refused to do so. And instead always tried to push the onus back on me, as cowards always do.
And cowards always throw in false accusations for good measure, just as you have done, again. You are just a pathetic imp and apparently proud of it.
To see if a die is fair, you would weigh and measure it. You would check its balance, its edges and corners, and finally you would roll it to see what type of distribution you got.
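On the “roll it and look at the distribution” step, here is a minimal sketch of how one might actually test the rolls; the counts below are invented purely for illustration:

```python
# Minimal sketch of a goodness-of-fit check on die rolls (counts are invented).
# A fair die predicts equal expected counts; a large chi-square statistic
# (small p-value) is evidence the die is loaded.
from scipy.stats import chisquare

observed = [95, 103, 98, 107, 88, 109]    # hypothetical counts from 600 rolls
stat, p_value = chisquare(observed)       # expected counts default to uniform
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
# Here p comes out well above 0.05, so these particular counts give no reason
# to reject fairness; the physical checks (weight, balance, edges) are separate.
```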
Mung @412:
Thanks, Mung, for your kind words. Perhaps I can provide a couple of additional thoughts.
I am not being facetious, and I think in fact it is a decent analogy. A Shannon calculation does not tell us anything about the substance of the underlying information. It just tells us how much underlying information could be in the string. In the same way, if I count pages in a book (or the words in the book if you prefer), I have ascertained how much information could in principle be contained in the book. And that page count or word count is itself a piece of information, analogous to your “real” Shannon information. I’m using a physical example of the same principle so we can see clearly what is going on. People who are enamored with GA’s tend to get off in the weeds when they talk about strings and bits and fancy math, so I am using a simple physical example to highlight the issue.
Well, I haven’t looked at Lizzie’s program, so perhaps I should just keep quiet, but I’ll charge ahead anyway. 🙂
There are at least three ways we can program a GA to easily generate more Shannon “information” through random changes. First, we can lengthen the string (the old accidental-extra-copy-of-a-gene kind of idea). Second, even if we keep the string length the same, we can introduce a previously unavailable character into the string. Third, we can change the relative distribution of the characters (i.e., change the probability of occurrence).
I agree with you that if we: (i) keep the string length the same, (ii) establish beforehand a fixed, exclusive character set that cannot be changed, and (iii) establish beforehand that each character has an identical probability of occurrence, then, yes, the Shannon entropy calculation should be identical, regardless of whether we shuffle the characters around or not.
I have no idea what Lizzie and company are claiming to have done. My suspicion, however, is that they have incorporated into their GA one of the three things I mentioned above. That seems to be the only possible source of the confusion on the calculation. Otherwise, if they kept all the variables as you have proposed them (same length, identical character set, pre-set probability), then it should just be a question of math and there should quickly be agreement on the calculation. That there is an ongoing back and forth and disagreement suggests to me that they have (perhaps inadvertently) slipped in one of the three changes I mentioned.
Anyway, I think you and I are on the same page w/r/t the calculation.
—–
Again, at a higher level though, I think the whole Shannon discussion in the present context of GA’s as an avenue for demonstrating evolution’s ability to generate new information is largely an exercise in irrelevance. This is because even if we have a fixed character set (say, ATCG), and even if we assume equal probability of occurrence, we still know for a fact that the string can lengthen or shorten in biology. So however we cut it, we can get more or less Shannon “information” through random changes to the string. Big deal. All we’ve done is increase our pipeline, our available resources, our number of available pages or words. It tells us nothing about the underlying information and is singularly unhelpful in determining whether we have CSI.
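To make concrete what the Shannon calculation does and does not track, here is a minimal sketch; the strings and character set are my own toy examples. Shuffling a fixed string leaves its empirical Shannon entropy unchanged, while the three changes listed above (lengthening, a new character, a skewed distribution) each move the numbers:

```python
# Rough sketch (my own toy strings): empirical Shannon entropy of a string,
# in bits per character and in total bits.
from collections import Counter
from math import log2
import random

def entropy_per_char(s: str) -> float:
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

base = "ATCG" * 25                                   # 100 chars, uniform over A,T,C,G
shuffled = "".join(random.sample(base, len(base)))   # same characters, reordered
longer = base + "ATCG" * 25                          # way 1: lengthen the string
new_char = base[:-1] + "N"                           # way 2: introduce a new character
skewed = "A" * 70 + "TCG" * 10                       # way 3: change the frequencies

for name, s in [("base", base), ("shuffled", shuffled), ("longer", longer),
                ("new_char", new_char), ("skewed", skewed)]:
    h = entropy_per_char(s)
    print(f"{name:9s} {h:.3f} bits/char, {h * len(s):7.1f} bits total")
# base and shuffled give identical numbers; the other three differ -- and none
# of the numbers says anything about whether the string is functional.
```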
toronto:
Joe (435):
Well, I’ve found a few different definitions:
From http://kwelos.tripod.com/algor.....ession.htm
“If a computer program or algorithm is simpler than the system it describes, or the data set that it generates, then the system or data set is said to be ‘algorithmically compressible’.”
So in this definition it seems like algorithmic compressibility could refer to the system or the data and it could refer to the data being generated.
But, as I said, there are other definitions:
From Theories of Everything: The Quest for Ultimate Explanation, p. 14-15 by Barrow:
“The goal of science is to make sense of the diversity of Nature. It is not based upon observation alone. It employs observation to gather information about the world and to test predictions about how the world will react to new circumstances, but in between these two procedures lies the heart of the scientific process. This is nothing more than the transformation of lists of observational data into abbreviated form by the recognition of patterns. The recognition of such a pattern allows the information content of the observed sequence of events to be replaced by a shorthand formula which possesses the same, or almost the same, information content. … On this view, we recognize science to be the search for algorithmic compressions. … Without the development of algorithmic compressions of data all science would be replaced by mindless stamp collection – the indiscriminate accumulation of every available fact.”
From A Modest Proposal (by a Somewhat Modest Engineer)
“A pattern’s algorithmic compressibility can be an objective measurement and all we have to do is make sure we are comparing measurements from the same programming language”
Which seems to imply that algorithmic compressibility is a measurement or number.
The paper Empirical Data Sets are Algorithmically Compressible: Reply to McAllister by Twardy and Gardner definitely uses algorithmically compressible to mean compressible by an algorithm.
The paper is available as a pdf and discusses several real world data sets including DNA and might be worth some time.
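One caveat worth making explicit about calling it “an objective measurement”: the number you get depends on the compressor (or, in the Kolmogorov framing, the reference machine), which is presumably why the “same programming language” proviso is there. A small sketch, with my own sample data:

```python
# Sketch: the same data gets different "compressibility" numbers from different
# compressors, so comparisons only mean something with the compressor held fixed.
import bz2
import lzma
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 50

for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    print(f"{name:4s} {len(compress(data)) / len(data):.3f}")
# The three ratios differ, but each compressor ranks more-redundant data as
# more compressible, which is the comparison that matters.
```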
Context Jerad- I have explained my position. Sure you can ignore that and prattle on regardless. But I don’t care.
To see if a die is fair, you would weigh and measure it. You would check its balance, its edges and corners, and finally you would roll it to see what type of distribution you got.
OMTWO spews:
The manufacturing process rules out any internal algorithm, duh.
No, you are obviously a loser with nothing to say. Not only that, you don’t seem to understand anything beyond misrepresentation and strawmen.
Mike Elzinga chimes in with more substance-free drivel:
I will never be as loathsome as you are, Mikey. And, thankfully, I will never be as dishonest and despicable as you either.
Now go melt some water, loser.
_______
Joe, kindly restrain yourself on tone. You are liable to fall off the wagon if you allow yourself to fall into intemperate language and personalities. KF
The manufacturing process rules out any internal algorithm, duh.
OMTWO:
The way it is made and what it is made up of. That is the manufacturing process, duh.
Well there are ways we can tell the properties of any given die. Are you that ignorant of technology? Really?
Physics.
Also the mutations allow for fitness- ie successful reproduction, so it would appear to be an example of built-in responses to environmental cues.
Zachriel:
We have already been over this Zachriel. Apparently you chose to be willfully ignorant. And that is not a good place to argue from.
Decades ago the Lederbergs conducted an experiment using bacteria.
This experiment demonstrated that the resistance to antibiotics was already in the population when the antibiotics were introduced (put on the plate).
IOW the resistance did not come in response to the exposure.
This was supposed to demonstrate that mutations are random with respect to fitness.
However that “conclusion” was reached before we knew that bacter