Uncommon Descent: Serving The Intelligent Design Community

Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0


There are two versions of the metric for Bill Dembski's CSI. One can be traced to his book No Free Lunch, published in 2002. Let us call that version "CSI v1.0".

Then in 2005 Bill published Specification: The Pattern That Signifies Intelligence, where he includes the identifier "v1.22". Perhaps it would be better to call the concepts in that paper CSI v2.0 since, like Windows 8, it has some radical differences from its predecessor and comes up with different results. Some end users of the concept of CSI prefer CSI v1.0 over v2.0.

It was very easy to estimate CSI numbers in version 1.0 and then argue later over whether the subjective patterns used to deduce CSI were independent and not postdictive. Trying to calculate CSI in v2.0 is cumbersome, and I don't even try anymore. As a matter of practicality, when discussing the origin of life or biological evolution, ID-sympathetic arguments are framed in terms of improbability, not CSI v2.0. In contrast, calculating CSI v1.0 is a transparent transformation from improbability: simply take the negative logarithm of the probability.

I = -log2(P)

In that respect, I think MathGrrl (whose real identity he revealed here) has scored a point with respect to questioning the ability to calculate CSI v2.0, especially when the calculation would have been a piece of cake in CSI v1.0.

For example, take 500 coins, and suppose they are all heads. The CSI v1.0 score is 500 bits. The calculation is transparent and easy, and accords with how we calculate improbability. Try doing that with CSI v2.0 and justifying the calculation.
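To make the v1.0 arithmetic concrete, here is a minimal sketch in Python; the helper name csi_v1_bits is my own label, not Dembski's notation:

```python
import math

def csi_v1_bits(p_chance):
    """CSI v1.0 score: the negative log2 of the probability under the chance hypothesis."""
    return -math.log2(p_chance)

# 500 fair coins, all heads: P = (1/2)**500
print(csi_v1_bits(0.5 ** 500))  # -> 500.0 bits
```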

Similarly, with pre-specifications (specifications already known to humans, like the Champernowne sequences), if we found 500 coins in sequence that matched a Champernowne sequence, we could argue the CSI score is 500 bits as well. But try doing that calculation in CSI v2.0. For more complex situations, one might get different answers depending on whom you are talking to, because CSI v2.0 depends on the UPB and on things like the number of possible primitive subjective concepts in a person's mind.
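For readers unfamiliar with the sequence, here is one way to generate the binary Champernowne sequence (the binary numerals 1, 10, 11, 100, ... concatenated); this is just an illustrative sketch of the prespecification:

```python
from itertools import count, islice

def champernowne_bits():
    """Binary Champernowne sequence: the binary numerals of 1, 2, 3, ... concatenated."""
    for n in count(1):
        for digit in format(n, "b"):
            yield int(digit)

target = list(islice(champernowne_bits(), 500))  # first 500 bits: 1, 1, 0, 1, 1, ...
# A run of 500 coins matching this prespecified target has chance probability 2**-500,
# i.e. -log2(2**-500) = 500 bits of CSI v1.0.
```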

The motivation for CSI v2.0 was to account for the possibility of slapping a pattern on after the fact and calling something "designed". v2.0 was crafted to handle the possibility that someone might see a sequence of physical objects (like coins) and argue that the patterns in evidence were designed because he sees in the coins some pattern familiar to him but to no one else. The problem is that everyone has different life experiences, and each will project his own subjective view of what constitutes a pattern. v2.0 tried to use some mathematics to create a threshold whereby one could infer, even if the recognized pattern was subjective and unique to the observer of a design, that chance would not be a likely explanation for the coincidence.

For example, if we saw a stream of bits which someone claims was generated by coin flips, but the bit stream corresponds to the Champernowne sequence, some will recognize the stream as designed and others will not. How then, given the subjective perceptions that each observer has, can the problem be resolved? There are methods suggested in v2.0 which in and of themselves would not be inherently objectionable, but then v2.0 tries to quantify how likely it is that the subjective perception arose out of chance, and it convolves this calculation with the probability of the objects emerging by chance. Hence we mix the probability of an observer concocting a pattern in his head by chance with the probability that an event or object happens by chance, and after some gyrations out pops a CSI v2.0 score. v1.0 does not involve such heavy calculations regarding the random chance that an observer formulates a pattern in his head, and thus is more tractable. So why the move from v1.0 to v2.0? The v1.0 approach has limitations which v2.0 does not. However, I recommend that when v1.0 is available, use v1.0!

The question of postdiction is an important one, but if I may offer an opinion: many designs in biology don't require the exhaustive rigor attempted in v2.0 to determine whether our design inferences are postdictive (the result of our imagination) or whether the designed artifacts themselves are inherently evidence against a chance hypothesis. This can be done using simpler mathematical arguments.

For example, if we saw 500 fair coins all heads, do we actually have to consider human subjectivity when looking at the pattern and concluding it is designed? No. Why not? We can make an alternative mathematical argument: coins that are all heads are sufficiently inconsistent with the binomial distribution for randomly tossed coins that we can reject the chance hypothesis. Since the physics of fair coins rules out physics as the cause of the configuration, we can then infer design. There is no need to delve into the question of subjective human specification to make the design inference in this case. CSI v2.0 is not needed, and CSI v1.0, which says we have 500 bits of CSI, is sufficient.
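A back-of-the-envelope version of that rejection argument, assuming the fair-coin binomial model just described (my own sketch, not a formal hypothesis test):

```python
import math

n = 500                          # number of fair coins
mean = 0.5 * n                   # expected heads under the chance (binomial) hypothesis
sd = math.sqrt(n * 0.25)         # binomial standard deviation: sqrt(n * p * (1 - p))
z = (n - mean) / sd              # how far "all 500 heads" sits from the mean
print(f"{z:.1f} standard deviations")  # -> about 22.4
```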

Where this method (v1.0 plus pure statistics) fails is in recognizing design in a sequence of coin flips that follows something like the Champernowne sequence. Here, how likely it is for humans to make the Champernowne sequence special in their minds becomes a serious question, and that probability is difficult to calculate. I suppose that is what motivated Jason Rosenhouse to argue that the sort of specifications used by ID proponents aren't useful for biology. But that is not completely true if the specifications used by ID proponents can be formulated without subjectivity (as I did in the example with the coins) 🙂

The downside of the alternative approach (using CSI v1.0 and pure statistics) is that it does not admit otherwise legitimate human subjective constructs (like the notion of a motor) into design arguments. Some, like Michael Shermer or my friend Allen MacNeill, might argue that when we say something looks like a motor or a communication system or a computer, we are merely projecting our notions of design, and that the perception of design owes more to our projection than to any inherent design. But the alternative approach I suggest is immune to this objection, even though it is far more limited in scope.

Of course I believe something is designed if it looks like a motor (the flagellum), a telescope (the eye), a microphone (the ear), a speaker (some species of bird can imitate an incredible range of sounds), a sonar system (bat and whale sonar), an electric field sensor (sharks), or a magnetic field navigation system (monarch butterflies). The alternative method I suggest will not detect design in these objects quite so easily, since pure statistics are hard pressed to describe the improbability of such features in biology, even though it is so apparent these features are designed. CSI v2.0 was an ambitious attempt to cover these cases, but it came with substantial computational challenges in arriving at information estimates. I leave it to others to calculate CSI v2.0 for these cases.

Here is an example of using v1.0 in biology, regarding homochirality. Amino acids can be left-handed or right-handed. Physics and chemistry dictate that left-handed and right-handed amino acids arise mostly (not always) in equal amounts unless there is a specialized process (like living cells) that creates them. Stanley Miller's amino acid soup experiments created racemic mixtures (a mix of right- and left-handed amino acids), in contrast to the homochiral variety (only left-handed) we find in biology.

Worse for the proponents of a mindless origin of life, even homochiral amino acids will racemize spontaneously over time (some half-lives are on the order of hundreds of years), and they will deaminate. Further, when Sidney Fox tried to polymerize homochiral amino acids into protoproteins, they racemized due to the extreme heat, many failed to form chains at all, and the chains he did create had few if any alpha-peptide bonds. And in the unlikely event the amino acids do polymerize in a soup, they can undergo hydrolysis. These considerations are consistent with the familiar observation that when something is dead, it tends to remain dead and moves farther away from any chance of resuscitation over time.

I could go on and on, but the point is that we can provisionally say the binomial distribution I used for coins also applies to homochirality in living creatures, and hence we can make the design inference and assert a biopolymer has at least -log2(1/2^N) = N bits of CSI v1.0 based on N stereoisomer residues. One might try to calculate CSI v2.0 for this case, but, being lazy, I will stick to the CSI v1.0 calculation. Easier is sometimes better.
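The corresponding v1.0 arithmetic for a homochiral chain, under the post's 50/50 L/D chance assumption (a sketch; the 150-residue length is just an illustrative number):

```python
import math

def homochirality_csi_bits(n_residues):
    """CSI v1.0 for an all-left-handed chain of n chiral residues, assuming
    L and D forms are equally likely under the chance hypothesis."""
    p_chance = 0.5 ** n_residues      # probability of all-L by chance
    return -math.log2(p_chance)       # equals n_residues exactly

print(homochirality_csi_bits(150))    # e.g. a 150-residue protein -> 150.0 bits
```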

So how can the alternative approach (CSI v1.0 and pure statistics) detect design in something like the flagellum or the DNA encoding and decoding system? It cannot do so as comprehensively as CSI v2.0, but v1.0 can argue for design in the components. As I argued qualitatively in the article Coordinated Complexity – the key to refuting postdiction and single target objections, one can formulate observer-independent specifications (such as I did with the 500 coins being all heads) by appeal to pure statistics. I gave the example of how the FBI convicted cheaters who used false shuffles, even though no formal specifications for design were asserted. They merely had to use common sense (which can be described mathematically as cross- or auto-correlation) to detect the cheating.

Here is what I wrote:

The opponents of ID argue something along these lines: "take a deck of cards and randomly shuffle it; the probability of any given sequence occurring is 1 out of 52 factorial, or about 1 in 8×10^67. Improbable things happen all the time; it doesn't imply intelligent design."

In fact, I found one such Darwinist screed here:

Creationists and “Intelligent Design” theorists claim that the odds of life having evolved as it has on earth is so great that it could not possibly be random. Yes, the odds are astronomical, but only if you were trying to PREDICT IN ADVANCE how life would evolve.

http://answers.yahoo.com/question/index?qid=20071207060800AAqO3j2

Ah, but what if cards dealt from one random shuffle are repeated by another shuffle? Would you suspect Intelligent Design? A case involving this is reported on the FBI website: House of Cards

In this case, a team of cheaters bribed a casino dealer to deal cards and then reshuffle them in the same order that they were previously dealt out (no easy shuffling feat!). They would arrive at the casino, play the cards the dealer dealt, and secretly record the sequence of cards dealt out. When the dealer re-shuffled the cards and dealt them out in the exact same sequence as the previous shuffle, the team of cheaters would be able to play knowing what cards they would be dealt, giving them a substantial advantage. Not an easy scam to pull off, but they got away with it for a long time.

The evidence of cheating was confirmed by videotape surveillance because the first random shuffle provided a specification to detect intelligent design of the next shuffle. The next shuffle was intelligently designed to preserve the order of the prior shuffle.
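A toy sketch of that common-sense correlation test (my illustration, not the FBI's actual method): count position-by-position matches between two deals. Two independent shuffles of 52 cards agree at roughly one position on average, so near-total agreement is a give-away.

```python
import random

def positional_matches(deal_a, deal_b):
    """Count positions where two dealt sequences agree (a lag-0 cross-correlation)."""
    return sum(a == b for a, b in zip(deal_a, deal_b))

deck = list(range(52))
random.shuffle(deck)

replayed = deck[:]                            # a "re-shuffle" that secretly preserves order
print(positional_matches(deck, replayed))     # 52 of 52 match: not plausibly chance

independent = deck[:]
random.shuffle(independent)
print(positional_matches(deck, independent))  # about 1 match on average
```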

Biology is rich with self-specifying systems like the auto-correlatable sequence of cards in the example above. The simplest example is life's ability to make copies of itself through a process akin to Quine computing. Physics and chemistry make Quine systems possible, but simultaneously improbable. Computers, as a matter of principle, cannot exist if they have no degrees of freedom that permit high improbability in some of their constituent systems (like computer memory banks).

We can see that the correlation between a parent organism and its offspring is not the result of chance, and thus we can reject the chance hypothesis for that correlation. One might argue that though the offspring (the copy) is not the product of chance, the process of copying is the product of a mindless copy machine. True, but we can then further estimate the probability of randomly implementing the particular Quine computing algorithms that make it possible for life to act like a computerized copy machine. The act of a system making copies is not in and of itself spectacular (salt crystals do that), but the act of making improbable copies via an improbable copying machine? That is what is spectacular.
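Since "Quine computing" may be unfamiliar: a quine is a program whose output is its own source code, the software analogue of the self-copying described above. A standard minimal Python example:

```python
# The two lines below form a classic Python quine: running them prints those
# two lines back exactly, i.e. the program outputs its own source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```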

I further pointed out that biology is rich with systems that can be likened to login/password or lock-and-key systems. That is, the architecture of the system is such that the components are constrained to obey a certain pattern or else the system will fail. In that sense, the targets for individual components can be shown to be specified without having to calculate the chances that the observer is randomly projecting subjective patterns onto the presumably designed object.

[Image: lock and key]

That is to say, even though there are infinitely many ways to make lock-and-key combinations, that does not imply that the emergence of a lock-and-key system is probable! Unfortunately, Darwinists will implicitly say, "there are an infinite number of ways to make life, therefore we can't use probability arguments", but they fail to see the error in their reasoning, as demonstrated with the lock-and-key analogy.

This simplified methodology using v1.0, though not capable of saying “the flagellum is a motor and therefore is designed”, is capable of asserting “individual components (like the flagellum assembly instructions) are improbable hence the flagellum is designed.”

But I will admit that invoking the login/password or lock-and-key metaphor is a step outside pure statistics, and making the design argument for those metaphors more rigorous is a project for future study.

Acknowledgments:
Mathgrrl: though we're opponents in this debate, he strikes me as a decent guy.

NOTES:
The fact that life makes copies motivated Nobel Laureate Eugene Wigner to hypothesize a biotonic law in physics. That hypothesis was ultimately refuted. Life does not copy via a biotonic law but through computation (and the emergence of computation is not attributable to physical law in principle, just as software cannot be explained by hardware alone).

Comments
@Joe:
LoL! I was mocking my attackers and now keiths sez I am confused!
Who is keith?
I don’t have to agree with it.
Why not? It's just a definition. Make up your own definition and call it "Joe's cardinality".
Geez students get taught evolutionism in school, do they have to agree with it?
Evolutionism entails more than just definitions.

JWTruthInLove, May 20, 2013, 10:51 AM PDT
LoL! I was mocking my attackers and now keiths sez I am confused! But wait- "rationals are an infinite set. And irrationals are an infinite set. Bijection says they have the same cardinality" was mocking you guys, keiths- you and mr infinity = infinity.

Joe, May 20, 2013, 10:48 AM PDT
So there are infinite sets that do not have a one-to-one correspondence, ie not all infinite sets have the same cardinality. Sal, I thought someone refuted Cantor's diagonal argument... And no JW, I don't have an issue with abstract thinking. Just because we cannot comprehend infinity doesn't mean I have issues with abstract thinking. And I understand what wiki says and what the rule is. I don't have to agree with it. Geez, students get taught evolutionism in school, do they have to agree with it? Or are people allowed to challenge something that may be not as established as thought?

Joe, May 20, 2013, 10:40 AM PDT
@Joe:
It seems to me we do that just because no one wants to actually think about it because we cannot really comprehend infinity
You have a problem with abstract thinking, which is necessary in math (or computer science).

JWTruthInLove, May 20, 2013, 10:28 AM PDT
I don’t know of a way to tell.
Try Rational Numbers Countable, Home School Math, which helps one understand Cantor's diagonal argument, which shows the reals are uncountable. If the reals are uncountable and the reals are composed of rationals (countable) and irrationals, then it stands to reason the irrationals are uncountable; hence the irrationals have a higher cardinality than the rationals.

scordova, May 20, 2013, 10:27 AM PDT
@Joe:
But wait- rationals are an infinite set. And irrationals are an infinite set. Bijection says they have the same cardinality
Please provide a citation for that claim!
My issue is, as I stated, with supersets and subsets we use one alignment. And then with infinite sets we use another alignment.
Again, wiki helps:
Two sets A and B have the same cardinality if there exists a bijection (…) from A to B. (Wiki)
It doesn't matter whether the set is finite or infinite, or what your favorite alignment is. If you have a problem with the definition, make up your own definition.

JWTruthInLove, May 20, 2013, 10:26 AM PDT
JWTruthInLove, My issue is, as I stated, with supersets and subsets we use one alignment. And then with infinite sets we use another alignment. It seems to me we do that just because no one wants to actually think about it because we cannot really comprehend infinity. And franklin- YOU are pathetic. Your inability to think outside of your sock-puppet is duly noted.

Joe, May 20, 2013, 10:17 AM PDT
Sal, My apologies and thank you for your input wrt sets and CSI.

Joe, May 20, 2013, 10:14 AM PDT
Sal:
The fact the rationals and irrationals don’t have the same cardinality is therefore important.
But wait- rationals are an infinite set. And irrationals are an infinite set. Bijection says they have the same cardinality- and just so you know, that is the point I am arguing. I don't think all infinite sets are the same size. I don't believe Cantor did either. But I don't know of a way to tell. "Infinite is infinite, dude"

Joe, May 20, 2013, 10:13 AM PDT
By the way, have at it guys here in this thread. It looks like we've discussed away the original post. So if you want to conduct your off-topic discussions here, go ahead; better here than in the other thread. Enjoy!

scordova, May 20, 2013, 10:07 AM PDT
Yes, the bijection and the arbitrary rule. That's the problem. However it is minor because obviously it has no impact on anything and is a matter of debate- measuring infinite sets and the number of power sets. It seems like bijection is a tool for the lazy to not have to actually do the work. "Oh infinite sets- same size- bijection" If that ever has some practical use I will change my opinion of bijection's use on infinite sets.

Joe, May 20, 2013, 10:07 AM PDT
Again, what practical application is there in saying that two sets, that cannot be measured, are the same size?
If you mean by size the cardinality, it does matter. It helps you solve problems in calculus, or at least determine if you can solve a problem using certain methods.

Let f(x) = 1 for all reals on the interval [0,1]; the Riemann integral of f(x) over [0,1] is 1. Let g(x) = 1 for all rationals and 0 for all irrationals; there is no Riemann integral for g(x). Even though there are an infinite number of rationals and an infinite number of irrationals between 0 and 1, you can't do a 1-to-1 mapping, so you won't be able to say what the Riemann integral of g(x) is; in other words, G(x) = ???? The fact that the rationals and irrationals don't have the same cardinality is therefore important.

Many times when doing real-world applied math, it is helpful when we can take something with a finite number of points (like, say, the gas molecules in a box) and come up with an infinite idealized fluid model that isn't exact but has easier math. It's important to know when we can make such leaps from "finite, exact, but computationally impossible calculations" to "infinitesimal approximation but computationally possible calculations". Such considerations as the above then become very important in determining if our approximate methods will give us usable answers.

scordova, May 20, 2013, 10:06 AM PDT
joe: Why do you keep ignoring that?

I keep ignoring it because it is so wrong it is pathetic, and it is also not my job to educate you, although many others have already tried to disabuse you of your ignorance on the subject. But don't let your ignorance stop you; continue to carry on, which is providing much humor, albeit at your expense. See #74: another individual that understands set theory! Pssst... JWTIL, the problem is Joe and his lack of understanding. Sorta like a mega case of the 'arrogance of ignorance' that characterizes his online persona.

franklin, May 20, 2013, 10:03 AM PDT
@Joe:
In what way can these two sets be the same size?
In that way:
Two sets A and B have the same cardinality if there exists a bijection (...) from A to B. (Wiki)
What's the problem?

JWTruthInLove, May 20, 2013, 09:52 AM PDT
franklin, Is {1,2,3,4,...} a proper subset of {0,1,2,3,...} because its 1 matches/aligns with the superset's 0 or its 1? And again, I did not ask about the practical applications for set theory. So why are you blathering on as if I did? It's as if you think your belligerence is really going to hurt me. It doesn't. I will just keep correcting you as you spew. Also: Take two sets of whole numbers- A and B. Set A contains every single number set B has plus one number B does not have. In what way can these two sets be the same size? And what practical application does it have to say they are the same size? Why do you keep ignoring that?

Joe, May 20, 2013, 09:30 AM PDT
Joe, you may have answered those people, but there is one problem: your answers are incredibly and obviously wrong to anyone who understands set theory. That you cannot grasp the concepts in set theory is your problem. Your alignments make no sense at all and underscore your inability to comprehend the subject matter. Everyone who understands set theory can ignore your incredibly wrong answers, alignments, and assertions. Try educating yourself on set theory before pontificating on it. Your inadequacies in grasping and understanding the subject matter become immediately obvious when you go off half-cocked in your delusional posts where you think that you actually understand set theory. Go visit the Stanford site if you are really interested in understanding the practical applications of set theory. Your ignorance does nothing to refute anything in set theory, but it is kinda funny observing your antics. You should ask yourself why everyone else understands set theory and you don't. Is everyone else wrong, or does the problem lie with you? The answer is obvious to everyone! If you need help maybe Dembski or KF can give you a hand with the material.

franklin, May 20, 2013, 09:07 AM PDT
franklin, Infinity cannot be measured. Also I have answered those people, who can't even address my explanations- just as you cannot. The alignment is arbitrary for the reasons provided. Now you can ignore my explanations, but your ignorance is not a refutation. Again, what practical application is there in saying that two sets, that cannot be measured, are the same size?

Joe, May 20, 2013, 08:22 AM PDT
joe: Yes, it is. That is the only issue with infinite sets.

People have answered you in many different ways and with different examples. That you aren't capable of grasping the concepts and math of set theory is no one's problem but your own. Do you really think there are two different values, i.e., 'infinity' and 'infinity + 1'? And the alignment is anything but arbitrary; it is a direct one-to-one mapping... I know it must be tough for you to grasp these mathematical concepts, but perhaps if you applied yourself, or even asked oleg to explain it to you yet again, you might be able to pick up what most everyone else already understands.

franklin, May 20, 2013, 07:59 AM PDT
franklin:
It isn’t that you have a disagreement with set theory ...
Yes, it is. That is the only issue with infinite sets. And I have asked, and no one has answered, what practical application this arbitrary alignment has. IOW your Stanford quote-mine is meaningless as it does not deal with this specific case.

Joe, May 20, 2013, 03:35 AM PDT
You are welcome, Chance. Talking to you was a lot of fun, and I wish more of my threads were like this one.

scordova, May 19, 2013, 11:58 PM PDT
Eric, thanks for your additional comments. I think the conversation has gone as far as it will. I appreciate your efforts, and Sal's, to engage my points directly.

Chance Ratcliff, May 19, 2013, 11:25 PM PDT
Chance @50:
There might be some relationship between entropy and design.
Absolutely there is. It is relevant to the calculation of complexity. @62:
The first string would have less entropy than the second; and if it didn’t, it couldn’t contain a discernible meaning. Do you think this is mistaken?
While it is true that any grammatical sentence will be less random, on average, than a purely random string of letters, that does not necessarily translate to meaning. Back to my example: tobeornottobethatisthequestion and brnottstinoisqotebeeootthuathe have the exact same amount of Shannon "information." The same will hold true if I incorporate into my calculation the relative frequency of letters in, for example, the English alphabet. We still get the same result on a letter-by-letter calculation.

Now we could step up a level and calculate whole words, but in that case we would still end up with a situation where tobeornottobethatisthequestion and bebetotoquestionthenotthatoris have the exact same Shannon information. It won't be until we have actually stepped up to the level of a grammatical sentence (or at least coherent phrases) as our minimal search parameter that we actually start getting away from the purely statistical Shannon calculation into being able to search for actual meaning/function/specification. Yes, we could then search for whole meaningful sentences (or phrases), but that would mean we have really just snuck the meaning/specification in the back door. At that point we have defined a particular specification and we are just searching to see if we can find it.

Beyond a general observation that meaningful sequences are typically not characterized by pure randomness, I don't think Shannon calculations are able, by definition, to distinguish between functional, coherent, meaningful sequences and complete gibberish.

Eric Anderson, May 19, 2013, 11:04 PM PDT
I think I'll just leave it at that. Thanks for your help. Eric too.

Chance Ratcliff, May 19, 2013, 11:01 PM PDT
that any string which contains a discernible message cannot have maximum entropy, like a random sequence would have.
Careful here. A random sequence has close to maximum ALGORITHMIC entropy; that is not the same as SHANNON entropy. If I have a 10-meg zip file that expands to 500 megs and then can be recompressed down to 5 megs, how much information does the zip file really have? The answer is in the eye of the beholder! You could make a case for either number being the Shannon entropy, and practically speaking most engineers don't care as long as they get paid to make compression and decompression algorithms.

For example, 500,000 coins all heads has 500,000 bits of Shannon entropy, but it may have only a few bits of algorithmic entropy (because it takes only a few bits to represent the string relative to the actual string size). Zipped-up compressed files and JPEG files are packed with nearly maximum algorithmic entropy. They are far more meaningful than a string of 500,000 zeros (which has low ALGORITHMIC entropy). Hence there is the case where the high algorithmic entropy file (which looks like disordered white noise, but is not) has far more meaning than a low algorithmic entropy file (which is all zeros).

scordova, May 19, 2013, 09:26 PM PDT
For the onlookers, here is Joe's conceptual understanding of set theory... see if you can make any sense out of it. http://intelligentreasoning.blogspot.com/ A Tale of Two Sets - Take two sets of whole numbers- A and B. Set A contains every single number set B has plus one number B does not have. Now take an arbitrary measuring system and voila, both sets are the same size!

franklin, May 19, 2013, 08:07 PM PDT
Sal @60, I don't disagree with anything there. I'm suggesting that any string which contains a discernible message cannot have maximum entropy, like a random sequence would have. So I'm not suggesting uncovering specific meanings, but rather ruling out meaning in cases where the symbols approach total randomness, which I'm attempting to correlate with high uncertainty. Letter Frequencies

If meaning is present in a string, then some sort of signal will be discernible: not the specific meaning, but the increased likelihood of the presence of any meaning. Of course, the longer the string the more entropy. But take for instance a couple of 1000-letter sequences. The first is from the Declaration of Independence, and the second is totally random at 4,755 bits and maximum entropy. Analysis of the first string would produce letter frequencies approaching those linked to above. This would reduce entropy. Further analysis, like frequency of letter pairs or letter triplets, would likely reduce entropy even more. The first string would have less entropy than the second; and if it didn't, it couldn't contain a discernible meaning. Do you think this is mistaken?

Chance Ratcliff, May 19, 2013, 07:49 PM PDT
joe: And franklin, only a moron would say that I am ignorant of set theory just because I disagree with an arbitrary rule. And here you are. Grow up loser…

It isn't that you have a disagreement with set theory; it is that you are clueless about it and don't have the wherewithal to recognize that simple fact. When folks who do understand set theory try to explain where you are mistaken, it flies right over your head. Why don't you run some of your assertions about set theory by the folks at UD and see what they think. As has been pointed out to you, from the Stanford Encyclopedia of Philosophy (remember its importance to nested hierarchies as well):

Set Theory is the mathematical science of the infinite. It studies properties of sets, abstract objects that pervade the whole of modern mathematics. The language of set theory, in its simplicity, is sufficiently universal to formalize all mathematical concepts and thus set theory, along with Predicate Calculus, constitutes the true Foundations of Mathematics. As a mathematical theory, Set Theory possesses a rich internal structure, and its methods serve as a powerful tool for applications in many other fields of Mathematics. Set Theory, with its emphasis on consistency and independence proofs, provides a gauge for measuring the consistency strength of various mathematical statements.

And this does seem quite appropriate for you: "I recently had a similar thought, that the quality of conduct is often proportional to the strength of one's argument."

franklin, May 19, 2013, 07:16 PM PDT
Is it possible in principle for a string to have detectable specification when uncertainty is maximal? If not, then a string can only have discernible meaning when its entropy can be reduced.
Chance Ratcliff, The longer the string, the higher the Shannon entropy needed to describe it. Meaning is discernible if: 1. the designer of the string is working to ensure you understand the meaning, or 2. you had some luck. Here is a non-string object where we humans hope someone will figure out the meaning: http://en.wikipedia.org/wiki/Voyager_Golden_Record

In the world of formal languages, meaning is never uncovered unless the observer has the capability and tools to discern meaning, and that entails the observer being provided tons of information to decode meaning in a language. Example: the Java language interpreter. For that matter, any computer language interpreter or compiler.

scordova, May 19, 2013, 07:14 PM PDT
And franklin, only a moron would say that I am ignorant of set theory just because I disagree with an arbitrary rule. And here you are. Grow up loser...

Joe, May 19, 2013, 06:55 PM PDT
Footnote 3 for #46: Is it possible in principle for a string to have detectable specification when uncertainty is maximal? If not, then a string can only have discernible meaning when its entropy can be reduced. This is not a sufficient condition for specification, but perhaps a necessary one. Also, I think it needs to be stressed that just because a signal can be detected, it does not follow that the detection also entails the details. Just because there might be a metric which allows us to discern that specificity is more likely to be present in a string, it does not mean that we've described the specification or even quantified it. Consider two piles of strings, sorted based on their entropy. One pile has high entropy, the other a more moderate amount. Which pile will be more likely to contain meaningful phrases? This seems like something that could be explored empirically.

Chance Ratcliff, May 19, 2013, 06:52 PM PDT