Orgel and 500 Coins

In his 1973 book The Origins of Life, Leslie Orgel wrote: “Living organisms are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.” (p. 189)

In my post “On ‘Specified Complexity,’ Orgel and Dembski” I demonstrated that in this passage Orgel was getting at the same concept that Dembski calls “specified complexity.” In a comment on that post, “R0bb” asks:

500 coins, all heads, and therefore a highly ordered pattern.
What would Orgel say — complex or not?

Orgel said that crystals, even though they display highly ordered patterns, lack complexity. Would he also say that the highly ordered pattern of “500 coins; all heads” lacks complexity?

In a complexity analysis, the issue is not whether the patterns are “highly ordered.” The issue is how the patterns came to be highly ordered. If a pattern came to be highly ordered as a result of natural processes (e.g., the lawlike processes that result in crystal formation), it is not complex. If a pattern came to be highly ordered in the very teeth of what we would expect from natural processes (we can be certain that natural chance/law processes did not create the 500 coin pattern), the pattern is complex.

Complexity turns on contingency. The pattern of the crystals in granite is not contingent. Therefore, it is not complex. The “500 coins; all heads” pattern is highly contingent. Therefore, it is complex.

What would Orgel say? We cannot know. We can say that if he viewed the “500 coins; all heads” pattern at a superficial level (it is just an ordered pattern), he might say it lacks complexity, and he would be wrong. If he viewed the “500 coins; all heads” pattern in terms of the extreme level of contingency it displays, he would say the pattern is complex, and he would be right.

About one thing we can be absolutely certain: Orgel would have known without the slightest doubt that the “500 coins; all heads” pattern was far beyond the reach of chance/law forces, and he would therefore have made a design inference.
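How far beyond? A minimal sketch (assuming 500 independent fair coins; the comparison to Dembski's universal probability bound is our own framing) puts a number on it:

```python
from math import log10

# Probability that 500 independent fair coins all land heads by chance.
p = 0.5 ** 500
print(p)               # ~3.05e-151 (still representable as a float)
print(500 * log10(2))  # ~150.5: odds of about 1 in 10^150.5
# For comparison, Dembski's universal probability bound is 1 in 10^150.
```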

Comments
Thank you, BA77. As I thought. How could it be otherwise, given the nature of matter and the concept of non-locality? Axel
Axel, as stated before, although naturalists have postulated some far-fetched scenarios, such as many worlds, to deal with quantum mechanics, none of those scenarios is in itself compatible with the reductive materialism that undergirds neo-Darwinian thought. “[while a number of philosophical ideas] may be logically consistent with present quantum mechanics, …materialism is not.” Eugene Wigner Quantum Physics Debunks Materialism – video playlist https://www.youtube.com/watch?list=PL1mr9ZTZb3TViAqtowpvZy5PZpn-MoSK_&v=4C5pq7W5yRM Why Quantum Theory Does Not Support Materialism By Bruce L Gordon, Ph.D Excerpt: The underlying problem is this: there are correlations in nature that require a causal explanation but for which no physical explanation is in principle possible. Furthermore, the nonlocalizability of field quanta entails that these entities, whatever they are, fail the criterion of material individuality. So, paradoxically and ironically, the most fundamental constituents and relations of the material world cannot, in principle, be understood in terms of material substances. Since there must be some explanation for these things, the correct explanation will have to be one which is non-physical – and this is plainly incompatible with any and all varieties of materialism. http://www.4truth.net/fourtruthpbscience.aspx?pageid=8589952939 bornagain77
Do the materialists have any explanation for non-locality, BA77? It just seems to me to knock materialism on the head, like an angler's 'priest' does to a fish he's caught. Giving it the last rites.... Axel
Thus, as far as empirical science itself is concerned, Neo-Darwinism is falsified in its claim that information is ‘emergent’ from a material basis. Of related interest to ‘non-local’, beyond-space-and-time quantum entanglement ‘holding life together’: in the following paper, Andy C. McIntosh, professor of thermodynamics and combustion theory at the University of Leeds, holds that non-material information is what constrains the cell to be so far out of thermodynamic equilibrium. Moreover, Dr. McIntosh holds that regarding information as independent of energy and matter ‘resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions’.
Information and Thermodynamics in Living Systems - Andy C. McIntosh - May 2013 Excerpt: The third view then that we have proposed in this paper is the top down approach. In this paradigm, the information is non-material and constrains the local thermodynamics to be in a non-equilibrium state of raised free energy. It is the information which is the active ingredient, and the matter and energy are passive to the laws of thermodynamics within the system. As a consequence of this approach, we have developed in this paper some suggested principles of information exchange which have some parallels with the laws of thermodynamics which undergird this approach. (Dr. Andy C. McIntosh is Professor of Thermodynamics and Combustion Theory at the University of Leeds; professor is the highest teaching/research rank in the U.K. university hierarchy.) http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0008
Here is a recent video by Dr. Giem, that gets the main points of Dr. McIntosh’s paper over very well for the lay person:
Biological Information – Information and Thermodynamics in Living Systems 11-22-2014 by Paul Giem (A. McIntosh) – video https://www.youtube.com/watch?v=IR_r6mFdwQM
Of related interest, here is the evidence that quantum information is in fact ‘conserved’:
Quantum no-hiding theorem experimentally confirmed for first time Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html Quantum no-deleting theorem Excerpt: A stronger version of the no-cloning theorem and the no-deleting theorem provide permanence to quantum information. To create a copy one must import the information from some part of the universe and to delete a state one needs to export it to another part of the universe where it will continue to exist. http://en.wikipedia.org/wiki/Quantum_no-deleting_theorem#Consequence
Besides providing direct empirical falsification of neo-Darwinian claims as to the generation of information, the implication of finding ‘non-local’ (beyond space and time) and ‘conserved’ quantum information in molecular biology on a massive scale is fairly, and pleasantly, obvious:
Quantum Entangled Consciousness - Life After Death - Stuart Hameroff - video https://vimeo.com/39982578 Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff - video (notes in description) http://vimeo.com/29895068
Verse and Music:
John 1:1-4 In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. All things were made through Him, and without Him nothing was made that was made. In Him was life, and the life was the light of men. While I’m Waiting - John Waller http://myktis.com/songs/while-im-waiting/
bornagain77
In fact, an entire human can, theoretically, be reduced to quantum information and teleported to another location in the universe:
Quantum Teleportation Of A Human? – video https://vimeo.com/75163272 Will Human Teleportation Ever Be Possible? As experiments in relocating particles advance, will we be able to say, "Beam me up, Scotty" one day soon? By Corey S. Powell | Monday, June 16, 2014 Excerpt: Note a fascinating common thread through all these possibilities. Whether you regard yourself as a pile of atoms, a DNA sequence, a series of sensory inputs or an elaborate computer file, in all of these interpretations you are nothing but a stack of data. According to the principle of unitarity, quantum information is never lost. Put them together, and those two statements lead to a staggering corollary: At the most fundamental level, the laws of physics say you are immortal. http://discovermagazine.com/2014/julyaug/20-the-ups-and-downs-of-teleportation
Thus not only is information not reducible to an energy-matter basis, as is presupposed in the reductive materialism of Darwinism, but in actuality both energy and matter ultimately reduce to an information basis, as is presupposed in Christian Theism (John 1:1-4). Moreover, this ‘spooky action at a distance’, i.e. beyond space and time, quantum entanglement/information, by which energy and matter are reducible to an information basis, is now found in molecular biology on a massive scale. That is, ‘non-local’, beyond-space-and-time quantum entanglement is now found in every DNA and protein molecule.
Quantum entanglement holds together life’s blueprint – 2010 Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours. “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ Quantum Information/Entanglement In DNA - short video https://vimeo.com/92405752 Coherent Intrachain energy migration at room temperature – Elisabetta Collini and Gregory Scholes – University of Toronto – Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. http://www.scimednet.org/quantum-coherence-living-cells-and-protein/ In fact, highly sophisticated quantum computation is directly implicated in protein folding and DNA repair, due to the monster ‘travelling salesman problem’ being dealt with in both.
That quantum entanglement, which conclusively demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints (Bell, Aspect, Leggett, Zeilinger, etc.), should be found in molecular biology on such a massive scale is a direct empirical falsification of Darwinian claims, for how can the ‘non-local’ quantum entanglement ‘effect’ in biology possibly be explained by a material (matter/energy) cause when the quantum entanglement effect has falsified material particles as its own cause in the first place? Appealing to the probability of various ‘random’ configurations of material particles, as Darwinism does, simply will not help, since a timeless/spaceless cause must be supplied, which is beyond the capacity of the material particles themselves to supply!
Looking beyond space and time to cope with quantum theory – 29 October 2012 Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them.” http://www.quantumlah.org/highlight/121029_hidden_influences.php Closing the last Bell-test loophole for photons - Jun 11, 2013 Excerpt: – requiring no assumptions or correction of count rates – that confirmed quantum entanglement to nearly 70 standard deviations. http://phys.org/news/2013-06-bell-test-loophole-photons.html
In other words, to give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments about various ‘special’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply, since the cause is not within the material particles in the first place! And although Naturalists/Materialists have proposed various far-fetched naturalistic scenarios to try to get around the Theistic implications of quantum non-locality, none of those ‘far-fetched’ naturalistic solutions is, in itself, compatible with the reductive materialism that undergirds neo-Darwinian thought.
"[while a number of philosophical ideas] may be logically consistent with present quantum mechanics, ...materialism is not." Eugene Wigner Quantum Physics Debunks Materialism - video playlist https://www.youtube.com/watch?list=PL1mr9ZTZb3TViAqtowpvZy5PZpn-MoSK_&v=4C5pq7W5yRM Why Quantum Theory Does Not Support Materialism By Bruce L Gordon, Ph.D Excerpt: The underlying problem is this: there are correlations in nature that require a causal explanation but for which no physical explanation is in principle possible. Furthermore, the nonlocalizability of field quanta entails that these entities, whatever they are, fail the criterion of material individuality. So, paradoxically and ironically, the most fundamental constituents and relations of the material world cannot, in principle, be understood in terms of material substances. Since there must be some explanation for these things, the correct explanation will have to be one which is non-physical – and this is plainly incompatible with any and all varieties of materialism. http://www.4truth.net/fourtruthpbscience.aspx?pageid=8589952939
bornagain77
keith s, not that I hold much hope you will acknowledge it, but there is an empirical falsification of the materialistic, neo-Darwinian claim that information is emergent from a material basis. It is a falsification of neo-Darwinism that does not rely on probabilistic calculations but instead relies on observational evidence. Contrary to materialistic thought, information is now shown to be its own independent entity, separate from matter and energy. In fact, information is now shown to be physically measurable:
Maxwell’s demon demonstration turns information into energy – November 2010 Excerpt: Scientists in Japan are the first to have succeeded in converting information into free energy in an experiment that verifies the “Maxwell demon” thought experiment devised in 1867. In Maxwell’s thought experiment the demon creates a temperature difference simply from information about the gas molecule temperatures and without transferring any energy directly to them. Until now, demonstrating the conversion of information to energy has been elusive. They describe how they coaxed a Brownian particle to travel upwards on a “spiral-staircase-like” potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information. http://www.physorg.com/news/2010-11-maxwell-demon-energy.html Demonic device converts information to energy – 2010 Excerpt: “This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski. http://www.scientificamerican.com/article.cfm?id=demonic-device-converts-inform “Is there a real connection between entropy in physics and the entropy of information? …. The equations of information theory and the second law are the same.” Siegfried, Dallas Morning News, 5/14/90 [quotes Robert W. Lucky, Executive Director of Research, AT&T Bell Laboratories, and John A. Wheeler, of Princeton and the University of Texas, Austin]
Moreover, the total information content of the bacterial cell, when it is calculated from this now ‘measurable’ thermodynamic perspective, is far larger than just what is encoded on the DNA:
Biophysics – Information theory. Relation between information and entropy: – Setlow-Pollard, Ed. Addison Wesley Excerpt: Linschitz gave the figure 9.3 x 10^-12 cal/deg or 9.3 x 10^-12 x 4.2 joules/deg for the entropy of a bacterial cell. Using the relation H = S/(k ln 2), we find that the information content is 4 x 10^12 bits. Morowitz’ deduction from the work of Bayne-Jones and Rhees gives the lower value of 5.6 x 10^11 bits, which is still in the neighborhood of 10^12 bits. Thus two quite different approaches give rather concordant figures. https://docs.google.com/document/d/18hO1bteXTPOqQtd2H12PI5wFFoTjwg8uBAU5N0nEQIE/edit “a one-celled bacterium, e. coli, is estimated to contain the equivalent of 100 million pages of Encyclopedia Britannica. Expressed in information in science jargon, this would be the same as 10^12 bits of information. In comparison, the total writings from classical Greek Civilization is only 10^9 bits, and the largest libraries in the world – The British Museum, Oxford Bodleian Library, New York Public Library, Harvard Widener Library, and the Moscow Lenin Library – have about 10 million volumes or 10^12 bits.” – R. C. Wysong “The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.” Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893-894
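As a check on the excerpt's arithmetic, a minimal sketch (assuming Linschitz's entropy figure is 9.3 x 10^-12 cal/deg, the reading consistent with the quoted 4 x 10^12 bits):

```python
from math import log

# H = S / (k ln 2): convert thermodynamic entropy to bits.
k = 1.380649e-23        # Boltzmann constant, J/K
S = 9.3e-12 * 4.2       # assumed cell entropy: 9.3e-12 cal/deg, in J/K
H = S / (k * log(2))
print(f"{H:.2e} bits")  # ~4.1e12, matching the quoted 4 x 10^12 bits
```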
As well, it is important to note that, counter-intuitively to materialistic thought (and to every kid who has ever taken a math exam), a computer need not, in principle, consume energy during computation; energy must be expended only when information is erased from it. This counter-intuitive fact is formally known as Landauer’s Principle.
Landauer’s principle Of note: “any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase … Specifically, each bit of lost information will lead to the release of a specific amount (at least kT ln 2) of heat.” Landauer’s Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008). http://en.wikipedia.org/wiki/Landauer%27s_principle
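For scale, the kT ln 2 bound from the excerpt works out as follows (a minimal sketch, assuming room temperature):

```python
from math import log

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # assumed room temperature, K
E_per_bit = k_B * T * log(2)  # Landauer limit: minimum heat per erased bit
print(f"{E_per_bit:.2e} J")   # ~2.87e-21 joules per bit erased
```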
It should be noted that Rolf Landauer himself, despite the counterintuitive fact that information is not generated by an expenditure of energy but can only be erased by one, presumed that the information in a computer was merely ‘physical’, i.e. merely emergent from a material basis, because erasing information from a computer requires energy to be spent. Landauer held this materialistic position in spite of objections from people like Roger Penrose and Norbert Wiener, who held that information is indeed real and has its own independent existence separate from matter and energy.
“Those devices (computers) can yield only approximations to a structure (of information) that has a deep and “computer independent” existence of its own.” - Roger Penrose – The Emperor’s New Mind – Pg 147 “Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.” Norbert Wiener – MIT mathematician (Cybernetics, 2nd edition, p. 132) Norbert Wiener created the modern field of control and communication systems, utilizing concepts like negative feedback. His seminal 1948 book Cybernetics both defined and named the new field.
Yet Landauer’s materialistic contention that ‘information is physical’ has now been overturned, because information is now known to be erasable from a computer without consuming energy.
Scientists show how to erase information without using energy – January 2011 Excerpt: Until now, scientists have thought that the process of erasing information requires energy. But a new study shows that, theoretically, information can be erased without using any energy at all. Instead, the cost of erasure can be paid in terms of another conserved quantity, such as spin angular momentum.,,, “Landauer said that information is physical because it takes energy to erase it. We are saying that the reason it is physical has a broader context than that.”, Vaccaro explained. http://www.physorg.com/news/2011-01-scientists-erase-energy.html Quantum knowledge cools computers: New understanding of entropy – June 2011 Excerpt: No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.” http://www.sciencedaily.com/releases/2011/06/110601134300.htm
Moreover, if physically measuring information, and/or erasing information from a computer without using energy, were not bad enough for the Darwinian belief that information is merely emergent from a material basis, it is now shown, by using quantum entanglement as a ‘quantum information channel’, that material reduces to information instead of information reducing to material as is believed in Darwinian materialistic presuppositions.
Quantum Entanglement and Information Quantum entanglement is a physical resource, like energy, associated with the peculiar nonclassical correlations that are possible between separated quantum systems. Entanglement can be measured, transformed, and purified. A pair of quantum systems in an entangled state can be used as a quantum information channel to perform computational and cryptographic tasks that are impossible for classical systems. The general study of the information-processing capabilities of quantum systems is the subject of quantum information theory. http://plato.stanford.edu/entries/qt-entangle/
And, as mentioned previously, by using this ‘measurable’ quantum information channel of entanglement, matter-energy has been reduced to quantum information. (Of note: energy is completely reduced to quantum information, whereas matter is semi-completely reduced, with the caveat that matter can be reduced to energy via E = mc^2.)
How Teleportation Will Work - Excerpt: In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. — As predicted, the original photon no longer existed once the replica was made. per howstuffworks Quantum Teleportation – IBM Research Page Excerpt: “it would destroy the original (photon) in the process” http://researcher.ibm.com/view_project.php?id=2862 Atom takes a quantum leap – 2009 Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre. “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. http://www.freerepublic.com/focus/news/2171769/posts
bornagain77
Barry, Before you leave for your trip, I hope you'll explain to us why you deleted an entire thread, including comments. keith s
Barry, A reminder. See above. keith s
Barry, When will you explain why you deleted an entire thread, along with two of Joe's comments? keith s
keiths #120:
Orgel was smart enough to keep complexity separate from improbability. Dembski conflated the two.
Eric #147:
I’m not sure where your allegation of Dembski’s conflation comes from.
It's simple, and it's right there in 1) the name itself: complex specified information; and 2) the equation, which includes P(T|H), a probability; and 3) the fact that Dembski attributes CSI when the probability becomes small enough. keith s
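For reference, a sketch of the equation keith s alludes to, in the form Dembski gives in his 2005 paper "Specified Complexity" (the example inputs below are illustrative assumptions, not Dembski's):

```python
from math import log2

def chi(p_T_given_H, phi_S_T):
    """Dembski's specified complexity (2005):
    chi = -log2(10^120 * phi_S(T) * P(T|H)),
    where 10^120 bounds the universe's probabilistic resources."""
    return -log2(1e120 * phi_S_T * p_T_given_H)

# Illustrative inputs: 500 fair coins all heads, phi_S(T) taken to be 1.
print(chi(0.5 ** 500, 1))  # ~101 bits; Dembski infers design when chi > 1
```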
Eric:
I am not familiar enough with Orgel’s work to be able to say precisely what he was driving at. But if he is talking about the origin of complex cellular structures then he is most definitely not just talking about Kolmogorov complexity.
In the passages where Orgel talks about specified complexity, he is not discussing origins. He presents specified complexity as a characteristic property of life vs. non-life, not an indicator of design vs. non-design. So there is no need to bring up probabilities, and indeed he doesn't. He makes it very clear that he is referring to Kolmogorov complexity. R0bb
Kolmogorov, nor Orgel, duh Joe
Orgel was smart enough not to blindly apply probability theory, and he found/developed a methodology to help distinguish those circumstances in which probability theory (alone?) does not apply. Joe
keith @120:
Orgel was smart enough to keep complexity separate from improbability. Dembski conflated the two.
Dembski discussed Kolmogorov complexity in his writings, taking time to show both the relevance and the distinction between Kolmogorov complexity and specified complexity. I'm not sure where your allegation of Dembski's conflation comes from. Dembski has written an incredible quantity over the years, so there might be some quote someone could find somewhere that can be understood as less than clear on the point. But generally Dembski is quite clear about the distinction. Furthermore, we can be relatively confident that he knows more about the topic than either you or I. ----- R0bb @121:
Just to be clear, the dispute in this thread is over the claim that Orgel and Dembski mean the same thing when they say “complexity”. Setting aside issues like origins, design, and the quality of Orgel’s work, what is your take on this claim?
Thanks, R0bb. That is a helpful way forward in the discussion. I'm not sure it makes any sense to set aside issues like origins. Particularly, as you have pointed out, the question of the origin of a structure/sequence/information is linked to the probability side (rather than just the Kolmogorov descriptive side). If Orgel was talking about the origin of complex structures, then he was definitely interested in probability, as that is the only thing that would be relevant (Kolmogorov is essentially irrelevant). That doesn't mean he wouldn't discuss a concept like Kolmogorov complexity -- just as Dembski does in his writings. My personal take? I am not familiar enough with Orgel's work to be able to say precisely what he was driving at. But if he is talking about the origin of complex cellular structures then he is most definitely not just talking about Kolmogorov complexity. Furthermore, it is quite common for a later researcher to build upon the ideas of an earlier researcher. In doing so, the later researcher will inevitably add a nuance, or a slightly different take, or a clarification, or a new way of looking at things. But we can still see the chain of thought linking the two, and we would still be justified in saying that the later researcher is building upon the ideas of the earlier, or that the earlier was describing essentially the same thing as the later, albeit without the later researcher's additional thoughts or nuances on the topic. Dembski himself makes the tie:
Neither Orgel nor Davies, however, provided a precise analytic account of specified complexity. I provide such an account in The Design Inference (1998b) and its sequel No Free Lunch (2002). In this section I want briefly to outline my work on specified complexity. Orgel and Davies used specified complexity loosely. I’ve formalized it as a statistical criterion for identifying the effects of intelligence.
So Dembski himself says that Orgel's concept is not a "precise analytic account" like Dembski's effort. He also says that Orgel "used specified complexity loosely," while Dembski feels he has "formalized it." This means, obviously, that Dembski has added to or developed Orgel's concept. Thus, is Dembski talking about exactly the same thing as Orgel, in the sense of simply repeating verbatim Orgel's thoughts on the topic? Of course not; he says he is going further and developing the concept beyond Orgel's discussion. Is Dembski, in developing his own take, talking about largely the same thing as Orgel? Yes. My take on the "dispute in this thread" is that people are straining at gnats. Dembski is clearly building on Orgel, and they both talk about specified complexity in the origins context. Those seem to be indisputable facts. Unfortunately, some people seem so obsessed with bashing Dembski that they refuse to see the practical realities and have gotten into a dispute that turns on a single quote here or a phrase there. Together with what appears to be a false allegation by keith that Dembski conflates concepts, the usefulness of the discussion may be less than it otherwise could have been. I think it would be useful for us all to better understand Orgel's approach, as well as how Dembski has built upon it in his work on the design inference. Unfortunately, we're stuck with a take-no-prisoners battle by some who are intent on discrediting Dembski at all costs, even with unfounded allegations. Eric Anderson
Proof that Aleta is totally hopeless: I had said: That is incorrect as the KC is a measure of the description of the thing. The strings have the same probability, however that is given a purely random occurrence. To which Aleta responded:
KC has nothing to do with the probability of the string occurring – it is just a measure of a property of the string irrespective of how it came about.
Notice that I never said that KC = probability. I never even implied it.
The sentence you quoted says nothing about probability – nothing about where the string came from, and the example in the article makes that clear.
Umm, the sentence I quoted was to refute what you said earlier:
1. You say that it is false that all strings have some measure of Kolmogorov complexity. I quoted the Wikipedia article that makes it clear that all strings do have some measure of Kolmogorov complexity. Can you provide some evidence or citation to back up your claim that some strings don’t have any measure of Kolmogorov complexity?
Are you that daft that you cannot remember what you posted? Joe
Aleta:
Of course not all strings are the same – Joe’s response doesn’t address the point at all.
Yet you said:
You say that it is false that all strings have some measure of Kolmogorov complexity. I quoted the Wikipedia article that makes it clear that all strings do have some measure of Kolmogorov complexity
Please make up your mind. And if you don't know what is meant by a level playing field then perhaps you shouldn't be having a discussion on probabilities.
This directly contradicts what he said earlier, where he agreed that all events didn't need to have equal probability.
Only in your mind. Not having an equal probability does not mean there isn't a level playing field. I conclude that Aleta is totally hopeless. Joe
I conclude that Joe is hopeless - his answers don't even begin to address my points, and are in fact contradictory. When I wrote,
KC has nothing to do with the probability of the string occurring – it is just a measure of a property of the string irrespective of how it came about.
Joe replied, "KC has to do with the string’s description and not all strings are the same. The two in the example are not the same." Of course not all strings are the same - Joe's response doesn't address the point at all. And when I wrote,
Are you saying that the only place probabilities matter is when all events have equal probability?
, Joe replied, "No. A level playing field is required, though." But when I asked him to explain what he meant by a level playing field, he replied, "And your example has a weighted coin- it is not a level playing field." This directly contradicts what he said earlier, where he agreed that all events didn't need to have equal probability. I'll move on to better things in my life. Aleta
R0bb- Leave the complex strings alone and try to meet my challenge. And your claim of my being vague is laughable. Joe
R0bb:
I’ll repeat what I said before: Apply a ROT13 to a very complex string. The resulting new string has a probability of 1 because ROT13 is a deterministic operation, and the new string is also guaranteed to be very complex.
You are very desperate. Joe
Aleta:
KC has nothing to do with the probability of the string occurring – it is just a measure of a property of the string irrespective of how it came about.
KC has to do with the string's description and not all strings are the same. The two in the example are not the same. And your example has a weighted coin- it is not a level playing field. Joe
But Joe, KC has nothing to do with the probability of the string occurring - it is just a measure of a property of the string irrespective of how it came about. The sentence you quoted says nothing about probability - nothing about where the string came from, and the example in the article makes that clear. And what do you mean by "a level playing field is required"? What is there about my example that is not a "level playing field"? Aleta
Joe:
If there aren’t any cases in which something complex also has a high probability of occurring, then it is clear that Kolmogorov complexity and probability go hand in hand.
I'll repeat what I said before: Apply a ROT13 to a very complex string. The resulting new string has a probability of 1 because ROT13 is a deterministic operation, and the new string is also guaranteed to be very complex. Of course, you can easily come up with an ad hoc reason to reject this response to your challenge. The problem is that your challenge is so vaguely conceived that the goalposts are highly mobile. Expressing your challenge in mathematical notation would be a good first step toward planting the goalposts. R0bb
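A minimal sketch of R0bb's ROT13 point (the sample string is taken from the Wikipedia example quoted further down the thread):

```python
import codecs

# ROT13 is deterministic and invertible: given its input, the output occurs
# with probability 1, yet its Kolmogorov complexity differs from the input's
# by at most a constant (roughly the length of a ROT13 program).
s = "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"  # a string with no obvious short description
t = codecs.encode(s, "rot13")
print(t)
print(codecs.encode(t, "rot13") == s)   # True: ROT13 is its own inverse
```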
1. You say that it is false that all strings have some measure of Kolmogorov complexity. I quoted the Wikipedia article that makes it clear that all strings do have some measure of Kolmogorov complexity. Can you provide some evidence or citation to back up your claim that some strings don’t have any measure of Kolmogorov complexity?
That is incorrect, as the KC is a measure of the description of the thing. The strings have the same probability; however, that is given a purely random occurrence. Wikipedia: The Kolmogorov complexity … of an object, such as a piece of text, is a measure of the computability resources needed to specify the object. Geez, you can't even understand your reference.
Are you saying that the only place probabilities matter is when all events have equal probability?
No. A level playing field is required, though. Joe
Joe, I believe you are wrong on both counts. 1. You say that it is false that all strings have some measure of Kolmogorov complexity. I quoted the Wikipedia article that makes it clear that all strings do have some measure of Kolmogorov complexity. Can you provide some evidence or citation to back up your claim that some strings don't have any measure of Kolmogorov complexity? 2. When I offered the example that, for a coin which turns up heads 99% of the time, P(10 heads) ≈ 90% and P(10 tails) = 10^-20, you replied,
You cannot use a totally biased example to make your case. Probabilities only matter on a level playing field.
Are you saying that the only place probabilities matter is when all events have equal probability? If so, that is certainly wrong. Many (most) real world problems involving probability involve situations where some event is more likely than 50%. When I taught beginning stats, we had all sorts of problems involving such things as the reliability of medical tests, random sampling of products for defects, etc. where the probability of success and the probability of failure were very far from a 50-50 split - in fact some were even more unbalanced than the 99-1% split in my example. Aleta
And AGAIN: If there aren’t any cases in which something complex also has a high probability of occurring, then it is clear that Kolmogorov complexity and probability go hand in hand. So far so good... Joe
Aleta:
So all strings have some measure of Kolmogorov complexity.
That is false. Allegedly they have the same probability but that is also false.
As above, the string HHHHHHHHHH has a probability of 0.99^10 ≈ 90%. The string TTTTTTTTTT has a probability of 0.01^10 = 10^-20, which is extremely small.
You cannot use a totally biased example to make your case. Probabilities only matter on a level playing field. Try again Joe
R0bb:
Even if everything that’s complex is also improbable, it could still be the case that some things that are improbable are not complex.
Examples please. The string in 36 is not highly improbable as randomness did not produce it. Joe
This is very interesting. I had forgotten about the Wikipedia article on the problem when I posted it Wednesday, but I had read it before. The article says that the way I stated the problem leads to the answer of 1/2, not 1/3, but that the "at least one boy" formulation leads to 1/3. I see that, and I think RObb offered a good explanation why this is the case:
The solution in #112 assumes that BB, BG, and GB are all equally likely. But given that we’ve seen a boy, BB is actually twice as likely as each of the others. So the answer is in fact 1/2.
The main issue seems to be how you find out that there is at least one boy - whether through a random process by looking in the window (which leads to a probability of 1/2), or by being told there is at least one boy. For instance, if the father walked up to you and said "my boy Bill is sick" and then went into the house, I think the interpretation that leads to an answer of 1/3 might still hold. Aleta
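A minimal sketch bearing this out by simulation (both ways of learning there is at least one boy, assuming equal probabilities for boys and girls):

```python
import random

N = 100_000
window_hits = window_trials = told_hits = told_trials = 0

for _ in range(N):
    kids = [random.choice("BG"), random.choice("BG")]
    seen = random.choice(kids)       # a random child happens to be at the window
    if seen == "B":                  # protocol 1: we saw a boy in the window
        window_trials += 1
        window_hits += kids.count("B") == 2
    if "B" in kids:                  # protocol 2: told "at least one is a boy"
        told_trials += 1
        told_hits += kids.count("B") == 2

print(window_hits / window_trials)   # ~0.50
print(told_hits / told_trials)       # ~0.33
```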
Re the boy-girl paradox, I think I was wrong and that R0bb and Orloog are right. But I could be wrong. :-) In any case, a good night's sleep should help clear things up. keith s
Barry, When are you going to explain why you deleted that thread the other day? keith s
Mark, thank you for the link to the wikipedia article. According to it and the way Aleta phrased the question, the answer is indeed 1/2. Orloog
Mark Frank:
(I just missed the deadline for deleting my comment!)
Don't delete comments! That way lies chaos. Just think of the 20-minute window as an opportunity to correct typos and add additional comments prefaced by "ETA"="edited to add". (I speak from experience. Internet discussions can become chaotic if people start deleting comments.) Even worse is when people delete entire threads. keith s
#127 This inspired me to look at the wikipedia article on the paradox. Apparently it is more complex and debatable than I thought. (I just missed the deadline for deleting my comment!) Mark Frank
Keith S, Orloog: Aleta is right - the probability is 1/3, not 1/2. It is a well-known paradox. Prior to seeing the boy at the window the four possibilities - BB, GB, BG and GG - are all equally probable. Observing the boy eliminates GG but does not change the relative probability of the other three possibilities. I am interested to know how you did your simulation, Orloog. You can be pretty certain there is something wrong with it, as the maths is bombproof as far as I know. Mark Frank
Thank you, R0bb: I even ran a simulation, as I didn't trust my calculations any longer; the result was 1/2... Orloog
R0bb:
Not to be contrarian, but I’m going to have to disagree with those who gave an answer of 1/3 to Aleta’s riddle. The solution in #112 assumes that BB, BG, and GB are all equally likely. But given that we’ve seen a boy, BB is actually twice as likely as each of the others. So the answer is in fact 1/2.
Interesting! I think I understand your logic, and I think I can show where it goes wrong, but let me think about it some more and reply later. In the meantime, I have some other comments to write. :-) keith s
Not to be contrarian, but I'm going to have to disagree with those who gave an answer of 1/3 to Aleta's riddle. The solution in #112 assumes that BB, BG, and GB are all equally likely. But given that we've seen a boy, BB is actually twice as likely as each of the others. So the answer is in fact 1/2. R0bb
Eric, After you've answered R0bb, a challenge awaits on your own thread. keith s
Mung @ 54:
Well there I went again. But this time it wasn’t Orgel using the word “information” but Kolmogorov. Can’t wait to see your snide remark about this one.
You've pointed out that Orgel, Kolmogorov, and Dembski all use the word "information". I'll gladly respond when you tell me what conclusion you draw from this. R0bb
Eric @ 119: Just to be clear, the dispute in this thread is over the claim that Orgel and Dembski mean the same thing when they say "complexity". Setting aside issues like origins, design, and the quality of Orgel's work, what is your take on this claim? R0bb
Eric, Orgel was talking about Kolmogorov complexity in the quote you gave us:
One can see intuitively that many instructions are needed to specify a complex structure. On the other hand a simple repeating structure can be specified in rather few instructions.
Does that mean that he wasn't interested in probability? Of course not. You can't do OOL work without taking probability into account. Orgel was smart enough to keep complexity separate from improbability. Dembski conflated the two. keith s
keith @29:
Orgel is talking about Kolmogorov complexity while Dembski is talking about improbability.
It is a simple question I am asking: Is Orgel in his book really focused on Kolmogorov complexity rather than improbability? Did Orgel say he was talking about Kolmogorov complexity in the context of the origin of specified complexity? That is the question. If he did, then he was off base. If not, then you have gotten us off track. Eric Anderson
Eric Anderson:
keith @29 commented that Orgel was interested in Kolmogorov complexity, not probability.
No, I didn't. Please read more carefully, Eric. Orgel was interested in both complexity and improbability, but unlike Dembski, he didn't conflate the two. keith s
Regarding Kolmogorov complexity, I've always been of the view (though I am certainly open to being corrected) that Kolmogorov complexity has little to do with what we are interested in for design purposes. keith @29 commented that Orgel was interested in Kolmogorov complexity, not probability. Much of the back and forth on this thread depends on whether that is in fact the case. Does anyone have a clear statement from Orgel that he was primarily interested in Kolmogorov complexity and not probability? After all, he wrote a book about the origins of life, so presumably he was interested -- one would think -- in the origin and source of the specified complexity he observed in living organisms, not so much on the compressibility of that complexity for modern information systems purposes. It seems strange that Orgel would be focusing only on Kolmogorov complexity and not on the probabilities that relate to the origin of such specified complexity. I'm wondering if the whole discussion has been taken down the garden path by comment #29. Again, however, while it may not have much relevance to the design inference I'd nevertheless be curious to know whether Orgel was in fact only discussing algorithmic compressibility as opposed to probability in his book on the origins of life. If so, then it seems he may have been off on the wrong track. Eric Anderson
Thanks, Aleta @112. Good explanation. Orloog, taken together, Aleta's riddle and mine make a great example of how information helps narrow the range of possibilities. In other words, an infusion of information helps narrow the search space. Very simple example, but quite clear. Indeed, one possible way of defining information is "the elimination of possibilities." Eric Anderson
Hi Orloog. See my explanation at #112 and see if that makes sense to you, and then see how my problem differs from Eric's, for which your reasoning applies. Aleta
keiths:
I’m curious. What do you think of Barry’s behavior?
Let's say Barry posted something and then realized he disagreed with what he wrote and so deleted it. So what? Every user has the opportunity to do just that. What do folks think of keiths's behavior? Should he maybe use that feature and delete most of his posts after submitting them? Mung
In 104, I wrote,
to Joe: consider a “coin” that is weighted so it comes up heads 99% of the time, and throw 20 of these coins. These would come up all heads about 82% of the time, which is a pretty high probability. However, 20 heads would have low Kolmogorov complexity because a simple rule could describe them. This is a case where Kolmogorov complexity and probability do not go hand in hand.
Joe replied,
There isn’t any complexity in your example. The high probability matches the simple rule.
This doesn't make sense. First consider what Wikipedia says (and I'm sure other sources would confirm this).
The Kolmogorov complexity ... of an object, such as a piece of text, is a measure of the computability resources needed to specify the object. For example, consider the following two strings of 32 lowercase letters and digits:

abababababababababababababababab
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7

The first string has a short English-language description, namely "ab 16 times", which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 32 characters. More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language. It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string's size are not considered to be complex.
So all strings have some measure of Kolmogorov complexity. You can't say that "there isn't any complexity" in a string. Consider this. As above, the string HHHHHHHHHH has a probability of 0.99^10 ≈ 90%. The string TTTTTTTTTT has a probability of 0.01^10 = 10^-20, which is extremely small. HHHHHHHHHH is quite probable, and TTTTTTTTTT is extremely improbable, but both have the same Kolmogorov complexity: both can be described with the same "computability resources", one as "10 heads" and one as "10 tails". Thus Kolmogorov complexity and improbability do not "go hand in hand." Aleta
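A minimal sketch checking Aleta's numbers:

```python
p_heads, n = 0.99, 10

p_all_heads = p_heads ** n        # ~0.904: highly probable
p_all_tails = (1 - p_heads) ** n  # 1e-20: astronomically improbable
print(p_all_heads, p_all_tails)
# Both outcomes have equally short descriptions ("10 heads" / "10 tails"),
# i.e. the same low Kolmogorov complexity, despite very different probabilities.
```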
Re: 110: 1/2. Others might like an explanation. The difference is between knowing that one of the children is a boy and knowing that a particular child (the oldest) is a boy. There are four possibilities (assume the first in each pair is the oldest): BB, BG, GB, GG. In problem #1, at 64, knowing that we saw a boy eliminates the GG possibility. Of the remaining three possibilities, only one has another boy, so the probability the other child is a boy is 1/3. In Eric's problem we are told the first child is a boy, which eliminates both GG and GB. Of the remaining two possibilities, one has the youngest child a boy, so the probability is 1/2. Aleta
Keith S, Aleta:
Aleta:
Fun with probability – maybe you guys know this one: A family moves into a house across the street. You know they have two children, but you know nothing about their gender. One day you see a boy in the window. Assuming equal probabilities for boys and girls, what is the probability the other child is also a boy?
I was waiting to see if any of the IDers would tackle this, but since they haven’t, I will. The probability that the other child is a boy is 1/3.
Sorry, I don't get it. For me, the probability that the other child is a boy is still 1/2 - isn't that independent from the sex of the child at the window? I painted trees, diagrams, filled out charts, but the result is always the same - unless you claim that boys are domineering windows... Orloog
Aleta @64: Good Punnett Square riddle! And a good example of how information can be used to help narrow a search space. Here is the follow up riddle: The neighbor walks across the street with the boy and says: "I'd like you to meet my son. He is our oldest." Now, what is the probability that the younger child is a boy? :) Eric Anderson
R0bb @36:
The probability of a string depends on the process (or hypothesized process) that produced it. Kolmogorov complexity does not.
Was Orgel's discussion related to the question of the process that produced such features, that is to say, in the origin of such features? His book, I believe, was called "The Origins of Life"? Eric Anderson
R0bb: WRT Shallit and randomness, you have to understand Shallit’s approach to ID discussions.
You have got it backwards. We don't have to understand Jeffrey Shallit; it's exactly the other way around. And J.S. fails miserably, since it was Jeffrey Shallit who commented on an article by Barry. A quick summary of the article: Barry offered two strings of text. String #1 was created by Barry haphazardly running his hands across his computer keyboard. String #2 was the first 12 lines of Hamlet's soliloquy. Now what should be obvious - in the context of an ID debate - to anyone with half a brain is that string #1 is obviously random and string #2 is obviously DESIGNED (the opposite of random). So what does Jeffrey Shallit do? He entered the ID debate, but does he understand what it is all about? No, he hasn't got a clue. So Jeffrey Shallit runs both strings through a stupid compression algorithm and states that a Shakespearean sonnet is "more random" than keyboard pounding, thereby 'proving' that Barry is wrong. Talk about missing the point... Box
Joe:
If there aren’t any cases in which something complex also has a high probability of occurring, then it is clear that Kolmogorov complexity and probability go hand in hand.
First of all, non sequitur. Even if everything that's complex is also improbable, it could still be the case that some things that are improbable are not complex. And actually, there are both types of mismatches. To get something that's complex but highly probable, consider applying a ROT13 to a very complex string. With a probability of 1 you'll get a particular new string, and that string will be complex. And for an improbable outcome that isn't complex, consider the string in #36. R0bb
to Joe: consider a “coin” that is weighted so it comes up heads 99% of the time, and throw 20 of these coins. These would come up all heads about 82% of the time, which is a pretty high probability. However, 20 heads would have low Kolmogorov complexity because a simple rule could describe them. This is a case where Kolmogorov complexity and probability do not go hand in hand.
There isn't any complexity in your example. The high probability matches the simple rule. Joe
Box @ 103, Barry appealed to Shallit in Barry's now-disappeared post. Keith was responding to that post. WRT Shallit and randomness, you have to understand Shallit's approach to ID discussions. His usage of terms is always technical and rigorous, and he assumes (or pretends) that IDists are using the terms likewise. So when IDists talk about information, randomness, or even specified complexity, Shallit responds as if the IDists have the formal definitions of those terms in mind. One could argue that he's paying IDists a compliment. Most people have an informal understanding of the term "random", which they associate with non-determinism or arbitrariness. But in formal randomness measures, a highly random string may be produced by a deterministic process, or by deliberate design. What matters is the string itself, not the process that produced it. So while it may seem that the product of arbitrary tapping on the keyboard must be more random than an intentionally crafted sonnet, such is not necessarily the case for formal definitions of "random". R0bb
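A minimal sketch of the kind of formal, origin-blind measure R0bb describes, using compressed length as a crude, computable upper bound on Kolmogorov complexity (the two sample strings below are stand-ins of our own, not Barry's originals):

```python
import zlib

# Compressed length approximates descriptive complexity, and it depends
# only on the string itself, not on the process that produced it.
mash = b"asdf;lkjqweroiuzxcv,mnbqwpeorizxnvb;laskdjf"  # stand-in keyboard pounding
verse = b"To be, or not to be, that is the question:"  # intentionally crafted text

for label, s in [("mash", mash), ("verse", verse)]:
    print(label, len(s), "->", len(zlib.compress(s, 9)))
```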
to Joe: consider a "coin" that is weighted so it comes up heads 99% of the time, and throw 20 of these coins. These would come up all heads about 82% of the time, which is a pretty high probability. However, 20 heads would have low Kolmogorov complexity because a simple rule could describe them. This is a case where Kolmogorov complexity and probability do not go hand in hand. Aleta
Keith:
Keith #101: Barry is the one who brought Shallit up in support of his argument, not me.
Now I'm confused... if you did not bring Jeffrey Shallit up, then who is the guy - by the name of Jeffrey Shallit - that you quote extensively in your post #81? So, here is my question again: are we talking about the same 'barking mad' Jeffrey Shallit who claims that a Shakespearean sonnet is "more random" than keyboard pounding? And if so, wouldn't you agree that Jeffrey Shallit is a fine one to talk about "spouting nonsense"? Box
It is worth saying it again: If there aren’t any cases in which something complex also has a high probability of occurring, then it is clear that Kolmogorov complexity and probability go hand in hand. Joe
Box, Barry is the one who brought Shallit up in support of his argument, not me. Take a look at the vanishing OP. I'm curious. What do you think of Barry's behavior? keith s
Keith,
[ Keith quoting Jeffrey Shallit ]: But that’s Barry’s M. O.: spout nonsense, never admit he’s wrong, claim victory, and ban dissenters.
Your buddy Jeffrey Shallit you keep talking about - is that the same 'barking mad' Jeffrey Shallit who claims that a Shakespearean sonnet is "more random" than keyboard pounding? If so, he is a fine one to talk about "spouting nonsense". Box
Barry, About that thread you deleted yesterday -- what is your explanation of your behavior? keith s
I am not arguing; I was just making an observation that the two scenarios are pretty much the same wrt the contestants, i.e. the people trying to determine the probability. Joe
I'm not sure what we are arguing about. In both cases the observer, who is the contestant in the Monty Hall problem, has some beginning knowledge, which includes some beginning probabilities. Then the observer learns something new that changes the probabilities relative to the original situation. That is what is similar about the two problems, although other aspects of the problems are different. Aleta
So what? The contestant is the one getting the boost in odds if she/he chooses to switch. The Monte Hall scenario pertains to the contestants and your scenario pertains to the outside observer. Joe
Yes, but Monte is the guy who opens the door before presenting the contestant with the offer of switching his original choice or not. Aleta
Umm the Monte Hall problem pertains to the contestant(s), not Monte. Joe
To Joe: not exactly the same as the Monte Hall problem, but a similar concept. A key difference in the two problems is that in the Monte Hall problem, Monte knows what is behind each door and chooses a door that he knows does not hide the prize. In the two children problem, the child that shows up in the window is a random choice between the two children. So the two problems are different in that regard. Aleta
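The Monty Hall side of the comparison is easy to check by simulation. A minimal sketch, assuming the standard three-door game (the trial count is arbitrary):

# monty.rb -- staying wins ~1/3 of the time; switching wins ~2/3
TRIALS = 100_000
stay_wins = switch_wins = 0
TRIALS.times do
  prize  = rand(3)                              # prize behind a random door
  choice = rand(3)                              # contestant's initial pick
  opened = ([0, 1, 2] - [prize, choice]).sample # Monty opens a known-empty door
  stay_wins   += 1 if choice == prize
  switch_wins += 1 if ([0, 1, 2] - [choice, opened]).first == prize
end
puts "stay:   #{stay_wins.to_f / TRIALS}"       # ~0.333
puts "switch: #{switch_wins.to_f / TRIALS}"     # ~0.667

The asymmetry comes from exactly the difference noted above: Monty's choice of door is constrained by what he knows, so his action carries information.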
BTW your example of 2 children is the same as the "Monty Hall" problem. Joe
OK Aleta, good luck with that Joe
Hi Joe. I'm not discussing evolution. I'm discussing probability. I'm interested in the way the world unfolds in general, and in how we can use math to model various aspects of the world, but I'm not very interested in the evolution debate that goes on here. Aleta
Joe writes,
Earth to Aleta- Instead of playing games why don’t you at least try to support your position?
My position is that computing probabilities involves more than just simple one-step events composed of a multitude of independent events, such as throwing 500 coins. In particular, models that don't take into account multiple steps, in which each step is dependent on what happened before, are not likely to be good models of what happens in the real world. The 500 coins example is commonly used to illustrate a situation in probability theory. I am offering some more complicated examples from probability theory in order to illustrate some complexities that the 500 coins example doesn't cover. My examples and comments are supporting my position, I believe. Aleta
Aleta, unguided evolution cannot be modelled, so how can we have a good model for it? Probabilities are all we have wrt unguided evolution, yet evos cannot provide those probabilities, and they want to blame ID. Why don't you find that strange? Joe
Bob writes,
Coin-flipping models do model situations with time too, but then the probabilities of heads depend on the previous set of coin flips.
Yes, that's why I asked the question I did back at 19: if we flip coins, and then do something else that depends on the first outcomes, we now have a step-by-step situation that is different than just throwing all the coins at once and looking at just that result. It's like the game of Yahtzee: if I throw five dice, the probability of all five being the same is 1 out of 6^4 = 1 out of 1296. However, if I can leave some behind and throw the remainder, and then do that once more, the probability of getting all five the same would be much greater - about 1 in 21, according to several places on the internet. This goes to the heart of a statement I have made in this thread: that merely throwing 500 coins is not a good model for things that happen in the real world. Mung says I am blatantly wrong, and I have asked him to give me an example so I can understand why he thinks that. Aleta
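The Yahtzee figure can be checked the same way. A sketch, assuming the simple strategy of keeping whichever face is currently most common and rerolling the rest (both the strategy and the trial count are illustrative choices):

# yahtzee.rb -- chance of five-of-a-kind within three rolls
TRIALS = 100_000
hits = TRIALS.times.count do
  dice = Array.new(5) { rand(1..6) }
  2.times do
    keep = dice.group_by { |d| d }.values.max_by(&:size).first  # modal face
    dice = dice.map { |d| d == keep ? d : rand(1..6) }          # reroll the rest
  end
  dice.uniq.size == 1
end
puts hits.to_f / TRIALS   # ~0.046, about 1 in 21-22, vs 1/1296 in a single roll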
And keith blows it again- he quotes Dembski:
But given nothing more than ordinary probability theory, Kolmogorov could at most say that each of these events had the same small probability of occurring, namely 1 in 2^100, or approximately 1 in 10^30. Indeed, every sequence of 100 coin tosses has exactly this same small probability of occurring. Since probabilities alone could not discriminate E sub R from E sub N, Kolmogorov looked elsewhere. Where he looked was computational complexity theory. The Design Inference, p. 169
All that means is that sometimes one needs more information to make an inference wrt randomness -- one also needs the context. Joe
Earth to Aleta- Instead of playing games why don't you at least try to support your position? Or is your position so pathetic that it cannot be supported? Joe
If there aren't any cases in which something complex also has a high probability of occurring, then it is clear that Kolmogorov complexity and probability go hand in hand. Joe
Re: Fun with probability at 64. I knew Keith would know. I figured, however, that the question wouldn't draw much interest. This one won't either, but I'll offer it anyway - maybe someone here will not have seen it and will find it a fun problem to think about. Three players, A, B, and C, are placed at the vertices of an equilateral triangle, armed with "guns". They are to take turns shooting at each other, one shot per turn. If a player shoots at another player and hits him, the second player is out of the game (i.e., "dead"). On his turn, a player may shoot at any surviving player, or pass and not shoot at anyone. The contest continues until one player wins by being the only survivor. A has a 1/3 chance of hitting on any shot (33 1/3%), B has a 1/2 chance of hitting on any shot (50%), and C always hits (100%). A gets to shoot first. If B is still alive, he gets to shoot second. If C is still alive, then he gets to shoot next. The rotation continues among the surviving players until only one person is left. Some assumptions: We assume that each player knows the accuracy level of each of the other players (e.g., all players know that C is a sure shot, A knows that B is a 50% shooter, and so on). We assume that each player will adopt the strategy which maximizes his own chance of survival, and we assume that each player knows that the other players will act so as to maximize their own survival. The questions are: a) given that everyone plays to maximize their own chances of survival, who has the best chance of winning? b) what are the best strategies for each player? c) what are the exact odds of each person surviving if everyone follows their best strategy? Aleta
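For anyone who wants to check an answer empirically rather than work the puzzle out analytically, here is a Monte Carlo harness. The strategies encoded are assumptions to experiment with, not an asserted solution: each player fires at the most accurate surviving rival, and A can optionally pass while both rivals live.

# truel.rb -- compare strategies for the three-shooter puzzle
ACC = { 'A' => 1.0/3, 'B' => 0.5, 'C' => 1.0 }

def play(a_passes_first)
  alive = %w[A B C]
  until alive.size == 1
    %w[A B C].each do |p|
      next unless alive.include?(p)
      foes = alive - [p]
      next if foes.empty?                                   # already the winner
      next if p == 'A' && a_passes_first && foes.size == 2  # A shoots into the air
      target = foes.max_by { |f| ACC[f] }                   # aim at the best shooter
      alive.delete(target) if rand < ACC[p]
    end
  end
  alive.first
end

[true, false].each do |pass|
  wins = Hash.new(0)
  100_000.times { wins[play(pass)] += 1 }
  puts "A passes while both rivals live: #{pass} -> #{wins.sort.to_h}"
end

Running both settings shows which opening move serves A better.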
Keith, Just to make sure, this Jeffrey Shallit you keep talking about, is that the same idiot who claims that a shakespearean sonnet is more random than keyboard pounding? Box
I just discovered something even funnier: Jeffrey Shallit himself -- the very authority that Barry appeals to -- confirms that Barry got it completely wrong: Barry Arrington: A Walking Dunning-Kruger Effect:
The wonderful thing about lawyer and CPA Barry Arrington taking over the ID creationist blog, Uncommon Descent, is that he's so completely clueless about nearly everything. He truly is the gift that keeps on giving. For example, here Barry claims, "Kolmogorov complexity is a measure of randomness (i.e., probability). Don’t believe me? Just ask your buddy Jeffrey Shallit (see here)". Barry doesn't have even a glimmer about why he's completely wrong. In contrast to Shannon, Kolmogorov complexity is a completely probability-free theory of information. That is, in fact, its virtue: it assigns a measure of complexity that is independent of a probability distribution. It makes no sense at all to say Kolmogorov is a "measure of randomness (i.e., probability)". You can define a certain probability measure based on Kolmogorov complexity, but that's another matter entirely. But that's Barry's M. O.: spout nonsense, never admit he's wrong, claim victory, and ban dissenters. I'm guessing he'll apply the same strategy here. If there's any better example of how a religion-addled mind works, I don't know one.
Excellent work, Barry. You've shown all of us that: 1. You have strong opinions about things you know nothing about. 2. You've attempted to mock someone who understands this stuff far better than you do. 3. The very authority you appealed to confirms that you got it completely wrong, as do Robb and I and Dembski himself, through his book. 4. You tried to erase the evidence by deleting the entire thread. You look pretty ridiculous right now. Is there anything else you'd like to do to embarrass yourself in front of your audience? keith s
keith s @ 69
It’s because 500 bits is Dembski’s “universal probability bound”, aka “the UPB”.
OK, that makes sense Me_Think
keiths #29, to Eric:
My point is to refute the silly notion that Barry and KF keep repeating: that Dembski’s “specified complexity” is essentially the same thing as Orgel’s. It obviously isn’t. Kolmogorov complexity and improbability are not the same thing.
Barry disagreed and even posted a mocking OP to that effect which he later surreptitiously deleted. From the deleted OP:
Keiths responds:
Not at all. Orgel is talking about Kolmogorov complexity while Dembski is talking about improbability.
Uh, Keiths, Kolmogorov complexity is a measure of randomness (i.e., probability). Don't believe me? Just ask your buddy Jeffrey Shallit (see here).
Once he realized his error, Barry deleted the thread to hide the evidence. That's funny enough, but here's another good one: Dembski himself stresses the distinction between Kolmogorov complexity and improbability:
But given nothing more than ordinary probability theory, Kolmogorov could at most say that each of these events had the same small probability of occurring, namely 1 in 2^100, or approximately 1 in 10^30. Indeed, every sequence of 100 coin tosses has exactly this same small probability of occurring. Since probabilities alone could not discriminate E sub R from E sub N, Kolmogorov looked elsewhere. Where he looked was computational complexity theory. The Design Inference, p. 169
I look forward to Barry's explanation of how Dembski is an idiot, and how we should all trust Barry instead when he tells us that Kolmogorov complexity and improbability are the same thing. keith s
Mung
Meanwhile, in the realm of what is actually possible, still no 40 heads in a row.
So ?
Now give us the correlation with Salvador Cordoza.
????? Me_Think
Aleta @ 60 - Coin-flipping models do model situations with time too, but then the probabilities of heads depend on the previous set of coin flips. This is how the Wright-Fisher model in population genetics works, as well as a lot of stochastic process models. The first comment on this thread is trying to engage with Barry on this, but he keeps on ignoring it. Incidentally, a lot of the early work developing the maths behind stochastic processes was done by Kolmogorov. Bob O'H
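For readers who don't know it, the Wright-Fisher model is a concrete instance of what Bob O'H describes: each generation's allele count is a binomial draw whose success probability is the allele's frequency in the previous generation, so every step depends on the step before. A minimal neutral-drift sketch (population size and starting count are arbitrary choices):

# wright_fisher.rb -- neutral genetic drift as history-dependent coin flipping
N = 100        # haploid population size
count = 50     # initial copies of the tracked allele
gen = 0
until count == 0 || count == N            # loss and fixation are absorbing
  freq  = count.to_f / N
  count = N.times.count { rand < freq }   # binomial(N, freq) draw
  gen  += 1
end
puts "allele #{count == N ? 'fixed' : 'lost'} after #{gen} generations"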
Aleta:
Fun with probability – maybe you guys know this one: A family moves into a house across the street. You know they have two children, but you know nothing about their gender. One day you see a boy in the window. Assuming equal probabilities for boys and girls, what is the probability the other child is also a boy?
I was waiting to see if any of the IDers would tackle this, but since they haven't, I will. The probability that the other child is a boy is 1/3. keith s
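Whether 1/3 is right turns on exactly the modeling point Aleta flagged earlier: conditioning on "at least one child is a boy" gives 1/3, while conditioning on "a randomly appearing child happened to be a boy" gives 1/2. A sketch of both readings:

# two_children.rb -- the answer depends on how the boy was observed
TRIALS = 200_000
at_least_one = [0, 0]   # [qualifying families, other child also a boy]
window       = [0, 0]
TRIALS.times do
  kids = [%w[B G].sample, %w[B G].sample]
  if kids.include?('B')                 # reading 1: "at least one is a boy"
    at_least_one[0] += 1
    at_least_one[1] += 1 if kids == %w[B B]
  end
  shown = rand(2)                       # reading 2: a random child appears
  if kids[shown] == 'B'
    window[0] += 1
    window[1] += 1 if kids[1 - shown] == 'B'
  end
end
puts "at-least-one reading: #{at_least_one[1].to_f / at_least_one[0]}"  # ~1/3
puts "window reading:       #{window[1].to_f / window[0]}"              # ~1/2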
R0bb @59: Quoting Dembski:
For something to exhibit specified complexity, it must conform to a specification that signifies an event that has small probability (i.e., is probabilistically complex) but also is simple as far as patterns go (i.e., has low patterned complexity).
Yes, unfortunately this is one of the more misunderstood statements by Dembski. I mean among ID proponents. Nearly everything Dembski said is misunderstood by his detractors. :) His reference to "simple" here needs to be properly understood. He is simply saying that there is some generalized pattern, as opposed to a pure random distribution. Obviously a Shakespearean sonnet is quite complex in terms of its probability, as well as having a specification. Yet it is less complex (more "simple") than a pure random distribution of English characters, because it follows certain rules of spelling, grammar, punctuation, as well as higher order patterns of word phrases and perhaps even ideas conveyed. Thus, while not an absolute truism, it is often the case that a designed object will be more "simple" than a pure random distribution. That is all Dembski is referring to. But this comparative "simplicity" versus a random draw is very different from the kind of simplicity that arises through necessity: repeating patterns with little complexity. ----- What this means in practice is that at one end of the spectrum we have a repetitive, non-complex pattern. At the other far end of the spectrum we have a pure random distribution (as random as such a thing can be). Designed objects can be anywhere along the spectrum, because an agent can purposely produce a simple repetitive pattern or something essentially indistinguishable from a random draw. However, in most cases, designed things will lie somewhere in the middle of the spectrum. The design filter (or CSI if you prefer) will not pick up a designed object at the first end of the spectrum because it is not complex enough. It will not pick up something designed to look like a random draw at the other end of the spectrum because it lacks a recognizable specification. In both such cases, the design filter will return a false negative. However, in the sweet spot (which is actually quite wide and covers much of the spectrum) it will properly flag designed objects, because they have a recognizable specification plus adequate complexity. Eric Anderson
R0bb @36: There is a serious problem with the "everything-is-just-as-improbable" line of argumentation when we are talking about ascertaining the origin of something.
Randomly generate a string of 50 English characters. The following string is an improbable outcome (as is every other string of 50 English characters):
Yes, but that is assuming the string is generated by a random generator. However, the way in which an artifact was generated when we are examining it to determine its origin is precisely the question at issue. Saying that every string of that length is just as improbable as any other, in the context of design detection, is to assume as a premise the very conclusion you are trying to reach. We cannot say, when we see a string of characters (or any other artifact) that exhibits a specification or particular pattern, that "Well, every other outcome is just as improbable, so nothing special to see here." The improbability, as you point out, is based on the process that produced it. And the process that produced it is precisely the question at issue. When we come across a string like: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa or some physical equivalent, like a crystal structure or a repeating pulse from a pulsar, we most definitely do not conclude it was produced by some random process that just happened to produce all a's this time around, because, hey, every sequence is just as improbable as the other. Eric Anderson
Meanwhile, in the realm of what is actually possible, still no 40 heads in a row. Mung
Why do you think Barry deleted the other thread along with your comments?
He realized that it (somehow) violated the ONH of opening posts containing "keith". Joe
keiths could perhaps be taken seriously if he asserts that there is no maximum number of events that could possibly have happened in the history of the universe. Mung
Joe, Why do you think Barry deleted the other thread along with your comments? keith s
Me_Think:
A simple search shows UD is obsessed with 500 coins. Apparently 500 coin flips are somehow metaphysically linked to the evolution of life.
It's because 500 bits is Dembski's "universal probability bound", aka "the UPB". He "justifies" it by calculating the maximum number of events that could possibly have happened in the history of the universe, taking the log base 2, and then rounding up to 500 bits. keith s
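The arithmetic keith describes is short. Dembski's commonly cited factors are roughly 10^80 elementary particles, 10^45 state transitions per second, and 10^25 seconds of cosmic history:

# upb.rb -- the 500-bit figure keith describes
events = 10**80 * 10**45 * 10**25   # = 10^150 possible elementary events
puts Math.log2(10) * 150            # ~498.3 bits, rounded up to 500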
Me Think- We talk about probabilities because you and yours don't have anything else for us to discuss. So we are providing examples of our methodology but you and your ilk don't seem to be able to grasp those. It's kind of difficult to proceed if the examples are troublesome so we keep trying. Joe
If complexity isn't linked to probability, what examples are there of complex objects, structures, or events that have a high probability of occurring? Joe
Me_Think, Now give us the correlation with Salvador Cordoza. Thanks Mung
A simple search shows UD is obsessed with 500 coins. Apparently 500 coin flips are somehow metaphysically linked to the evolution of life. Me_Think
Fun with probability - maybe you guys know this one: A family moves into a house across the street. You know they have two children, but you know nothing about their gender. One day you see a boy in the window. Assuming equal probabilities for boys and girls, what is the probability the other child is also a boy? Aleta
Ooops- Barry- 500 heads in a row is a simple sequence. And by Dembski's standards it is complex. You have to watch out for ALL of their little traps, Barry. Joe
Given abababababababababababababababab and 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7, we would say the first was caused by a deterministic process whereas the second was via a random process (or was designed to appear random). The probability of the two sequences is not the same. And R0bb, 500 heads in a row is a simple pattern with a small probability. Joe
Barry even deleted the thread after Joe had already posted a couple of comments to it. Poor Joe gets no respect from anybody. keith s
Let me be clearer. Flipping coins is an example of independent events, when the probability of one event isn't affected by the outcome of some other event. I'm sure there are real world examples that this might model, such as electoral polling of a random sample of people. However, what it doesn't model is situations where things develop through a series of steps, where what happens on step 2 is affected by what happened on step 1. Aleta
Barry:
To answer your question, I continue to believe that Dembski would not believe that a “simple sequence” is a “complex sequence.”
We both know that Dembski believes that a simple sequence can also be complex. For example:
For something to exhibit specified complexity, it must conform to a specification that signifies an event that has small probability (i.e., is probabilistically complex) but also is simple as far as patterns go (i.e., has low patterned complexity).
Your response will be that he's using complex and simple in different senses. And my response is that I was too, of course. R0bb
Give me an example, Mung. I'm willing to learn. I explained the kind of things it doesn't model - things which happen through a series of steps - and I can't think of any significant things it does model. Can you give me an example? Aleta
Aleta:
Earlier I pointed out that flipping 500 coins doesn’t model anything realistic about the world.
This is just so blatantly wrong. I leave it to you to figure out why. Which is to say that I have you in the category of "capable of self-correction." I hope I'm right about that. I'll flip you for it. Mung
It perhaps would have been better for Barry to explain his mistake, in part to help, as Eric said, "everyone to be on the same page" rather than just deleting the whole thread. Aleta
keiths, ignorance is not something to be scolded, it's something to be corrected. Self-imposed ignorance aka willful ignorance, on the other hand, is different. Are you accusing Barry of willful ignorance? I find it difficult to think of anything worse on this planet than a willfully ignorant person, with the possible exception of someone who revels in their willful ignorance. What do you think? Mung
R0bb:
Kolmogorov most definitely was interested in probability theory.
Good for you.
There are two common approaches to the quantitative definition of "information": combinatorial and probabilistic. The author briefly describes the major features of these approaches and introduces a new algorithmic approach that uses the theory of recursive functions. - Three Approaches to the Quantitative Definition of Information
Well there I went again. But this time it wasn't Orgel using the word "information" but Kolmogorov. Can't wait to see your snide remark about this one. Mung
Mung asks, "But what about the probabilities? How then do we calculate them?" Earlier I pointed out that flipping 500 coins doesn't model anything realistic about the world. The reason is that the real world goes from one moment to the next, and probabilities about what might happen in any one moment affect all further calculations about the next moment, and so on through very many moments. Therefore, one needs to use probability trees to calculate the probability of events that take place through a series of steps. That's the general answer. In practice, in real world situations, I imagine this is very difficult. But flipping coins is not a good model for real situations at all, because it doesn't take the passage of time into account. Aleta
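A small worked contrast shows what the tree buys you. Drawing two aces from a deck, say, is a two-step event where the second branch of the tree depends on the first (the card example is just an illustration):

# tree.rb -- a two-step event where step 2 depends on step 1
p_dependent   = (4.0 / 52) * (3.0 / 51)  # without replacement: ~0.00452
p_independent = (4.0 / 52) * (4.0 / 52)  # ignoring the dependence: ~0.00592
puts p_dependent
puts p_independent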
Mung, Will you be scolding Barry for his ignorance? keith s
That's amusing. So the answer to Mung's question in 46 "Who thought they were? [the same]" is "Barry does". Aleta
R0bb:
Mung finds the word “information” in Orgel’s work, and Joe finds the words “Kolmogorov” and “probability” in the same sentence. Waterloo!!!!
Well pardon me for answering your question. I guess I was wrong about you and need to move you over to the "not to be taken seriously" category. Mung
This is hilarious. Earlier today, Barry posted a mocking thread entitled Keiths: The Gift that Keeps On Giving to ID In it he tried to use Jeffrey Shallit to demonstrate that Kolmogorov complexity and "Dembski complexity" were the same thing. I was about to reply a few minutes ago, but the thread was gone. That's right. Barry 1) posted a mocking thread; 2) realized, after reading R0bb's comments above, that it was going to backfire horribly on him; and 3) tried to erase the evidence by deleting the entire thread, including two comments by Joe. Here are screenshots of the vanishing OP and the comments bar. Barry, do you realize how pitiful your behavior is, and how you appear to the onlookers? keith s
Silver Asiatic:
Evolution can get 500 heads, Dawkins proved it.
ok, so my program, once it encounters a tails, it starts the process all over again. But that's not evolution? So I need to flip each coin until it is a heads and then move on to the next coin and repeat, but never ever start over? So programming in a massive meteor strike is out? ok, I can change my code. But what about the probabilities? How then do we calculate them? And once we do, does that give us the probability that evolution is true? Mung
to Mung: is there a difference? If I say throwing HHH has a probability of 1/8, is that not a measurement of a probability? I remember back when some geometry textbooks for high school kids made them continually make a distinction between a line AB and the length of the line mAB, so that lines were congruent but the lengths of the lines were equal. Although the distinction is worth making and understanding, constantly making the distinction is pedantic, I think. So, to rephrase, is there a significant difference between probability and probability measure? Aleta
keiths: Kolmogorov complexity and improbability are not the same thing. Who thought they were? This is getting tedious. Mung
keiths: P(T|H) is a probability
keiths: P(T|H) is a probability measure
Mung
Joe, I was addressing the issue that Kolmogorov complexity is not the same as improbability. Specification has nothing to do with that distinction. Eric asked a question in 34, and the answer, as supplied by Robb, seems pretty clear. Do you think Kolmogorov complexity is a measure of improbability? Aleta
Kolmogorov most definitely was interested in probability theory. He may have been interested in gardening also, for all I know. R0bb
So R0bb is saying the paper is wrong and Kolmogorov wasn't interested in the foundation of probability theory, even though I can cite several other sources that say he was? Really? Joe
They only have the same probability given the chance hypothesis. However no one would expect chance alone to produce ababababababababab... rookies Joe
After reading a bit on Wikipedia, I can add to what Robb said.
For example, consider the following two strings of 32 lowercase letters and digits: abababababababababababababababab and 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 The first string has a short English-language description, namely "ab 16 times", which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 32 characters.
Therefore, the two strings have different Kolmogorov complexity and the same probability. So obviously the two ideas are not the same. Aleta
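A crude but concrete way to see the difference is to use a general-purpose compressor as a stand-in for Kolmogorov complexity (compressed length is only an upper-bound proxy for K, not the real thing). Using the two strings from the quoted example:

# kolmo.rb -- compressed size as a rough proxy for Kolmogorov complexity
require 'zlib'
a = 'ab' * 16                            # "abab...": a short rule generates it
b = '4c1j5b2p0cv4w1x8rx2y39umgw5q85s7'   # no obvious rule shorter than itself
puts Zlib::Deflate.deflate(a).bytesize   # noticeably smaller
puts Zlib::Deflate.deflate(b).bytesize   # close to the raw 32 bytes
# Under a uniform random generator, both strings have probability 36**-32.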
F/N: We already know coins are highly contingent. So, for coins to be in a position that has 500 H's, that implies imitation of a low contingency outcome. On a chance process, that would be maximally implausible, but on design, that would be readily understood as a targeted pattern. So, ironically, the seemingly simple outcome is the credible product of design, as it is a special case of a highly contingent system maximally implausible on blind chance but very reasonable on design. The designer would implement an algorithm that sets H, then increments and does so over and over, requiring a second order complex system to effect the algorithm physically, i.e. controlled coin flipping per design that must recognise H, T -- a non-trivial problem -- and then manipulate and place the coins in the string. It is only by overlooking that implied process that we can think that setting 500 coins in a row is a simple exercise. By contrast, a system that uses existing electro-chemical and physical forces to crystallise and extend a unit cell of crystal from a solution or the like has no requirement of an algorithm executing device or a manipulating device. One may make arguments about the underlying physics and its fine tuning relative to the requisites of life, and questions as to whether the cosmos is designed, but that is a different order of issue on different evidence, requiring a cosmos as a going concern and intelligent observers with appropriate technology and instruments, which already implies massive existence of design. KF kairosfocus
R0bb, I went back and read the comment you linked to. It is reproduced here:
RObb, do you really read what you write before you post it here? Listen to yourself. In your passion to defend mathgrrl’s indefensible position you have actually gone around the bend of linguistic sanity. You are now saying that Dembski believes simple and complex is the same. “his [i.e., Dembski's] examples of . . . complexity include simple . . . sequences” Do you really believe Dembski believes simple things are complex? Give me a break.
To answer your question, I continue to believe that Dembski would not believe that a “simple sequence” is a “complex sequence.” Barry Arrington
Mung finds the word "information" in Orgel's work, and Joe finds the words "Kolmogorov" and "probability" in the same sentence. Waterloo!!!! R0bb
Eric:
Just so everyone is on the same page, though, how would you describe the difference between, say, a string exhibiting Kolmogorov complexity and exhibiting improbability?
Randomly generate a string of 50 English characters. The following string is an improbable outcome (as is every other string of 50 English characters):
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
But it has low Kolmogorov complexity. The probability of a string depends on the process (or hypothesized process) that produced it. Kolmogorov complexity does not. R0bb
Hey keith, when "Orgel is talking about Kolmogorov complexity", was he referring to this Kolmogorov?:
As we have already mentioned, the two main originators of the theory of Kolmogorov complexity were Ray Solomonoff (born 1926) and Andrei Nikolaevich Kolmogorov (1903-- 1987). The motivations behind their work were completely different; Solomonoff was interested in inductive inference and artificial intelligence and Kolmogorov was interested in the foundations of probability theory and, also, of information theory. (bold added)
Yeah baby. You may want to rethink your attack Joe
keith @29:
My point is to refute the silly notion that Barry and KF keep repeating: that Dembski’s “specified complexity” is essentially the same thing as Orgel’s. It obviously isn’t. Kolmogorov complexity and improbability are not the same thing.
OK. I don't really have a dog in that fight, but alright. Just so everyone is on the same page, though, how would you describe the difference between, say, a string exhibiting Kolmogorov complexity and exhibiting improbability? Eric Anderson
Barry, also, are you ever going to let us in on the secret of determining, without a chance hypothesis, that the coin pattern is improbable? R0bb
Barry, BTW, has your understanding of "specified complexity" changed since you scoffed at the claim that Dembski's "examples of specified complexity include simple repetitive sequences, plain rectangular monoliths, and a narrowband signals epitomized by a pure sinusoidals[sic]"? R0bb
Barry, thanks for your response.
We can say that if he viewed the “500 coins; all heads” pattern at a very superficial level (it is just an ordered pattern), he might say it lacks complexity, in which case he would have been wrong. If he viewed the “500 coin; all heads” pattern in terms of the extreme level of contingency displayed in the pattern, he would have said the pattern is complex, and he would have been right.
So Orgel might assess the complexity of the coins by their degree of order or by their degree of contingency (which I assume you intend to be synonymous with improbability). If his usage of the term "complexity" is the same as Dembski's, as you have claimed it is, he would presumably do the latter, which you claim to be "right". To not use the term as Dembski does, and instead base the complexity assessment on the degree of order, is "wrong", you say. Setting aside the question of what you mean by right and wrong here, I have yet to see an actual defense of the claim that Orgel, previous to Dembski, equated "complexity" with "improbability". Dembski seems to want us to believe it, but I hope you'll understand that I don't accept claims on Dembski's say-so. And you haven't given us a single reason to believe it -- you've only claimed that it's obvious, even to a casual reader. Do you have any evidence that anyone previous to Dembski defined "complexity" to mean "improbability"? Is there anything in Orgel's writings that would give us any reason to believe that Orgel defined the term this way? I see nothing, although I do see him associating the term with disorder and, as keith has pointed out, Kolmogorov complexity. With regards to that last point, I don't know why the IDists on this site see Mung's quotes from Orgel as a good thing. Orgel makes it very clear that when he says "information", he's referring to algorithmic information, aka Kolmogorov complexity. Dembski, on the other hand, always uses the term "information" to refer to probability measures, a la Shannon. Far from helping your case, Mung's quotes underscore the fact that Orgel was not talking about probability, but rather complexity vs. simplicity in the ordinary non-Dembskian sense. R0bb
Evolution can get 500 heads, Dawkins proved it. Evolution flips the coin. If it's a head, you keep it, place it in a row, and flip another. If it's a tails, you flip the same one. It really doesn't take that long! And keep in mind, evolution had billions of years. Evolution just keeps the positive mutations and keeps flipping the coin if it's a negative. You guys really don't understand how evolution works. :-) A little Thanksgiving sarcasm for ya. Silver Asiatic
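Sarcasm aside, the ratchet described here is precisely why the process matters to the probability. If heads are kept and only tails are reflipped, 500 heads arrives almost immediately; reflipping all 500 coins on any failure effectively never finishes. A sketch of the ratchet:

# ratchet.rb -- keep every head, reflip only the tails
coins  = Array.new(500) { %w[H T].sample }
rounds = 1
until coins.all? { |c| c == 'H' }
  coins = coins.map { |c| c == 'H' ? c : %w[H T].sample }
  rounds += 1
end
puts "all heads after #{rounds} rounds"   # typically on the order of 10 rounds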
Eric Anderson:
Q: How did Orgel define “specified complexity”? Specifically, what was his understanding of “complex”? A: “One can see intuitively that many instructions are needed to specify a complex structure. On the other hand a simple repeating structure can be specified in rather few instructions. Complex but random structures, by definition, need hardly be specified at all.” This is very much along the lines of what Dembski is talking about.
Not at all. Orgel is talking about Kolmogorov complexity while Dembski is talking about improbability.
You seem to be hung up on the idea that some “simple” structures can be designed.
No. Not sure where you got that idea.
Regardless, I’m not sure what your larger point is.
My point is to refute the silly notion that Barry and KF keep repeating: that Dembski's "specified complexity" is essentially the same thing as Orgel's. It obviously isn't. Kolmogorov complexity and improbability are not the same thing. keith s
Also of interest is this quote from Neil Johnson, professor of physics who works in complexity theory and complex systems: ". . . even among scientists, there is no unique definition of complexity - and the scientific notion has traditionally been conveyed using particular examples . . ." (Courtesy Wikipedia, "Complexity") Eric Anderson
keith: Q: How did Orgel define "specified complexity"? Specifically, what was his understanding of "complex"? A: "One can see intuitively that many instructions are needed to specify a complex structure. On the other hand a simple repeating structure can be specified in rather few instructions. Complex but random structures, by definition, need hardly be specified at all." This is very much along the lines of what Dembski is talking about. You seem to be hung up on the idea that some "simple" structures can be designed. Sure they can. And as a result they might not get flagged as "designed" if we apply the concept of CSI and/or the explanatory filter on an initial examination of the structure. Dembski is not the first to talk about "specified complexity." And I don't think the one quote you have repeated from Orgel gives us any indication that he is talking about something meaningfully different than is Dembski. Regardless, I'm not sure what your larger point is. Do you just not like the name "complex specified information" or do you have a substantive issue with the idea of using complexity or probability as a tool to help recognize potential design? Eric Anderson
The point is that P(T|H) is a probability measure, not a complexity measure.
The two are one and the same, you willfully ignorant little person. Joe
Mung @ 6 -
Do you mean for example a system that tosses all 500 coins at once in repeated attempts to have them show up all heads?
No. Bob O'H
Eric, The point is that P(T|H) is a probability measure, not a complexity measure. "Complex specified information" and "specified complexity" are misnomers. Dembski's equation classifies anything that is specified and sufficiently improbable as exhibiting CSI/specified complexity, whether it is simple or complex. Again, CSI is a misnomer. keith s
keith @7:
A cylindrical crystal of pure silicon is not complex at all, yet it is highly improbable by purely natural processes. That’s why we have to grow them to make silicon wafers instead of just mining them somewhere. Dembski’s equation would therefore attribute CSI/specified complexity to such a crystal, despite its simplicity. “Complex specified information” is really “improbable (under natural processes) specified information”. “CSI” is a misnomer.
This is an interesting comment and worth thinking about. Dembski speaks of complexity being cashed out as probability in many cases -- in many circumstances they are speaking to the same thing, particularly with functional machines, like the living organisms Orgel referred to. But more to the point, you seem to be assuming that Dembski's criteria would spit out "designed" when a cylindrical crystal of pure silicon is examined. I'm not sure this is the case. If we ran across such a crystal on another planet would we be forced, per Dembski's criteria, to conclude that it was designed? Probably not. The same goes for any repetitive pattern that is being examined initially. Dembski's criteria would initially classify it as not designed. This would be an example of a false negative. This isn't to say that you aren't on to something with your broader point about complexity and improbability. It probably partly turns on how we define "complex". ----- On a related note, you have stated that Orgel's definition of "complex" is different than Dembski's. Do you have any further evidence for that point, other than the single quote from Orgel? Not that it is critical (they may be using the words with slightly different connotations, but that doesn't demonstrate that Dembski's use is incorrect), but I'm just curious as to the claim regarding Orgel's use. Eric Anderson
Lowered expectations. 25 heads in a row... Mung
meanwhile ... 50 consecutive heads has still not been reached. Mung
Aleta:
and to Mung: yes, absolutely, your program is extremely unlikely to show 500 heads in a row in your lifetime, or in the lifetime of the universe. And you know that.
Just trying to understand how this is known. Aleta:
But flipping 500 coins is not a good model for how things happen in the real world anyway. This is just an interesting discussion, to me, from a purely mathematical point of view.
Yup. Bob O'H can chime in now. Mung
to Me_Think: There are 2^500 possible results in throwing 500 coins, which is about 3.25 x 10^150. However, if you threw the coins that many times you would still have a certain probability of having no cases of 500 heads, a certain probability of 1 case, a certain probability of 2 cases, etc., all according to the binomial distribution. So I don't understand what you mean when you write,
The formula for the expected number of tosses required is 2*(2^N - 1), where N is the number of heads, so for 500 heads in a row you need about 6.5*10^150 tosses.
Why do you have twice the number I have, and how can you make a statement about getting 500 heads without mentioning a probability - it is not a certainty that you would get 500 heads in 6.5 x 10^150 throws. Can you explain more? And, to Bob O'H: You write,
More generally, any stochastic process on the number of heads with all heads and all tails as absorbing boundaries (i.e. once you're in that state you can't leave) will inevitably reach one of the absorbing states in finite time (if you have a finite number of coins, and if it's possible to get from any state to any other).
Could you explain more? What I think you might be saying is that after you flip the 500 coins, if a coin is a head it stays a head, and you flip the other coins again. In which case, eventually you would approach all heads as the limiting case. Do you mean this, or something else? And how does this relate to flipping 500 coins at once? And to Mung: yes, absolutely, your program is extremely unlikely to show 500 heads in a row in your lifetime, or in the lifetime of the universe. And you know that. But flipping 500 coins is not a good model for how things happen in the real world anyway. This is just an interesting discussion, to me, from a purely mathematical point of view. Aleta
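For what it's worth, Me_Think's formula has a standard reading: it is the expected number of sequential flips of a single fair coin before the first run of N consecutive heads appears, not a count of 500-coin trials. Writing E_n for that expectation: to get n heads in a row you first need n-1 in a row, then one more flip, which with probability 1/2 completes the run and with probability 1/2 resets you. So E_n = E_{n-1} + 1 + (1/2)E_n, i.e. E_n = 2E_{n-1} + 2, and with E_0 = 0 this gives E_N = 2^(N+1) - 2 = 2*(2^N - 1). For N = 500 that is about 6.5 x 10^150 flips - and since it is only an expectation, there is indeed no certainty attached, which answers the second half of Aleta's question.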
Barry, honestly, who cares about Orgel and 500 coins. I can't even get 50 coins to come up all heads! Mung
Me_Think, I hate giving up. I can throw more computers at the problem. How many more computers do I need to add? 10? 100? 1000?
2*(2^N - 1), where N is the number of heads, so for 500 heads in a row you need about 6.5*10^150 tosses.
Crap. That seems to be right around Dembski's UPB. But I thought Dembski was a nutcase and ID was for loons. Are you saying that if I could set every atom in the universe to solving this problem that it would still fail? Meanwhile, a string of 50 heads in a row is still not achieved. ID must be false. It hasn't shown that 50 heads cannot possibly be achieved. Right? Mung
Are you saying that I should terminate my program?
Yes. You should. You have not yet reached 10^15; you need to reach the 10^150 range before you see 500 heads.
Should I lower the expectation?
Definitely. Me_Think
Me_Think, Some people require empirical evidence. Simply calculating probabilities is not enough. But thank you. Are you saying that I should terminate my program? No chance in hell of a positive result in my lifetime? Granted, it's still running. Not even 50 heads in a row, much less 500. Should I lower the expectation? Mung
Barry @5, not necessarily. They may simply be unintended consequences. The weighting, for example, could be a simple artifact of the creation process. Indeed, maybe the machine that was making them was malfunctioning and acting contrary to its design. In addition, some things can result from a design process, but not necessarily be designed (or indicative of design) themselves -- like shavings falling to the floor from a sculptor's knife, or scrap material from a manufacturing process. At any rate, I was just making the point clear to everyone that we need to exclude necessity for purposes of the coin examples. Eric Anderson
Mung @ 11 I don't know why you are breaking your head over a simple problem. The formula for the expected number of tosses required is 2*(2^N - 1), where N is the number of heads, so for 500 heads in a row you need about 6.5*10^150 tosses. Me_Think
Set the number of required HEADS to 50 and the program is still running =p Maybe it's a flaw in my code. I should probably add a display of the average. But of course if the chance on the first toss is 50/50 = 1/2, then on the second it would be 1/2 x 1/2, and on the third 1/2 x 1/2 x 1/2, and this turns out to be an exponential scale ... gah ... I may never see the result! Perhaps this should be a lesson to me. I can write a program and wait for the result, or I can try to calculate the probability. Mung
got to love keiths!: simple - not complex complex - not simple complex - complicated complicated - complex Add keiths to the list of critics who haven't read Orgel.
Paley was right to emphasize the need for special explanations of the existence of objects with high information content, for they cannot be formed in nonevolutionary, inorganic processes. – p. 196
So when Orgel said simple, he meant it in the ordinary English sense of NOT COMPLEX. And when Orgel said complex, he meant it in the ordinary English sense of NOT SIMPLE. And the evidence keiths offers is... ? Mung
And in the case of 500 heads, there are processes that can lead to them very easily, e.g. the Mabinogion sheep.
And in the case of Mabinogion sheep we have artificial selection. Joe
And probability is still a complexity measure and keith's ignorance still means nothing. And if complex means: not easy to understand or explain, then that cylindrical crystal of pure silicon would be complex, duh. Nice job, chief - you shot yourself in the foot on the way to that own goal. Joe
Barry:
In his 1973 book The Origins of Life Leslie Orgel wrote: “Living organisms are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.” (189).
That's right. Orgel, unlike Dembski, is using 'complex' in the way that English speakers do:
com·plex (adjective): having parts that connect or go together in complicated ways : not easy to understand or explain : not simple [from merriam-webster.com]
By that definition, crystals lack complexity.
In my post On “Specified Complexity,” Orgel and Dembski I demonstrated that in this passage Orgel was getting at the exact same concept that Dembski calls “specified complexity.”
No, because unlike Orgel, Dembski doesn't use 'complex' in its ordinary English sense. I explained this in the other thread using the example of a cylindrical silicon crystal of the kind used to make integrated circuits:
Barry, By Dembski’s own equation, something exhibits CSI/specified complexity if P(T|H) is sufficiently low. P(T|H) is a probability, not a measure of complexity. A cylindrical crystal of pure silicon is not complex at all, yet it is highly improbable by purely natural processes. That’s why we have to grow them to make silicon wafers instead of just mining them somewhere. Dembski’s equation would therefore attribute CSI/specified complexity to such a crystal, despite its simplicity. “Complex specified information” is really “improbable (under natural processes) specified information”. “CSI” is a misnomer.
Barry:
In a complexity analysis, the issue is not whether the patterns are “highly ordered.” The issue is how the patterns came to be highly ordered. If a pattern came to be highly ordered as a result of natural processes (e.g., the lawlike processes that result in crystal formation), it is not complex.
You are using Dembski's definition, not Orgel's. By Dembski's definition, the cylindrical crystal of pure silicon is complex. By Orgel's definition, which is the ordinary English definition, the silicon crystal is simple, not complex. By Dembski's silly definition, something can be both simple and complex. keith s
Bob O'H:
More generally, any stochastic process on the number of heads with all heads and all tails as absorbing boundaries (i.e. once you’re in that state you can’t leave) will inevitably reach one of the absorbing states in finite time (if you have a finite number of heads, and if it’s possible to get from any state to any other).
Do you mean for example a system that tosses all 500 coins at once in repeated attempts to have them show up all heads? How often do you expect to see that in your lifetime?

# allh.rb
# Keep starting runs over; a tails aborts the attempt and begins again.
def tosser(coin, sequence_length)
  sequence_length.times do |i|
    return if coin.sample == 'T'                # tails: abandon this attempt
    puts "#{i+1}: HEADS of #{sequence_length}!" # progress so far
    exit if i + 1 == sequence_length            # a full run of heads: done
  end
end

coin = %w[H T]
begin
  tosser(coin, ARGV[0].to_i)
end while true

You can put in the number of coins to toss on the command line. I used 20 and it didn't take too long. Try 500 and let us know:

$ ruby allh.rb 500

Think I may modify this to permit 'coins' with more than one side :) Mung
Eric, even in your examples we can exclude chance and law. Both of your examples (rigged coin; stamping machine) implicate design. Barry Arrington
Barry, interesting post. Just one caveat, or perhaps clarification: Everyone needs to realize, or it needs to be made explicit, that (i) you are talking about fair coins (meaning they have a probability of falling heads 50% and tails 50%), and (ii) the example assumes no other law-like process was involved. Specifically, if I saw 500 heads tossed in a row, I might well conclude that there was something specific about the weighting of the coins that caused it. Or if I saw 500 heads lying in a row at the US mint, I might well conclude that it was not due to someone's particular design (though, yes, it could have been), but more likely was simply the outcome of how the machine stamped the coins. Perhaps not the best examples, but you get my point. When we see repetitive, simple order, it is most likely the result of natural laws, rather than design. What allows the coin example to work, is if we assume such natural laws were not in place, thus leaving just design v. chance. This nuance is part of the confusion that sometimes results from the coin-toss examples, which is why I think some of the examples (including Sal's) have not been as effective. Better than 500 heads in a row might be the first x number of prime numbers in binary or something less repetitive and less simple. Anyway, I just want to head (no pun intended) this off at the outset so that no-one jumps on the thread and gets off track on the possibility of necessity causing the 500 coins in a row. Eric Anderson
PS: Neat-o on the new feature, complete with count-down! kairosfocus
BA, the pattern of the individual mineral crystal is simple order, but the randomly scattered matrix of such crystals in granite is quite complex. Pardon, I have just a moment, today is even more of an adventure than I thought. KF kairosfocus
In a complexity analysis, the issue is not whether the patterns are “highly ordered.” The issue is how the patterns came to be highly ordered.
And in the case of 500 heads, there are processes that can lead to them very easily, e.g. the Mabinogion sheep. More generally, any stochastic process on the number of heads with all heads and all tails as absorbing boundaries (i.e. once you're in that state you can't leave) will inevitably reach one of the absorbing states in finite time (if you have a finite number of coins, and if it's possible to get from any state to any other). Bob O'H
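Bob's general claim can be sketched with a toy absorbing walk: repeatedly pick one coin at random and flip it, stopping only at all-heads or all-tails. With a handful of coins the walk is absorbed quickly; the catch is that "finite time" for 500 coins is astronomically long, since the walk is pulled back toward the middle. (The 10-coin size below is an illustrative choice; try raising it.)

# absorbing.rb -- random walk on the number of heads, absorbed at 0 or n
n     = 10
coins = Array.new(n) { %w[H T].sample }
steps = 0
until coins.all? { |c| c == 'H' } || coins.all? { |c| c == 'T' }
  i = rand(n)
  coins[i] = (coins[i] == 'H' ? 'T' : 'H')   # flip one randomly chosen coin
  steps += 1
end
puts "absorbed at all-#{coins.first} after #{steps} steps"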
