In his 1973 book The Origins of Life Leslie Orgel wrote: “Living organisms are distinguished by their specified complexity. Crystals such as granite fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.” (189).
In my post On “Specified Complexity,” Orgel and Dembski I demonstrated that in this passage Orgel was getting at the exact same concept that Dembski calls “specified complexity.” In a comment to that post “Robb” asks:
500 coins, all heads, and therefore a highly ordered pattern.
What would Orgel say — complex or not?
Orgel said that crystals, even though they display highly ordered patterns, lack complexity. Would he also say that the highly ordered pattern of “500 coins; all heads” lacks complexity?
In a complexity analysis, the issue is not whether the patterns are “highly ordered.” The issue is how the patterns came to be highly ordered. If a pattern came to be highly ordered as a result of natural processes (e.g., the lawlike processes that result in crystal formation), it is not complex. If a pattern came to be highly ordered in the very teeth of what we would expect from natural processes (we can be certain that natural chance/law processes did not create the 500 coin pattern), the pattern is complex.
Complexity turns on contingency. The pattern of a granite crystal is not contingent. Therefore, it is not complex. The “500 coins; all heads” pattern is highly contingent. Therefore, it is complex.
What would Orgel say? We cannot know what Orgel would say. We can say that if he viewed the “500 coins; all heads” pattern at a very superficial level (it is just an ordered pattern), he might say it lacks complexity, in which case he would have been wrong. If he viewed the “500 coins; all heads” pattern in terms of the extreme level of contingency displayed in the pattern, he would have said the pattern is complex, and he would have been right.
About one thing we can be absolutely certain. Orgel would have known without the slightest doubt that the “500 coins; all heads” pattern was far beyond the ability of chance/law forces, and he would therefore have made a design inference.
And in the case of 500 heads, there are processes that can lead to them very easily, e.g. the Mabinogion sheep. More generally, any stochastic process on the number of heads with all heads and all tails as absorbing boundaries (i.e. once you’re in that state you can’t leave) will inevitably reach one of the absorbing states in finite time (if you have a finite number of heads, and if it’s possible to get from any state to any other).
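Bob O’H’s absorbing-boundary point can be sketched with a toy simulation. Here the number of heads is modelled as a symmetric ±1 random walk (an illustrative assumption, not a claim about the Mabinogion sheep dynamics specifically), which stops when it hits all heads or all tails:

```ruby
# absorbing_walk.rb -- toy model of the absorbing-boundary point: a
# stochastic process on the number of heads, with 0 and n absorbing,
# reaches one of the boundaries in finite time. The symmetric +/-1 step
# is an assumption for illustration only.
def steps_to_absorption(n, start)
  k = start
  steps = 0
  until k == 0 || k == n
    k += [-1, 1].sample # one step of the walk
    steps += 1
  end
  [k, steps]
end

final, steps = steps_to_absorption(20, 10)
state = final == 20 ? 'all heads' : 'all tails'
puts "absorbed at #{state} after #{steps} steps"
```

For a symmetric walk with absorbing barriers at 0 and n, the expected absorption time from state i is i × (n − i) steps (100 here), so “inevitably in finite time” can mean quite quickly for small n.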
BA, the pattern of the individual mineral crystal is simple, but the randomly scattered matrix of crystals is quite complex. Pardon, I have just a moment, today is even more of an adventure than I thought. KF
PS: Neat-o on the new feature, complete with count-down!
Barry, interesting post. Just one caveat, or perhaps clarification:
Everyone needs to realize, or it needs to be made explicit, that (i) you are talking about fair coins (meaning they have a probability of falling heads 50% and tails 50%), and (ii) the example assumes no other law-like process was involved.
Specifically, if I saw 500 heads tossed in a row, I might well conclude that there was something specific about the weighting of the coins that caused it. Or if I saw 500 heads lying in a row at the US mint, I might well conclude that it was not due to someone’s particular design (though, yes, it could have been), but more likely was simply the outcome of how the machine stamped the coins.
Perhaps not the best examples, but you get my point. When we see repetitive, simple order, it is most likely the result of natural laws, rather than design. What allows the coin example to work, is if we assume such natural laws were not in place, thus leaving just design v. chance. This nuance is part of the confusion that sometimes results from the coin-toss examples, which is why I think some of the examples (including Sal’s) have not been as effective. Better than 500 heads in a row might be the first x number of prime numbers in binary or something less repetitive and less simple.
Anyway, I just want to head (no pun intended) this off at the outset so that no-one jumps on the thread and gets off track on the possibility of necessity causing the 500 coins in a row.
Eric, even in your examples we can exclude chance and law. Both of your examples (rigged coin; stamping machine) implicate design.
Bob O’H:
Do you mean for example a system that tosses all 500 coins at once in repeated attempts to have them show up all heads? How often do you expect to see that in your lifetime?
# allh.rb
# Toss up to sequence_length fair coins in order; the first tail aborts
# the attempt, and the script exits once a full run of heads is seen.
def tosser(coin, sequence_length)
  sequence_length.times do |i|
    return if coin.sample == 'T' # a tail ends this attempt
    puts "#{i + 1}: HEADS of #{sequence_length}!"
    exit if i + 1 == sequence_length # full run of heads achieved
  end
end

coin = %w[H T]
begin
  tosser(coin, ARGV[0].to_i)
end while true
You can put in the number of coins to toss on the command line. I used 20 and it didn’t take too long.
Try 500 and let us know:
$ ruby allh.rb 500
Think I may modify this to permit ‘coins’ with more than one side 🙂
Barry:
That’s right. Orgel, unlike Dembski, is using ‘complex’ in the way that English speakers do:
By that definition, crystals lack complexity.
No, because unlike Orgel, Dembski doesn’t use ‘complex’ in its ordinary English sense. I explained this in the other thread using the example of a cylindrical silicon crystal of the kind used to make integrated circuits:
Barry:
You are using Dembski’s definition, not Orgel’s. By Dembski’s definition, the cylindrical crystal of pure silicon is complex. By Orgel’s definition, which is the ordinary English definition, the silicon crystal is simple, not complex.
By Dembski’s silly definition, something can be both simple and complex.
And probability is still a complexity measure and keith’s ignorance still means nothing. And if complex means “not easy to understand or explain,” then that cylindrical crystal of pure silicon would be complex, duh.
Nice job, chief- you shot yourself in the foot on the way to that own goal
And in the case of Mabinogion sheep we have artificial selection.
got to love keiths!:
simple – not complex
complex – not simple
complex – complicated
complicated – complex
Add keiths to the list of critics who haven’t read Orgel.
So when Orgel said simple, he meant it in the ordinary English sense of NOT COMPLEX. And when Orgel said complex, he meant it in the ordinary English sense of NOT SIMPLE.
And the evidence keiths offers is… ?
Set the number of required HEADS to 50 and the program is still running =p
Maybe it’s a flaw in my code.
I should probably add a display of the average. But of course if the chance on the first toss is 50/50 = 1/2, then on the second it would be 1/2 x 1/2, and on the third 1/2 x 1/2 x 1/2, and this turns out to be an exponential scale … gah … I may never see the result!
Perhaps this should be a lesson to me. I can write a program and wait for the result, or I can try to calculate the probability.
Mung @ 11
I don’t know why you are breaking your head over a simple problem. The formula for the number of required tosses is
2*(2^N – 1)
where N is the number of heads, so for 500 heads in a row you need about 6.5 × 10^150 tosses.

Barry @5, not necessarily. They may simply be unintended consequences. The weighting, for example, could be a simple artifact of the creation process. Indeed, maybe the machine that was making them was malfunctioning and acting contrary to its design. In addition, some things can result from a design process, but not necessarily be designed (or indicative of design) themselves — like shavings falling to the floor from a sculptor’s knife, or scrap material from a manufacturing process.
At any rate, I was just making the point clear to everyone that we need to exclude necessity for purposes of the coin examples.
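Me_Think’s formula above, 2*(2^N − 1), is the standard expected waiting time, in individual flips, for a run of N heads with a fair coin. A quick simulation for a small N checks it; for N = 5 the formula gives 62:

```ruby
# run_length.rb -- estimate the expected number of fair-coin flips
# needed before seeing N consecutive heads, vs. the formula 2*(2**N - 1).
def flips_until_run(n)
  run = 0
  flips = 0
  until run == n
    flips += 1
    run = rand(2).zero? ? run + 1 : 0 # heads extends the run, tails resets it
  end
  flips
end

n = 5
trials = 20_000
avg = trials.times.sum { flips_until_run(n) } / trials.to_f
puts "simulated: #{avg.round(1)}  formula: #{2 * (2**n - 1)}"
```

For N = 500 the same formula gives roughly 6.5 × 10^150 flips, which is why no program will ever print a run of 500.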
Me_Think,
Some people require empirical evidence. Simply calculating probabilities is not enough. But thank you.
Are you saying that I should terminate my program? No chance in hell of a positive result in my lifetime?
Granted, it’s still running. Not even 50 heads in a row, much less 500. Should I lower the expectation?
Yes. You should. You have not yet reached 10^15; you need to reach the 10^150 range before you see 500 heads.
Definitely.
Me_Think,
I hate giving up. I can throw more computers at the problem. How many more computers do I need to add?
10? 100? 1000?
crap. that seems to be right around Dembski’s UPB. but I thought Dembski was a nutcase and ID was for loons.
Are you saying that if I could set every atom in the universe to solving this problem that it would still fail?
Meanwhile, a string of 50 heads in a row is still not achieved. ID must be false. It hasn’t shown that 50 heads cannot possibly be achieved. Right?
Barry, honestly, who cares about Orgel and 500 coins. I can’t even get 50 coins to come up all heads!
to Me_think: There are 2^500 possible results in throwing 500 coins, which is about 3.27 x 10^150. However, if you threw the coins that many times you would still have a certain probability of having no cases of 500 heads, a certain probability of 1 case, a certain probability of 2 cases, etc., all according to the binomial probability theorem.
So I don’t understand what you mean when you write,
Why do you have twice the number I have, and how can you make a statement about getting 500 heads without mentioning a probability – it certainly isn’t a certainty that you would get 500 heads in 6.5 x 10^150 throws. Can you explain more.
And, to Bob H:
You write.
More generally, any stochastic process on the number of heads with all heads and all tails as absorbing boundaries (i.e. once you’re in that state you can’t leave) will inevitably reach one of the absorbing states in finite time (if you have a finite number of heads, and if it’s possible to get from any state to any other).
Could you explain more? What I think you might be saying is that after you flip the 500 coins, if a coin is a head it stays a head, and you flip the other coins again. In which case, eventually you would approach all heads as the limiting case. Do you mean this, or something else? And how does this relate to flipping 500 coins at once?
and to Mung: yes, absolutely, your program is extremely unlikely to show 500 heads in a row in your lifetime, or in the lifetime of the universe. And you know that.
But flipping 500 coins is not a good model for how things happen in the real world anyway. This is just an interesting discussion, to me, from a purely mathematical point of view.
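Aleta’s reading of the process (heads are kept, only tails are reflipped) does converge to all heads, and quickly. A minimal sketch:

```ruby
# latch.rb -- keep every coin showing heads; reflip only the tails.
# Counts the total individual flips until all n coins show heads.
def flips_to_all_heads(n)
  tails = n
  flips = 0
  while tails > 0
    flips += tails                             # reflip every remaining tail
    tails = tails.times.count { rand(2) == 1 } # about half stay tails
  end
  flips
end

puts flips_to_all_heads(500) # typically on the order of 1,000 flips
```

The expected total is about 2n flips, because the number of tails roughly halves each round. A “latching” process like this is a completely different probabilistic animal from tossing all 500 coins at once.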
Aleta:
Just trying to understand how this is known.
Aleta:
Yup. Bob O’H can chime in now.
meanwhile …
50 consecutive heads has still not been reached.
Lowered expectations.
25 heads in a row…
keith @7:
This is an interesting comment and worth thinking about.
Dembski speaks of complexity being cashed out as probability in many cases — in many circumstances they are speaking to the same thing, particularly with functional machines, like the living organisms Orgel referred to.
But more to the point, you seem to be assuming that Dembski’s criteria would spit out “designed” when a cylindrical crystal of pure silicon is examined. I’m not sure this is the case. If we ran across such a crystal on another planet would we be forced, per Dembski’s criteria, to conclude that it was designed? Probably not.
The same goes for any repetitive pattern that is being examined initially. Dembski’s criteria would initially classify it as not designed. This would be an example of a false negative.
This isn’t to say that you aren’t on to something with your broader point about complexity and improbability. It probably partly turns on how we define “complex”.
—–
On a related note, you have stated that Orgel’s definition of “complex” is different than Dembski’s. Do you have any further evidence for that point, other than the single quote from Orgel? Not that it is critical (they may be using the words with slightly different connotations, but that doesn’t demonstrate that Dembski’s use is incorrect), but I’m just curious as to the claim regarding Orgel’s use.
Eric,
The point is that P(T|H) is a probability measure, not a complexity measure. “Complex specified information” and “specified complexity” are misnomers.
Dembski’s equation classifies anything that is specified and sufficiently improbable as exhibiting CSI/specified complexity, whether it is simple or complex.
Again, CSI is a misnomer.
Mung @ 6 –
No.
The two are one and the same, you willfully ignorant little person.
keith:
Q: How did Orgel define “specified complexity”? Specifically, what was his understanding of “complex”?
A: “One can see intuitively that many instructions are needed to specify a complex structure. On the other hand a simple repeating structure can be specified in rather few instructions. Complex but random structures, by definition, need hardly be specified at all.”
This is very much along the lines of what Dembski is talking about.
You seem to be hung up on the idea that some “simple” structures can be designed. Sure they can. And as a result they might not get flagged as “designed” if we apply the concept of CSI and/or the explanatory filter on an initial examination of the structure.
Dembski is not the first to talk about “specified complexity.” And I don’t think the one quote you have repeated from Orgel gives us any indication that he is talking about something meaningfully different than is Dembski.
Regardless, I’m not sure what your larger point is. Do you just not like the name “complex specified information” or do you have a substantive issue with the idea of using complexity or probability as a tool to help recognize potential design?
Also of interest is this quote from Neil Johnson, professor of physics who works in complexity theory and complex systems:
“. . . even among scientists, there is no unique definition of complexity – and the scientific notion has traditionally been conveyed using particular examples . . .”
(Courtesy Wikipedia, “Complexity”)
Eric Anderson:
Not at all. Orgel is talking about Kolmogorov complexity while Dembski is talking about improbability.
No. Not sure where you got that idea.
My point is to refute the silly notion that Barry and KF keep repeating: that Dembski’s “specified complexity” is essentially the same thing as Orgel’s.
It obviously isn’t. Kolmogorov complexity and improbability are not the same thing.
Evolution can get 500 heads, Dawkins proved it.
Evolution flips the coin. If it’s a head, you keep it, place it in a row, and flip another. If it’s a tail, you flip the same one.
It really doesn’t take that long! And keep in mind, evolution had billions of years.
Evolution just keeps the positive mutations and keeps flipping the coin if it’s a negative.
You guys really don’t understand how evolution works.
🙂 A little Thanksgiving sarcasm for ya.
Barry, thanks for your response.
So Orgel might assess the complexity of the coins by their degree of order or by their degree of contingency (which I assume you intend to be synonymous with improbability). If his usage of the term “complexity” is the same as Dembski’s, as you have claimed it is, he would presumably do the latter, which you claim to be “right”. To not use the term as Dembski does, and instead base the complexity assessment on the degree of order, is “wrong”, you say.
Setting aside the question of what you mean by right and wrong here, I have yet to see an actual defense of the claim that Orgel, previous to Dembski, equated “complexity” with “improbability”. Dembski seems to want us to believe it, but I hope you’ll understand that I don’t accept claims on Dembski’s say-so. And you haven’t given us a single reason to believe it — you’ve only claimed that it’s obvious, even to a casual reader.
Do you have any evidence that anyone previous to Dembski defined “complexity” to mean “improbability”? Is there anything in Orgel’s writings that would give us any reason to believe that Orgel defined the term this way? I see nothing, although I do see him associating the term with disorder and, as keith has pointed out, Kolmogorov complexity.
With regards to that last point, I don’t know why the IDists on this site see Mung’s quotes from Orgel as a good thing. Orgel makes it very clear that when he says “information”, he’s referring to algorithmic information, aka Kolmogorov complexity. Dembski, on the other hand, always uses the term “information” to refer to probability measures, a la Shannon. Far from helping your case, Mung’s quotes underscore the fact that Orgel was not talking about probability, but rather complexity vs. simplicity in the ordinary non-Dembskian sense.
Barry, BTW, has your understanding of “specified complexity” changed since you scoffed at the claim that that Dembski’s “examples of specified complexity include simple repetitive sequences, plain rectangular monoliths, and a narrowband signals epitomized by a pure sinusoidals[sic]”?
Barry, also, are you ever going let us in on the secret of determining, without a chance hypothesis, that the coin pattern is improbable?
keith @29:
OK. I don’t really have a dog in that fight, but alright.
Just so everyone is on the same page, though, how would you describe the difference between, say, a string exhibiting Kolmogorov complexity and exhibiting improbability?
Hey keith, when “Orgel is talking about Kolmogorov complexity”, was he referring to this Kolmogorov?:
Yeah baby. You may want to rethink your attack
Eric:
Randomly generate a string of 50 English characters. The following string is an improbable outcome (as is every other string of 50 English characters):
But it has low Kolmogorov complexity.
The probability of a string depends on the process (or hypothesized process) that produced it. Kolmogorov complexity does not.
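The distinction keiths draws can be illustrated with compressed size as a crude, computable stand-in for Kolmogorov complexity (which is itself uncomputable). Both 50-character strings below are equally improbable under a uniform random-typing model, but compress very differently; the second string is an arbitrary example of my own, not one from the thread:

```ruby
require 'zlib'

repetitive = 'a' * 50
random_ish = 'k3qv9tz1mwp870sdh4fj6x2rcbuyng5eaol3i8t0zq7m4vkw1d'

[repetitive, random_ish].each do |s|
  # Deflate output size is only a rough proxy for descriptive complexity.
  puts "#{s[0, 12]}...  compressed: #{Zlib::Deflate.deflate(s).bytesize} bytes"
end
```

The repetitive string shrinks to a handful of bytes; the random-looking one does not. Their probability under the uniform model is identical: (1/26)^50 each.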
Mung finds the word “information” in Orgel’s work, and Joe finds the words “Kolmogorov” and “probability” in the same sentence. Waterloo!!!!
RObb,
I went back and read the comment you linked to. It is reproduced here:
To answer your question, I continue to believe that Dembski would not believe that a “simple sequence” is a “complex sequence.”
F/N: We already know coins are highly contingent. So, for coins to be in a position that has 500 H’s, that implies imitation of a low contingency outcome. On a chance process, that would be maximally implausible, but on design, that would be readily understood as a targeted pattern. So, ironically the seemingly simple outcome is the credible product of design as it is a special case of a highly contingent system maximally implausible on blind chance but very reasonable on design. The designer would implement an algorithm that sets H then increments and does so over and over, requiring a second order complex system to effect the algorithm physically, i.e. controlled coin flipping per design that must recognise H, T — a non-trivial problem — and then manipulate and place the coins in the string. It is only by overlooking that implied process that we can think of setting 500 coins in a row as a simple exercise. By contrast, a system that uses existing electro-chemical and physical forces to crystallise and extend a unit cell of crystal from a solution or the like, has no requirement of an algorithm executing device or a manipulating device. One may make arguments about the underlying physics and its fine tuning relative to the requisites of life and questions as to whether the cosmos is designed, but that is a different order of issue on different evidence requiring a cosmos as a going concern and intelligent observers with appropriate technology and instruments, which already implies massive existence of design. KF
After reading a bit on Wikipedia, I can add to what Robb said.
Therefore, the two strings have different Kolmogorov complexity and the same probability. So obviously the two ideas are not the same.
They only have the same probability given the chance hypothesis. However, no one would expect chance alone to produce ababababababababab…
rookies
So R0bb is saying the paper is wrong and Kolmogorov wasn’t interested in the foundation of probability theory, even though I can cite several other sources that say he was? Really?
Kolmogorov most definitely was interested in probability theory. He may have been interested in gardening also, for all I know.
Joe, I was addressing the issue that Kolmogorov complexity is not the same as improbability. Specification has nothing to do with that distinction. Eric asked a question in 34, and the answer, as supplied by Robb, seems pretty clear.
Do you think Kolmogorov complexity is a measure of improbability?
keiths: P(T|H) is a probability
keiths: P(T|H) is a probability measure
keiths: Kolmogorov complexity and improbability are not the same thing.
Who thought they were?
This is getting tedious.
to Mung: is there a difference? If I say throwing HHH has a probability of 1/8, is that not a measurement of a probability?
I remember back when some geometry textbooks for high school kids made them continually make a distinction between a line AB and the length of the line mAB, so that lines were congruent but the lengths of the lines were equal. Although the distinction is worth making and understanding, constantly making the distinction is pedantic, I think.
So, to rephrase, is there a significant difference between probability and probability measure?
Silver Asiatic:
ok, so my program, once it encounters a tails, it starts the process all over again. But that’s not evolution?
So I need to flip each coin until it is a heads and then move on to the next coin and repeat, but never ever start over? So programming in a massive meteor strike is out?
ok, I can change my code. But what about the probabilities? How then do we calculate them? And once we do, does that give us the probability that evolution is true?
This is hilarious. Earlier today, Barry posted a mocking thread entitled
Keiths: The Gift that Keeps On Giving to ID
In it he tried to use Jeffrey Shallit to demonstrate that Kolmogorov complexity and “Dembski complexity” were the same thing. I was about to reply a few minutes ago, but the thread was gone.
That’s right.
Barry
1) posted a mocking thread;
2) realized, after reading R0bb’s comments above, that it was going to backfire horribly on him; and
3) tried to erase the evidence by deleting the entire thread, including two comments by Joe.
Here are screenshots of the vanishing OP and the comments bar.
Barry, do you realize how pitiful your behavior is, and how you appear to the onlookers?
R0bb:
Well pardon me for answering your question. I guess I was wrong about you and need to move you over to the “not to be taken seriously” category.
That’s amusing. So the answer to Mung’s question in 46 “Who thought they were? [the same]” is “Barry does”.
Mung,
Will you be scolding Barry for his ignorance?
Mung asks, “But what about the probabilities? How then do we calculate them?”
Earlier I pointed out that flipping 500 coins doesn’t model anything realistic about the world. The reason is that the real world goes from one moment to the next, and probabilities about what might happen in any one moment affect all further calculations about the next moment, and so on through very many moments. Therefore, one needs to use probability trees to calculate the probability of events that take place through a series of steps.
That’s the general answer. In practice, in real world situations, I imagine this is very difficult.
But flipping coins is not a good model for real situations at all, because it doesn’t take the passage of time into account.
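Aleta’s point about step-dependent probabilities can be shown with a two-step tree; the numbers here are hypothetical, chosen only for illustration:

```ruby
# A two-step probability tree: the step-2 probability depends on step 1.
# All numbers are made up for illustration.
p_a1          = 0.3 # P(event on step 1)
p_a2_given_a1 = 0.6 # P(event on step 2 | it happened on step 1)
p_a2_given_n1 = 0.2 # P(event on step 2 | it did not happen on step 1)

# Total probability of the step-2 event: sum over both step-1 branches.
p_a2 = p_a1 * p_a2_given_a1 + (1 - p_a1) * p_a2_given_n1
puts p_a2.round(2) # 0.32
```

Independent coin flips are the special case where the two conditional probabilities are equal, which is exactly why they fail to model step-dependent processes.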
R0bb:
Good for you.
Well there I went again. But this time it wasn’t Orgel using the word “information” but Kolmogorov.
Can’t wait to see your snide remark about this one.
keiths, ignorance is not something to be scolded, it’s something to be corrected.
Self-imposed ignorance aka willful ignorance, on the other hand, is different. Are you accusing Barry of willful ignorance?
I find it difficult to think of anything worse on this planet than a willfully ignorant person, with the possible exception of someone who revels in their willful ignorance.
What do you think?
It perhaps would have been better for Barry to explain his mistake, in part to help, as Eric said, “everyone to be on the same page” rather than just deleting the whole thread.
Aleta:
This is just so blatantly wrong. I leave it to you to figure out why. Which is to say that I have you in the category of “capable of self-correction.” I hope I’m right about that.
I’ll flip you for it.
Give me an example, Mung. I’m willing to learn. I explained the kind of things it doesn’t model – things which happen through a series of steps, and I can’t think of any significant things it does model. Can you give me an example?
Barry:
We both know that Dembski believes that a simple sequence can also be complex. For example:
Your response will be that he’s using complex and simple in different senses. And my response is that I was too, of course.
Let me be clearer. Flipping coins is an example of independent events, when the probability of one event isn’t affected by the outcome of some other event. I’m sure there are real world examples that this might model, such as electoral polling of a random sample of people. However, what it doesn’t model is situations where things develop through a series of steps where what happens on step 2 is affected by what happened on step 1.
Barry even deleted the thread after Joe had already posted a couple of comments to it.
Poor Joe gets no respect from anybody.
Given abababababababababababababababab and 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 we would say the first was caused by a deterministic process whereas the second was via a random process (or was designed to appear random).
The probability of the two sequences is not the same.
And R0bb, 500 heads in a row is a simple pattern with a small probability.
Ooops- Barry- 500 heads in a row is a simple sequence. And by Dembski’s standards it is complex.
You have to watch out for ALL of their little traps, Barry.
Fun with probability – maybe you guys know this one: A family moves into a house across the street. You know they have two children, but you know nothing about their gender. One day you see a boy in the window. Assuming equal probabilities for boys and girls, what is the probability the other child is also a boy?
A simple search shows UD is obsessed with 500 coins. Apparently 500 coin flips are somehow metaphysically linked to evolution of life.
Me_Think,
Now give us the correlation with Salvador Cordoza.
Thanks
If complexity isn’t linked to probability what examples are there of complex objects, structures or events that have a high probability of occurring?
Me Think- We talk about probabilities because you and yours don’t have anything else for us to discuss. So we are providing examples of our methodology but you and your ilk don’t seem to be able to grasp those. It’s kind of difficult to proceed if the examples are troublesome so we keep trying.
Me_Think:
It’s because 500 bits is Dembski’s “universal probability bound”, aka “the UPB”.
He “justifies” it by calculating the maximum number of events that could possibly have happened in the history of the universe, taking the log base 2, and then rounding up to 500 bits.
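The arithmetic keiths describes is easy to check; 10^150 is the event-count figure usually cited for the UPB:

```ruby
# Express 10^150 possible events in bits: log2(10**150) = 150 * log2(10).
bits = 150 * Math.log2(10)
puts bits.round(1) # 498.3, which gets rounded up to 500 bits
```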
Joe,
Why do you think Barry deleted the other thread along with your comments?
keiths could perhaps be taken serious if he asserts that there is no maximum number of events that could possibly have happened in the history of the universe.
He realized that it (somehow) violated the ONH of opening posts containing “keith”.
Meanwhile, in the realm of what is actually possible, still no 40 heads in a row.
R0bb @36:
There is a serious problem with the “everything-is-just-as-improbable” line of argumentation when we are talking about ascertaining the origin of something.
Yes, but that is assuming the string is generated by a random generator. However, the way in which an artifact was generated when we are examining it to determine its origin is precisely the question at issue. Saying that every string of that length is just as improbable as any other, in the context of design detection, is to assume as a premise the very conclusion you are trying to reach.
We cannot say, when we see a string of characters (or any other artifact) that exhibits a specification or particular pattern, that “Well, every other outcome is just as improbable, so nothing special to see here.” The improbability, as you point out, is based on the process that produced it. And the process that produced it is precisely the question at issue.
When we come across a string like:
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
or some physical equivalent, like a crystal structure or a repeating pulse from a pulsar, we most definitely do not conclude it was produced by some random process that just happened to produce all a’s this time around, because, hey, every sequence is just as improbable as the other.
R0bb @59:
Quoting Dembski:
Yes, unfortunately this is one of the more misunderstood statements by Dembski. I mean among ID proponents. Nearly everything Dembski said is misunderstood by his detractors. 🙂
His reference to “simple” here needs to be properly understood. He is simply saying that there is some generalized pattern, as opposed to a pure random distribution. Obviously a Shakespearean sonnet is quite complex in terms of its probability, as well as having a specification. Yet it is less complex (more “simple”) than a pure random distribution of English characters, because it follows certain rules of spelling, grammar, punctuation, as well as higher order patterns of word phrases and perhaps even ideas conveyed.
Thus, while not an absolute truism, it is often the case that a designed object will be more “simple” than a pure random distribution. That is all Dembski is referring to.
But this comparative “simplicity” versus a random draw is very different from the kind of simplicity that arises through necessity: repeating patterns with little complexity.
—–
What this means in practice, is that at one end of the spectrum we have a repetitive, non-complex pattern. At the other far end of the spectrum we have a pure random distribution (as random as such a thing can be).
Designed objects can be anywhere along the spectrum, because an agent can purposely produce a simple repetitive pattern or something essentially indistinguishable from a random draw. However, in most cases, designed things will lie somewhere in the middle of the spectrum.
The design filter (or CSI if you prefer) will not pick up a designed object at the first end of the spectrum because it is not complex enough. It will not pick up something designed to look like a random draw at the other end of the spectrum because it lacks a recognizable specification. In both such cases, the design filter will return a false negative. However, in the sweet spot (which is actually quite wide and covers much of the spectrum) it will properly flag designed objects, because they have a recognizable specification plus adequate complexity.
Aleta:
I was waiting to see if any of the IDers would tackle this, but since they haven’t, I will.
The probability that the other child is a boy is 1/3.
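The 1/3 answer assumes that seeing a boy tells you only that at least one of the two children is a boy; under the alternative reading, where a specific random child was observed, the answer would be 1/2. A Monte Carlo check of the first reading:

```ruby
# Monte Carlo check: among two-child families with at least one boy,
# what fraction have two boys? (Equal probabilities for boys and girls.)
trials = 100_000
both = with_boy = 0
trials.times do
  kids = Array.new(2) { [:boy, :girl].sample }
  next unless kids.include?(:boy) # condition on at least one boy
  with_boy += 1
  both += 1 if kids.count(:boy) == 2
end
puts (both.to_f / with_boy).round(2) # about 0.33
```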
Aleta @ 60 –
Coin flipping models do model situations with time too, but where the probability of heads depends on the previous set of coin flips. This is how the Wright-Fisher model in population genetics works, as well as a lot of stochastic process models. The first comment on this thread is trying to engage with Barry on this, but he keeps on ignoring it.
Incidentally, a lot of the early work developing the maths behind stochastic processes was done by Kolmogorov.
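A minimal Wright-Fisher sketch (a toy version written for this thread, not code from a genetics library): each generation, the number of copies among n is drawn binomially from the current frequency, and 0 and n are absorbing, exactly the structure Bob O’H describes:

```ruby
# wright_fisher.rb -- toy Wright-Fisher drift: k copies out of n,
# resampled binomially each generation; fixation (k = n) or loss (k = 0)
# is absorbing and is reached in finite time.
def generations_to_fixation(n, k)
  gens = 0
  until k == 0 || k == n
    p = k.to_f / n
    k = n.times.count { rand < p } # one binomial(n, p) draw
    gens += 1
  end
  [k, gens]
end

final, gens = generations_to_fixation(100, 50)
puts "absorbed at k = #{final} after #{gens} generations"
```

Unlike independent coin flips, each generation’s probabilities here depend on the previous generation’s outcome, which was the point of the first comment on the thread.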
Mung
So ?
?????
keiths #29, to Eric:
Barry disagreed and even posted a mocking OP to that effect which he later surreptitiously deleted.
From the deleted OP:
Once he realized his error, Barry deleted the thread to hide the evidence.
That’s funny enough, but here’s another good one: Dembski himself stresses the distinction between Kolmogorov complexity and improbability:
I look forward to Barry’s explanation of how Dembski is an idiot, and how we should all trust Barry instead when he tells us that Kolmogorov complexity and improbability are the same thing.
keith s @ 69
OK, that makes sense
I just discovered something even funnier: Jeffrey Shallit himself — the very authority that Barry appeals to — confirms that Barry got it completely wrong:
Barry Arrington: A Walking Dunning-Kruger Effect:
Excellent work, Barry.
You’ve shown all of us that:
1. You have strong opinions about things you know nothing about.
2. You’ve attempted to mock someone who understands this stuff far better than you do.
3. The very authority you appealed to confirms that you got it completely wrong, as do Robb and I and Dembski himself, through his book.
4. You tried to erase the evidence by deleting the entire thread.
You look pretty ridiculous right now. Is there anything else you’d like to do to embarrass yourself in front of your audience?
Keith,
Just to make sure, this Jeffrey Shallit you keep talking about, is that the same idiot who claims that a Shakespearean sonnet is “more random” than keyboard pounding?
Re: Fun with probability at 64. I knew Keith would know. I figured, however, that the question wouldn’t draw much interest. This one won’t either, but I’ll offer it anyway – maybe someone here will not have seen it and will find it a fun problem to think about.
Three players, A, B, and C, are placed at the vertices of an equilateral triangle, armed with “guns”. They are to take turns shooting at each other, one shot per turn. If a player shoots at another player and hits him, the second player is out of the game (i.e., “dead”). On his turn, a player may shoot at any surviving player, or pass and not shoot at anyone. The contest continues until one player wins by being the only survivor.
A has a 1/3 chance of hitting on any shot (33 1/3%), B has a 1/2 chance of hitting on any shot (50%), and C always hits (100%).
A gets to shoot first. If B is still alive, he gets to shoot second. If C is still alive then he gets to shoot next. The rotation continues between the surviving players until only one person is left.
Some Assumptions
We assume that each player knows the accuracy level of each of the other players (e.g., the other players know that C is a sure shot, A knows that B is a 50% shooter, and so on).
We assume that each player will adopt the strategy which maximizes his own chance of survival, and we assume that each player knows that the other players will act so as to maximize their own survival.
The questions are:
a) given that everyone plays to maximize their own chances of survival, who has the best chance of winning?
b) what are the best strategies for each player?
c) what are the exact odds of each person surviving if everyone follows their best strategy?
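For anyone who wants to check their answer afterward, here is a Monte Carlo sketch of one frequently analyzed strategy (my choice for illustration, not a proof of optimality): A deliberately passes on his first shot, and thereafter everyone shoots at the strongest surviving opponent. Under this particular strategy the survival odds work out exactly to A = 5/12, B = 1/4, C = 1/3.

```python
import random

def duel(p_first: float, p_second: float, rng: random.Random) -> bool:
    """Two players alternate shots; return True if the first shooter survives."""
    while True:
        if rng.random() < p_first:
            return True
        if rng.random() < p_second:
            return False

def truel(rng: random.Random) -> str:
    """One truel: A (1/3) passes first; B (1/2) shoots at C (the sure shot)."""
    if rng.random() < 0.5:                            # B kills C
        return "A" if duel(1/3, 1/2, rng) else "B"    # A vs B, A shoots first
    # B missed, so C shoots B and never misses.
    return "A" if duel(1/3, 1.0, rng) else "C"        # A vs C, A shoots first

rng = random.Random(42)
trials = 100_000
wins = {"A": 0, "B": 0, "C": 0}
for _ in range(trials):
    wins[truel(rng)] += 1
# Under this strategy: A wins ~5/12, B ~1/4, C ~1/3.
```

Note the irony the puzzle is famous for: the worst shooter has the best chance of surviving.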
If there aren’t any cases in which something complex also has a high probability of occurring, then it is clear that Kolmogorov complexity and probability go hand in hand.
Earth to Aleta- Instead of playing games why don’t you at least try to support your position? Or is your position so pathetic that it cannot be supported?
And keith blows it again- he quotes Dembski:
All that means is sometimes one needs more information to make an inference wrt randomness- one also needs the context.
Bob writes,
Yes, that’s why I asked the question I did back at 19: if we flip coins, and then do something else that depends on the first outcomes, we now have a step-by-step situation that is different from just throwing all the coins at once and looking at just that result.
It’s like the game of Yahtzee: if I throw five dice, the probability of all five being the same is 1 in 6^4 = 1 in 1296. However, if I can leave some behind and re-throw the remainder, and then do that once more, the probability of getting all five the same is much greater – about 1 in 21, according to several places on the internet.
This goes to the heart of a statement I have made in this thread: that merely throwing 500 coins is not a good model for things that happen in the real world. Mung says I am blatantly wrong, and I have asked him to give me an example so I can understand why he thinks that.
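The roughly 1-in-21 figure is easy to sanity-check with a short simulation (a sketch; the greedy "keep the most common face" strategy is my assumption, not taken from any cited source):

```python
import random
from collections import Counter

def yahtzee_in_three(rng: random.Random) -> bool:
    """Roll 5 dice; twice more, keep the most common face and re-roll the rest."""
    dice = [rng.randint(1, 6) for _ in range(5)]
    for _ in range(2):
        face, count = Counter(dice).most_common(1)[0]
        if count == 5:
            return True
        dice = [face] * count + [rng.randint(1, 6) for _ in range(5 - count)]
    return len(set(dice)) == 1

rng = random.Random(0)
trials = 200_000
hits = sum(yahtzee_in_three(rng) for _ in range(trials))
# Expect roughly 4.6% of trials to succeed, i.e. about 1 in 21,
# versus 1/1296 for a single throw of all five dice.
```

The step-by-step dependence on earlier outcomes is exactly what the one-throw 500-coins picture leaves out.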
Aleta, unguided evolution cannot be modelled, so how can we have a good model for it? Probabilities are all we have wrt unguided evolution, yet evos cannot provide those probabilities and they want to blame ID.
Why don’t you find that strange?
Joe writes,
My position is that computing probabilities is more complicated than just simple one-step events composed of a multitude of independent events, such as throwing 500 coins. In particular, models that don’t take into account multiple steps in which each step is dependent on what happened before are not likely to be good models of what happens in the real world.
The 500 coins example is commonly used to illustrate a situation in probability theory. I am offering some more complicated examples from probability theory in order to illustrate some complexities that the 500 coins example doesn’t cover.
My examples and comments are supporting my position, I believe.
Hi Joe. I’m not discussing evolution. I’m discussing probability. I’m interested in the way the world unfolds in general, and in how we can use math to model various aspects of the world, but I’m not very interested in the evolution debate that goes on here.
OK Aleta, good luck with that
BTW your example of 2 children is the same as the “Monty Hall” problem.
To Joe: not exactly the same as the Monty Hall problem, but a similar concept. A key difference between the two problems is that in the Monty Hall problem, Monty knows what is behind each door and chooses a door that he knows does not hide the prize. In the two-children problem, the child that shows up in the window is a random choice between the two children. So the two problems are different in that regard.
Umm, the Monty Hall problem pertains to the contestant(s), not Monty.
Yes, but Monty is the guy who opens the door before presenting the contestant with the offer of switching his original choice or not.
So what? The contestant is the one getting the boost in odds if he/she chooses to switch. The Monty Hall scenario pertains to the contestants and your scenario pertains to the outside observer.
I’m not sure what we are arguing about. In both cases the observer, who is the contestant in the Monty Hall problem, has some beginning knowledge, which includes some beginning probabilities. Then the observer learns something new that changes the probabilities with respect to the original situation. That is what is similar about the two problems, although other aspects of the problems are different.
I am not arguing; I was just making an observation that the two scenarios are pretty much the same wrt the contestants, i.e., the people trying to determine the probability.
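Since the Monty Hall problem keeps coming up, a quick simulation (a sketch; the door-numbering convention is mine) shows why the contestant's odds improve from 1/3 to 2/3 by switching:

```python
import random

def monty_trial(switch: bool, rng: random.Random) -> bool:
    """One round: Monty opens a goat door he knows about; contestant may switch."""
    prize = rng.randrange(3)
    pick = rng.randrange(3)
    # Monty opens a door that is neither the contestant's pick nor the prize.
    opened = rng.choice([d for d in range(3) if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

rng = random.Random(5)
trials = 100_000
stay_wins = sum(monty_trial(False, rng) for _ in range(trials))
switch_wins = sum(monty_trial(True, rng) for _ in range(trials))
# Staying wins ~1/3 of the time; switching wins ~2/3.
```

The asymmetry comes precisely from Monty's knowledge: he never opens the prize door, so his choice carries information.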
Barry,
About that thread you deleted yesterday — what is your explanation of your behavior?
Keith,
Your buddy Jeffrey Shallit you keep talking about, is that the same ‘barking mad’ Jeffrey Shallit who claims that a Shakespearean sonnet is “more random” than keyboard pounding?
If so, he is a fine one to talk about “spouting nonsense”.
Box,
Barry is the one who brought Shallit up in support of his argument, not me. Take a look at the vanishing OP.
I’m curious. What do you think of Barry’s behavior?
It is worth saying it again:
If there aren’t any cases in which something complex also has a high probability of occurring, then it is clear that Kolmogorov complexity and probability go hand in hand.
Keith:
Now I’m confused … if you did not bring Jeffrey Shallit up, then who is the guy – by the name of Jeffrey Shallit – that you quote extensively in your post #81??
So, here is my question again: are we talking about the same ‘barking mad’ Jeffrey Shallit who claims that a Shakespearean sonnet is “more random” than keyboard pounding?
And if so, wouldn’t you agree that Jeffrey Shallit is a fine one to talk about “spouting nonsense”?
to Joe: consider a “coin” that is weighted so it comes up heads 99% of the time, and throw 20 of these coins. These would come up all heads about 82% of the time, which is a pretty high probability. However, 20 heads would have low Kolmogorov complexity because a simple rule could describe them. This is a case where Kolmogorov complexity and probability do not go hand in hand.
Box @ 103,
Barry appealed to Shallit in Barry’s now-disappeared post. Keith was responding to that post.
WRT Shallit and randomness, you have to understand Shallit’s approach to ID discussions. His usage of terms is always technical and rigorous, and he assumes (or pretends) that IDists are using the terms likewise. So when IDists talk about information, randomness, or even specified complexity, Shallit responds as if the IDists have the formal definitions of those terms in mind. One could argue that he’s paying IDists a compliment.
Most people have an informal understanding of the term “random”, which they associate with non-determinism or arbitrariness. But in formal randomness measures, a highly random string may be produced by a deterministic process, or by deliberate design. What matters is the string itself, not the process that produced it.
So while it may seem that the product of arbitrary tapping on the keyboard must be more random than an intentionally crafted sonnet, such is not necessarily the case for formal definitions of “random”.
There isn’t any complexity in your example. The high probability matches the simple rule.
Joe:
First of all, non sequitur. Even if everything that’s complex is also improbable, it could still be the case that some things that are improbable are not complex.
And actually, there are both types of mismatches. To get something that’s complex but highly probable, consider applying a ROT13 to a very complex string. With a probability of 1 you’ll get a particular new string, and that string will be complex.
And for an improbable outcome that isn’t complex, consider the string in #36.
You have got it backwards. We don’t have to understand Jeffrey Shallit, it’s exactly the other way around. And J.S. fails miserably.
Since it was Jeffrey Shallit who commented on an article by Barry.
A quick summary of the article: Barry offered two strings of text. String #1 was created by Barry haphazardly running his hands across his computer keyboard. String #2 was the first 12 lines of Hamlet’s soliloquy.
Now what should be obvious – in the context of an ID-debate – to anyone with half a brain is that string #1 is obviously random and string #2 is obviously DESIGNED (the opposite of random).
So what does Jeffrey Shallit do? He entered the ID-debate but does he understand what it is all about? No, he hasn’t got a clue.
So, Jeffrey Shallit runs both strings through a stupid compression algorithm and states that a Shakespearean sonnet is “more random” than keyboard pounding, thereby ‘proving’ that Barry is wrong.
Talk about missing the point ….
R0bb @36:
Was Orgel’s discussion related to the question of the process that produced such features, that is to say, in the origin of such features?
His book, I believe, was called “The Origins of Life”?
Aleta @64:
Good Punnett Square riddle!
And a good example of how information can be used to help narrow a search space.
Here is the follow up riddle:
The neighbor walks across the street with the boy and says: “I’d like you to meet my son. He is our oldest.”
Now, what is the probability that the younger child is a boy? 🙂
Keith S, Aleta:
Sorry, I don’t get it. For me, the probability that the other child is a boy is still 1/2 – isn’t that independent of the sex of the child at the window?
I drew trees and diagrams, filled out charts, but the result is always the same – unless you claim that boys dominate windows…
Re: 110
1/2. Others might like an explanation
The difference is between knowing that one of the children is a boy and knowing that a particular child (the oldest) is a boy.
There are four possibilities, and assume the first in each pair is the oldest.
BB
BG
GB
GG
In problem #1, at 64, knowing that we saw a boy eliminates the GG possibility. Of the remaining three possibilities, if we saw a boy, only one of the three has another boy, so the probability the other child is a boy is 1/3. In Eric’s problem we are told the first child is a boy, which eliminates both GG and GB. Of the remaining two possibilities, one has the youngest child a boy, so the probability is 1/2.
in 104, I wrote,
Joe replied,
This doesn’t make sense.
First consider what Wikipedia says (and I’m sure other sources would confirm this.)
So all strings have some measure of Kolmogorov complexity. You can’t say that “there isn’t any complexity” in a string.
Consider this. As above, with a coin that comes up heads 99% of the time, the string HHHHHHHHHH has a probability of 0.99^10 ≈ 90%. The string TTTTTTTTTT has a probability of 0.01^10 = 10^-20, which is extremely small.
HHHHHHHHHH is quite probable, and TTTTTTTTTT is extremely improbable, but both have the same Kolmogorov complexity: both can be described with the same “computability resources”, one as “10 heads” and one as “10 tails”.
Thus Kolmogorov complexity and improbability do not “go hand in hand.”
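The point can be made concrete. Using zlib-compressed length as a crude, admittedly imperfect stand-in for Kolmogorov complexity (an assumption for illustration; true Kolmogorov complexity is uncomputable), the two strings get identical "complexity" scores while their probabilities under the 99%-heads coin differ by about twenty orders of magnitude:

```python
import zlib

heads = "H" * 10
tails = "T" * 10

# Probabilities under a coin that lands heads 99% of the time.
p_heads_run = 0.99 ** 10   # roughly 0.90
p_tails_run = 0.01 ** 10   # roughly 1e-20

# Crude complexity proxy: length of the zlib-compressed string.
k_heads = len(zlib.compress(heads.encode()))
k_tails = len(zlib.compress(tails.encode()))
# Equal "complexity", wildly unequal probability.
```

Two strings with the same descriptive complexity but vastly different probabilities: complexity and improbability do not go hand in hand.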
keiths:
Let’s say Barry posted something and then realized he disagreed with what he wrote and so deleted it. So what?
Every user has the opportunity to do just that.
What do folks think of keiths’s behavior?
Should he maybe use that feature and delete most of his posts after submitting them?
Hi Orloog. See my explanation at #112 and see if that makes sense to you, and then see how my problem differs from Eric’s, for which your reasoning applies.
Thanks, Aleta @112. Good explanation.
Orloog, taken together, Aleta’s riddle and mine make a great example of how information helps narrow the range of possibilities. In other words, an infusion of information helps narrow the search space. Very simple example, but quite clear.
Indeed, one possible way of defining information is “the elimination of possibilities.”
Regarding Kolmogorov complexity, I’ve always been of the view (though I am certainly open to being corrected) that Kolmogorov complexity has little to do with what we are interested in for design purposes.
keith @29 commented that Orgel was interested in Kolmogorov complexity, not probability. Much of the back and forth on this thread depends on whether that is in fact the case.
Does anyone have a clear statement from Orgel that he was primarily interested in Kolmogorov complexity and not probability? After all, he wrote a book about the origins of life, so presumably he was interested — one would think — in the origin and source of the specified complexity he observed in living organisms, not so much on the compressibility of that complexity for modern information systems purposes.
It seems strange that Orgel would be focusing only on Kolmogorov complexity and not on the probabilities that relate to the origin of such specified complexity. I’m wondering if the whole discussion has been taken down the garden path by comment #29.
Again, however, while it may not have much relevance to the design inference I’d nevertheless be curious to know whether Orgel was in fact only discussing algorithmic compressibility as opposed to probability in his book on the origins of life. If so, then it seems he may have been off on the wrong track.
Eric Anderson:
No, I didn’t. Please read more carefully, Eric.
Orgel was interested in both complexity and improbability, but unlike Dembski, he didn’t conflate the two.
keith @29:
It is a simple question I am asking:
Is Orgel in his book really focused on Kolmogorov complexity rather than improbability? Did Orgel say he was talking about Kolmogorov complexity in the context of the origin of specified complexity?
That is the question.
If he did, then he was off base. If not, then you have gotten us off track.
Eric,
Orgel was talking about Kolmogorov complexity in the quote you gave us:
Does that mean that he wasn’t interested in probability? Of course not. You can’t do OOL work without taking probability into account.
Orgel was smart enough to keep complexity separate from improbability. Dembski conflated the two.
Eric @ 119: Just to be clear, the dispute in this thread is over the claim that Orgel and Dembski mean the same thing when they say “complexity”. Setting aside issues like origins, design, and the quality of Orgel’s work, what is your take on this claim?
Mung @ 54:
You’ve pointed out that Orgel, Kolmogorov, and Dembski all use the word “information”. I’ll gladly respond when you tell me what conclusion you draw from this.
Eric,
After you’ve answered R0bb, a challenge awaits on your own thread.
Not to be contrarian, but I’m going to have to disagree with those who gave an answer of 1/3 to Aleta’s riddle. The solution in #112 assumes that BB, BG, and GB are all equally likely. But given that we’ve seen a boy, BB is actually twice as likely as each of the others. So the answer is in fact 1/2.
R0bb:
Interesting! I think I understand your logic, and I think I can show where it goes wrong, but let me think about it some more and reply later. In the meantime, I have some other comments to write. 🙂
Thank you, R0bb: I even ran a simulation, as I didn’t trust my calculations any longer; the result was 1/2…
Keith S, Orloog
Aleta is right – the probability is 1/3 not 1/2. It is a well-known paradox.
Prior to seeing the boy at the window the four possibilities: BB, GB, BG and GG are all equally probable. Observing the boy eliminates GG but does not change the relative probability of the other three possibilities.
I am interested to know how you did your simulation Orloog. You can be pretty certain there is something wrong with it as the maths is bomb proof as far as I know.
#127
This inspired me to look at the Wikipedia article on the paradox. Apparently it is more complex and debatable than I thought. (I just missed the deadline for deleting my comment!)
Mark Frank:
Don’t delete comments! That way lies chaos.
Just think of the 20-minute window as an opportunity to correct typos and add additional comments prefaced by “ETA”=”edited to add”.
(I speak from experience. Internet discussions can become chaotic if people start deleting comments.)
Even worse is when people delete entire threads.
Mark, thank you for the link to the Wikipedia article. According to it, and the way Aleta phrased the question, the answer is indeed 1/2.
Barry,
When are you going to explain why you deleted that thread the other day?
Re the boy-girl paradox, I think I was wrong and that R0bb and Orloog are right.
But I could be wrong. 🙂
In any case, a good night’s sleep should help clear things up.
This is very interesting. I had forgotten about the Wikipedia article on the problem when I posted it Wednesday, but I had read it before. The article says that the way I stated the problem leads to the answer of 1/2, not 1/3, but that the “at least one boy” formulation leads to 1/3.
I see that, and I think RObb offered a good explanation why this is the case:
The main issue seems to be how you find out that there is at least one boy – whether through a random process by looking in the window (which leads to a probability of 1/2), or by being told there is at least one boy. For instance, if the father walked up to you and said “my boy Bill is sick” and then went into the house, I think the interpretation that leads to an answer of 1/3 might still hold.
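The two conditioning schemes can be separated cleanly in a simulation (a sketch; the "window" is modeled, by my assumption, as one of the two children chosen uniformly at random):

```python
import random

rng = random.Random(1)
trials = 400_000
window_boy = window_both = 0   # condition: a random child seen at the window is a boy
told_boy = told_both = 0       # condition: we are told "at least one is a boy"
for _ in range(trials):
    kids = [rng.choice("BG"), rng.choice("BG")]
    shown = rng.randrange(2)   # which child happens to appear at the window
    if kids[shown] == "B":
        window_boy += 1
        if kids[1 - shown] == "B":
            window_both += 1
    if "B" in kids:
        told_boy += 1
        if kids[0] == "B" and kids[1] == "B":
            told_both += 1
# Window observation: P(other child is a boy) converges to ~1/2.
# "At least one boy": P(both are boys)        converges to ~1/3.
```

The simulation shows both answers are right for their respective conditioning: how you learn that there is a boy determines which probability applies.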
R0bb:
Examples please. The string in 36 is not highly improbable as randomness did not produce it.
Aleta:
That is false. Allegedly they have the same probability but that is also false.
You cannot use a totally biased example to make your case. Probabilities only matter on a level playing field.
Try again
And AGAIN:
If there aren’t any cases in which something complex also has a high probability of occurring, then it is clear that Kolmogorov complexity and probability go hand in hand.
So far so good…
Joe, I believe you are wrong on both counts.
1. You say that it is false that all strings have some measure of Kolmogorov complexity. I quoted the Wikipedia article that makes it clear that all strings do have some measure of Kolmogorov complexity. Can you provide some evidence or citation to back up your claim that some strings don’t have any measure of Kolmogorov complexity?
2. When I offered the example that, for a coin which turns up heads 99% of the time, P(10 heads) = 82% and P(10 tails) = 10^-20, you replied,
Are you saying that the only place probabilities matter is when all events have equal probability? If so, that is certainly wrong. Many (most) real-world problems involving probability involve situations where some events are more likely than others. When I taught beginning stats, we had all sorts of problems involving such things as the reliability of medical tests, random sampling of products for defects, etc., where the probability of success and the probability of failure were very far from a 50-50 split – in fact some were even more unbalanced than the 99-1% split in my example.
That is incorrect as the KC is a measure of the description of the thing. The strings have the same probability, however that is given a purely random occurrence.
Wikipedia:
The Kolmogorov complexity … of an object, such as a piece of text, is a measure of the computability resources needed to specify the object.
Geez you can’t even understand your reference.
No. A level playing field is required, though.
Joe:
I’ll repeat what I said before: Apply a ROT13 to a very complex string. The resulting new string has a probability of 1 because ROT13 is a deterministic operation, and the new string is also guaranteed to be very complex.
Of course, you can easily come up with an ad hoc reason to reject this response to your challenge. The problem is that your challenge is so vaguely conceived that the goalposts are highly mobile. Expressing your challenge in mathematical notation would be a good first step toward planting the goalposts.
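The ROT13 point is easy to demonstrate (a sketch; compressed length again stands in, imperfectly and by my assumption, for complexity): the transform is deterministic, reversible, and leaves the string's compressibility essentially untouched.

```python
import codecs
import random
import zlib

rng = random.Random(7)
# A "very complex" string: 2000 random lowercase letters.
complex_str = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(2000))

# ROT13 is deterministic: given the input, the output has probability 1.
rotated = codecs.encode(complex_str, "rot_13")

k_before = len(zlib.compress(complex_str.encode()))
k_after = len(zlib.compress(rotated.encode()))
# The rotated string compresses just as poorly as the original, i.e. it is
# just as "complex", even though producing it involved no chance at all.
```

So here is a string that is highly complex yet had probability 1 of arising from its deterministic process, which is exactly the counterexample to "complexity and improbability go hand in hand."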
But Joe, KC has nothing to do with the probability of the string occurring – it is just a measure of a property of the string irrespective of how it came about. The sentence you quoted says nothing about probability – nothing about where the string came from, and the example in the article makes that clear.
And what do you mean by “a level playing field”? What is there about my example that is not a “level playing field”?
Aleta:
KC has to do with the string’s description and not all strings are the same. The two in the example are not the same. And your example has a weighted coin- it is not a level playing field.
R0bb:
You are very desperate.
R0bb- Leave the complex strings alone and try to meet my challenge. And your claim of my being vague is laughable.
I conclude that Joe is hopeless – his answers don’t even begin to address my points, and are in fact contradictory.
When I wrote,
Joe replied, “KC has to do with the string’s description and not all strings are the same. The two in the example are not the same.”
Of course not all strings are the same – Joe’s response doesn’t address the point at all.
And when I wrote,
Joe replied, “No. A level playing field is required, though.”
But when I asked him to explain what he meant by a level playing field, he replied, “And your example has a weighted coin- it is not a level playing field.”
This directly contradicts what he said earlier, where he agreed that all events didn’t need to have equal probability.
I’ll move on to better things in my life.
Aleta:
Yet you said:
Please make up your mind.
And if you don’t know what is meant by a level playing field then perhaps you shouldn’t be having a discussion on probabilities.
Only in your mind. Not having an equal probability does not mean there isn’t a level playing field.
I conclude that Aleta is totally hopeless.
Proof that Aleta is totally hopeless:
I had said:
That is incorrect as the KC is a measure of the description of the thing. The strings have the same probability, however that is given a purely random occurrence.
To which Aleta responded:
Notice that I never said that KC = probability. I never even implied it.
Umm the sentence I quoted was to refute what you said earlier:
Are you that daft that you cannot remember what you posted?
keith @120:
Dembski discussed Kolmogorov complexity in his writings, taking time to show both the relevance and the distinction between Kolmogorov complexity and specified complexity. I’m not sure where your allegation of Dembski’s conflation comes from. Dembski has written an incredible quantity over the years, so there might be some quote someone could find somewhere that can be understood as less than clear on the point. But generally Dembski is quite clear about the distinction. Furthermore, we can be relatively confident that he knows more about the topic than either you or I.
—–
R0bb @121:
Thanks, R0bb. That is a helpful way forward in the discussion.
I’m not sure it makes any sense to set aside issues like origins. Particularly, as you have pointed out, the question of the origin of a structure/sequence/information is linked to the probability side (rather than just the Kolmogorov descriptive side). If Orgel was talking about the origin of complex structures, then he was definitely interested in probability, as that is the only thing that would be relevant (Kolmogorov is essentially irrelevant). That doesn’t mean he wouldn’t discuss a concept like Kolmogorov complexity — just as Dembski does in his writings.
My personal take? I am not familiar enough with Orgel’s work to be able to say precisely what he was driving at. But if he is talking about the origin of complex cellular structures then he is most definitely not just talking about Kolmogorov complexity.
Furthermore, it is quite common for a later researcher to build upon the ideas of an earlier researcher. In doing so, the later researcher will inevitably add a nuance, or a slightly different take, or a clarification, or a new way of looking at things. But we can still see the chain of thought linking the two and still would be justified in saying that the later researcher is building upon the ideas of the former, or that the former was describing essentially the same thing as the later, albeit the former would obviously not have included the later’s additional thoughts or nuances on the topic.
Dembski himself makes the tie:
So Dembski himself says that Orgel’s concept is not a “precise analytic account” like Dembski’s effort. He also says that Orgel “used specified complexity loosely,” while Dembski feels he has “formalized it.”
This means, obviously, that Dembski has added to or developed Orgel’s concept.
Thus, is Dembski talking about exactly the same thing as Orgel, in the sense of simply repeating verbatim Orgel’s thoughts on the topic? Of course not; he says he is going further and developing the concept beyond Orgel’s discussion. Is Dembski, in developing his own take, talking about largely the same thing as Orgel? Yes.
My take on the “dispute in this thread” is that people are straining at gnats. Dembski is clearly building on Orgel and they both talk about specified complexity in the origins context. Those seem to be indisputable facts. Unfortunately, some people seem so obsessed with bashing Dembski that they refuse to see the practical realities and have gotten into a dispute that turns on a single quote here or a phrase there. Together with what appears to be a false allegation by keith that Dembski conflates concepts, the usefulness of the discussion may be less than it otherwise could have been.
I think it would be useful for us all to better understand Orgel’s approach, as well as how Dembski has built upon it in his work on the design inference. Unfortunately, we’re stuck with a take-no-prisoners battle by some who are intent on discrediting Dembski at all costs, even with unfounded allegations.
Orgel was smart enough not to blindly apply probability theory, and he found/developed a methodology to help distinguish those circumstances in which probability theory (alone?) does not apply.
Kolmogorov, nor Orgel, duh
Eric:
In the passages where Orgel talks about specified complexity, he is not discussing origins. He presents specified complexity as a characteristic property of life vs. non-life, not an indicator of design vs. non-design. So there is no need to bring up probabilities, and indeed he doesn’t. He makes it very clear that he is referring to Kolmogorov complexity.
keiths #120:
Eric #147:
It’s simple, and it’s right there in
1) the name itself: complex specified information; and
2) the equation, which includes P(T|H), a probability; and
3) the fact that Dembski attributes CSI when the probability becomes small enough.
Barry,
When will you explain why you deleted an entire thread, along with two of Joe’s comments?
Barry,
A reminder. See above.
Barry,
Before you leave for your trip, I hope you’ll explain to us why you deleted an entire thread, including comments.
keith s, not that I hold much hope you will acknowledge it, but there is an empirical falsification of the materialistic, neo-Darwinian claim that information is emergent from a material basis. A falsification of neo-Darwinism that does not rely on probabilistic calculations, but instead relies on observational evidence.
Contrary to materialistic thought, information is now shown to be its own independent entity which is separate from matter and energy. In fact, information is now shown to be physically measurable.
Moreover, the total information content of the bacterial cell, when it is calculated from this now ‘measurable’ thermodynamic perspective, is far larger than just what is encoded on the DNA.
As well, it is important to note that, counter-intuitive to materialistic thought (and to every kid who has ever taken a math exam), a computer does not consume energy during computation but will only consume energy when information is erased from it. This counter-intuitive fact is formally known as Landauer’s Principle.
It should be noted that Rolf Landauer himself, despite the counterintuitive fact that information is not generated by an expenditure of energy but can only be erased by an expenditure of energy, presumed that the information in a computer was merely ‘physical’, i.e. merely emergent from a material basis, because the information in a computer required energy to be spent for the information to be erased from it. Landauer held this materialistic position in spite of objections from people like Roger Penrose and Norbert Weiner who held that information is indeed real and has its own independent existence separate from matter-energy.
Yet the validity of Landauer’s materialistic contention that ‘Information is physical’ has now been overturned, because information is now known to be erasable from a computer without consuming energy.
Moreover, if physically measuring information, and/or erasing information from a computer without using energy, were not bad enough for the Darwinian belief that information is merely emergent from a material basis, it is now shown, by using quantum entanglement as a ‘quantum information channel’, that material reduces to information instead of information reducing to material as is believed in Darwinian materialistic presuppositions.
And, as mentioned previously, by using this ‘measurable’ quantum information channel of entanglement, matter-energy has been reduced to quantum information (of note: energy is completely reduced to quantum information, whereas matter is semi-completely reduced, with the caveat being that matter can be reduced to energy via E=mc^2).
In fact an entire human can, theoretically, be reduced to quantum information and teleported to another location in the universe:
Thus not only is information not reducible to an energy-matter basis, as is presupposed in the reductive materialism of Darwinism, but in actuality both energy and matter ultimately reduce to an information basis, as is presupposed in Christian Theism (John 1:1-4).
Moreover, this ‘spooky action at a distance’, i.e. beyond space and time, quantum entanglement/information, by which energy and matter are reducible to an information basis, is now found in molecular biology on a massive scale. That is, ‘non-local’, beyond space and time, quantum entanglement is now found in every DNA and protein molecule.
That quantum entanglement, which conclusively demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints (Bell, Aspect, Leggett, Zeilinger, etc.), should be found in molecular biology on such a massive scale is a direct empirical falsification of Darwinian claims. For how can the ‘non-local’ quantum entanglement ‘effect’ in biology possibly be explained by a material (matter/energy) cause when the quantum entanglement effect falsified material particles as its own causation in the first place? Appealing to the probability of various ‘random’ configurations of material particles, as Darwinism does, simply will not help, since a timeless/spaceless cause must be supplied, which is beyond the capacity of the material particles themselves to supply!
In other words, to give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments over various ‘special’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply, since the cause does not reside within the material particles in the first place!
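The Bell-test results alluded to above (Bell, Aspect, Zeilinger, etc.) rest on the CHSH inequality: any local ‘hidden variable’ account of the measured correlations must satisfy |S| ≤ 2, whereas quantum mechanics predicts values up to 2√2. A minimal numerical sketch, assuming the textbook singlet-state correlation E(a,b) = −cos(a−b) and the standard angle choices:

```python
import math

# CHSH inequality: local hidden-variable models obey |S| <= 2,
# while quantum mechanics allows up to 2*sqrt(2) (Tsirelson's bound).
def E(a: float, b: float) -> float:
    """Quantum correlation for singlet-state measurements at angles a and b."""
    return -math.cos(a - b)

# Textbook measurement angles that maximize the quantum violation:
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.4f} (classical bound 2, quantum max {2 * math.sqrt(2):.4f})")
```

This only reproduces the standard quantum prediction; the experimental significance comes from laboratory Bell tests measuring |S| > 2 with entangled particles.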
And although Naturalists/Materialists have proposed various far-fetched naturalistic scenarios to try to get around the Theistic implications of quantum non-locality, none of those far-fetched naturalistic solutions is, in itself, compatible with the reductive materialism that undergirds neo-Darwinian thought.
Thus, as far as empirical science itself is concerned, Neo-Darwinism is falsified in its claim that information is ‘emergent’ from a material basis.
Of related interest to ‘non-local’, beyond space and time, quantum entanglement ‘holding life together’: in the following paper, Andy C. McIntosh, professor of thermodynamics and combustion theory at the University of Leeds, holds that non-material information is what constrains the cell to be so far out of thermodynamic equilibrium. Moreover, Dr. McIntosh holds that regarding information as independent of energy and matter ‘resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions’.
Here is a recent video by Dr. Giem that conveys the main points of Dr. McIntosh’s paper very well for the lay person:
Of related interest, here is the evidence that quantum information is in fact ‘conserved’:
Besides providing direct empirical falsification of neo-Darwinian claims as to the generation of information, the implication of finding ‘non-local’, beyond space and time, and ‘conserved’, quantum information in molecular biology on a massive scale is fairly, and pleasantly, obvious:
Verse and Music:
Do the materialists have any explanation for non-locality, BA77? It just seems to me to knock materialism on the head, like an angler’s ‘priest’ does to a fish he’s caught. Giving it the last rites….
Axel, as stated before, although naturalists have postulated some far-fetched scenarios, such as many worlds, etc., to deal with quantum mechanics, none of those scenarios is, in itself, compatible with the reductive materialism that undergirds neo-Darwinian thought.
“[while a number of philosophical ideas] may be logically consistent with present quantum mechanics, …materialism is not.”
Eugene Wigner
Quantum Physics Debunks Materialism – video playlist
https://www.youtube.com/watch?list=PL1mr9ZTZb3TViAqtowpvZy5PZpn-MoSK_&v=4C5pq7W5yRM
Why Quantum Theory Does Not Support Materialism By Bruce L Gordon, Ph.D
Excerpt: The underlying problem is this: there are correlations in nature that require a causal explanation but for which no physical explanation is in principle possible. Furthermore, the nonlocalizability of field quanta entails that these entities, whatever they are, fail the criterion of material individuality. So, paradoxically and ironically, the most fundamental constituents and relations of the material world cannot, in principle, be understood in terms of material substances. Since there must be some explanation for these things, the correct explanation will have to be one which is non-physical – and this is plainly incompatible with any and all varieties of materialism.
http://www.4truth.net/fourtrut.....8589952939
Thank you, BA77. As I thought. How could it be otherwise, given the nature of matter and the concept of non-locality?