Uncommon Descent Serving The Intelligent Design Community

“Conservation of Information Made Simple” at ENV


Evolution News & Views just posted a long article I wrote on conservation of information.

EXCERPT: “In this article, I’m going to follow the example of these books, laying out as simply and clearly as I can what conservation of information is and why it poses a challenge to conventional evolutionary thinking. I’ll break this concept down so that it seems natural and straightforward. Right now, it’s too easy for critics of intelligent design to say, ‘Oh, that conservation of information stuff is just mumbo-jumbo. It’s part of the ID agenda to make a gullible public think there’s some science backing ID when it’s really all smoke and mirrors.’ Conservation of information is not a difficult concept and once it is understood, it becomes clear that evolutionary processes cannot create the information required to power biological evolution.” MORE

TEASER: The article quotes some interesting email correspondence that I had with Richard Dawkins and with Simon Conway Morris, now going back about a decade, but still highly relevant.

Comments
DiEB:
The probability for a success is 1/6 * 1/2 + 5/6 * 1/10 = 1/6. So the problem didn’t become more difficult.
But you only get one choice. Either you pick the 1/2 machine OR you pick a 1/10th machine. So it wouldn't be a matter of changing the "+" to a "*"; you would just delete the "+" and everything to the right of it. What that says is that either pick will give you the same odds of achieving your goal. Which means if you had two picks you would double your chances, as DiEB said.Joe
August 31, 2012 at 12:17 PM PDT
RE: #56 "I see a tiny problem at Dr. Dembski’s toy example: Could you please correct your miscalculation, Dr. Dembski?" The total probability of securing item 6 is indeed 1/6, but Dembski's qualification is clear:
"The probability of finding item 6 using this machine, once we factor in the probabilistic cost of securing the machine, therefore ends up being 1/6 x 1/2 = 1/12."
This is correct. We wanted to specifically secure item 6, so we incurred the probabilistic cost of finding the correct machine that would increase our chances to 1/2. This gives us 1/12 probability of securing item 6. P(A) = 1/6 and P(B|A) = 1/2. So given that event A occurred, at a cost of 1/6, event B costs an additional 1/2. If we pay 1/6 for the required machine, our chances of getting item 6 are 1/12. Dembski's not referring to the total probability of securing item 6; he's made a clear qualification that our chances have been reduced once we have already paid the cost of securing the desired machine.Chance Ratcliff
August 31, 2012 at 11:57 AM PDT
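Both figures being argued over in the comments above are easy to check numerically. The sketch below (illustrative only; the trial count and function name are my own) follows the toy example as described in the thread: one machine in six finds item 6 with probability 1/2, the other five with probability 1/10, and a machine is drawn uniformly at random. The unconditional success rate comes out near 1/6, while the joint probability of first getting the better machine and then item 6 comes out near 1/12.

import random

def run_trial(rng):
    good_machine = rng.random() < 1/6          # chance of grabbing the better machine
    p_success = 1/2 if good_machine else 1/10  # that machine's chance of producing item 6
    return good_machine, rng.random() < p_success

rng = random.Random(0)
N = 1_000_000
total_hits = good_machine_hits = 0
for _ in range(N):
    good, hit = run_trial(rng)
    total_hits += hit
    good_machine_hits += good and hit

print("P(item 6)                    ~", total_hits / N)         # ~1/6, the law of total probability
print("P(better machine AND item 6) ~", good_machine_hits / N)  # ~1/12, the figure quoted from the article

On this reading the two numbers are not in conflict: 1/6 is the overall chance of success, while 1/12 is the chance of the specific path that goes through the better machine.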
I wrote an email to Dr. Dembski, using the address listed at evoinfo.org/people . Unfortunately wdembski [at] swbts [dot] edu doesn't work any longer.DiEb
August 31, 2012 at 09:57 AM PDT
Joe:
Please explain why your "+" should not be a "*".
It's called the Law of total probability. Consider the events T: "the target 6 is identified" and S: "the better machine is chosen". Then P(S) = 1/6, P(T|S) = 1/2, P(T|not(S)) = 1/10, and by said law we get: P(T) = P(T|S)*P(S) + P(T|not(S))*P(not(S)) = 1/2 * 1/6 + 1/10 * 5/6 = 1/6DiEb
August 31, 2012 at 09:52 AM PDT
R0bb: First, I suggest you take a look at the second law of thermodynamics, from the statistical end. You will see that it is about fluctuations, and it is about populations and what happens to fluctuations as pop goes up. Second, there are statistical laws that are perfectly valid being stated as expected outcomes. Then, bring to bear the fluctuations issues as pop size goes up enough and you see where something that is strictly mathematically/logically/physically possible becomes observationally so utterly implausible as to be reliably not the case. If you will look at comment 70 of the 18 Q's thread, here, you will see a clip from WmAD's recent made simple article:
Mathematically speaking, search always occurs against a backdrop of possibilities (the search space), with the search being for a subset within this backdrop of possibilities (known as the target). Success and failure of search are then characterized in terms of a probability distribution over this backdrop of possibilities, the probability of success increasing to the degree that the probability of locating the target increases . . . . Take an Easter egg hunt in which there’s just one egg carefully hidden somewhere in a vast area. This is the target and blind search is highly unlikely to find it precisely because the search space is so vast. But there’s still a positive probability of finding the egg even with blind search, and if the egg is discovered, then that’s just how it is. It may be, because the egg’s discovery is so improbable, that we might question whether the search was truly blind and therefore reject this (null) hypothesis. Maybe it was a guided search in which someone, with knowledge of the egg’s whereabouts, told the seeker “warm, warmer, no colder, warmer, warmer, hot, hotter, you’re burning up.” Such guidance gives the seeker added information that, if the information is accurate, will help locate the egg with much higher probability than mere blind search — this added information changes the probability distribution . . . . The Easter egg hunt example provides a little preview of conservation of information. Blind search, if the search space is too large and the number of Easter eggs is too small, is highly unlikely to successfully locate the eggs. A guided search, in which the seeker is given feedback about his search by being told when he’s closer or farther from the egg, by contrast, promises to dramatically raise the probability of success of the search. The seeker is being given vital information bearing on the success of the search. But where did this information that gauges proximity of seeker to egg come from? Conservation of information claims that this information [the guide for the search] is itself as difficult to find as locating the egg by blind search, implying that the guided search is no better at finding the eggs than blind search once this information must be accounted for . . .
The language above is clearly about expectations under reasonable circumstances, and about the empirical reliability of a principle. It is explicit that it is logically and physically possible for blind search to succeed. But on the scale of the space to be searched and the relative isolation of the hot zone, there is a maximal implausibility of blind search succeeding, to the point where an alleged blind search that is successful is suspect for cheating. The matter then moves to the issue of guiding the search in a way that enhances the probability of success, and shows how, if the search is to be found blindly -- i.e. without intelligence -- then it is subject to a search itself that is comparably difficult to the direct search, or worse. It seems to me that the example given, albeit a toy, aptly shows that. So, WmAD's remarks seem to me unexceptional in what they are affirming and the cautions they point to. I think your critiques need to be rebalanced in that light. There are such things as expected outcomes that are so weighted by the balance of probabilities, given the scope of a relevant space, that there is no good reason to expect to observe a truly improbable outcome on the relevant scope of resources, lab, planet, solar system or observed cosmos. And when something is cast in such fundamentally thermodynamic terms, to point out what is mathematically possible, or what happens with toy examples to the contrary of the overwhelming expectation, is distractively irrelevant to the point where it can easily become a red herring, strawman fallacy. KF PS: I suggest you look here in my always linked note, on the relevant thermodynamics perspective.kairosfocus
August 31, 2012 at 09:39 AM PDT
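The "warmer/colder" guidance described in the quoted passage can be made concrete with a short sketch (the space size, query budget, and feedback model below are my own assumptions, not taken from the article): blind sampling of a large space almost never finds the single target within the budget, while a search that receives directional feedback after each guess finds it essentially every time.

import random

def blind_search(target, size, budget, rng):
    """Sample cells uniformly at random; succeed if the hidden target is ever hit."""
    return any(rng.randrange(size) == target for _ in range(budget))

def guided_search(target, size, budget):
    """Each guess gets 'higher/lower' feedback, so the remaining range can be bisected."""
    lo, hi = 0, size - 1
    for _ in range(budget):
        guess = (lo + hi) // 2
        if guess == target:
            return True
        lo, hi = (guess + 1, hi) if guess < target else (lo, guess - 1)
    return False

rng = random.Random(1)
SIZE, BUDGET, TRIALS = 10_000, 100, 2_000
target = rng.randrange(SIZE)

blind = sum(blind_search(target, SIZE, BUDGET, rng) for _ in range(TRIALS)) / TRIALS
guided = sum(guided_search(target, SIZE, BUDGET) for _ in range(TRIALS)) / TRIALS
print("blind success rate :", blind)   # roughly BUDGET/SIZE, about 1%
print("guided success rate:", guided)  # 1.0 -- the feedback carries the needed information

The question the article then presses is where that feedback itself comes from and what it costs to obtain.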
DiEB:
Can you spot the error in his calculation? The probability to find the correct machine and then the target is indeed 1/12, but the probability to find the target via choosing a machine at random at first is 1/6, thanks to the symmetry of the problem: The probability for a success is 1/6 * 1/2 + 5/6 * 1/10 = 1/6. So the problem didn't become more difficult.
Please explain why your "+" should not be a "*".Joe
August 31, 2012 at 09:05 AM PDT
If the LCI fails in mathematically valid cases, is it a true law?
Can something be mathematically valid when applied to something that is invalid, such as your mangled sets? If so then how does it apply to the real world? And if it doesn't apply then why even bring it up unless you are not interested in a civil discussion?Joe
August 31, 2012 at 08:50 AM PDT
R0bb:
I did: Section 4.1.1 of the Search for a Search paper. |Ω| is 16, 3 of which have zero probability.
Please quote the part that says that. I cannot find anything that comes close to saying that. But obviously we see things differently so I need your help.Joe
August 31, 2012 at 08:47 AM PDT
There are 5 final states.
There are 5 POSSIBLE final states if and only if the ONE item can be in all three of the initial starting points at the same time. So yes, we definitely have a communication problem. Gotta go....Joe
August 31, 2012 at 08:18 AM PDT
Joe, we're having some serious communication problems. I've asked several questions throughout our discussion in an effort to find the points of communication breakdown. I realize that it would take some work to answer all of those questions, but I think that answering them is necessary in order for us to understand each other. So I have two more very important questions: 1. Are you interested in doing the work it takes for us to understand each other? 2. Are you interested in having a civil discussion, free of taunts?R0bb
August 31, 2012 at 07:55 AM PDT
Joe:
I never said there are 5 choices.
You are by saying there are 5 possible outcomes. That is wrong. YOU said:
Finally, p2 is 1/5 since the target consists of only one of the final five states.
There are 5 final states. For any given initial state, only 2 of the 5 final states are accessible.
Omega = 2 as that is the number of possibilities.
By "possibilities" I assume you mean outcomes with non-zero probability. But why must Ω contain only outcomes with non-zero probability? It's common in probability theory for samples spaces to contain zero probability outcomes.
For the LCI to qualify as a law, it has to hold up to any mathematically valid case that we throw at it.
LoL! No R0bb, something can be mathematically valid and be totally senseless to the real world.
I can't tell whether or not you agree with the statement to which you're responding, so I'll ask again the question I asked in #45: If the LCI fails in mathematically valid cases, is it a true law?
16 squares there R0bb. How did you arrive at 13?
Only 13 squares have non-zero probability in the alternate search.
Yes R0bb, you can mangle what Dembski said. Are you proud of that?
Where exactly did I mangle what Dembski said?
Mathematically valid but not properly stated. Obviously you are having problems following along.
I don't know what your criterion is for deeming something "properly stated", or exactly how I failed to meet it, so I'll settle for it being mathematically valid.
Can you provide ONE example in which Dembski/ Marks include zero-probability outcomes from omega? If not then you don’t have a point other than demonstrating dishonesty.
I did: Section 4.1.1 of the Search for a Search paper. |Ω| is 16, 3 of which have zero probability. Consider some of their other examples of active information, like Marks' example of finding a good recipe for boiling an egg. There are 66 possibilities, 22 of which are zero probability in the alternate search. If we don't allow zero-probability outcomes in Ω, then the active information is zero. Do you think Marks would agree that the active info is zero? Also consider Dembski's oft-used example of a treasure map. The map eliminates all outcomes except for one. Does the map have zero active information?R0bb
August 31, 2012 at 07:51 AM PDT
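R0bb's point about zero-probability outcomes can be put in numbers. In the sketch below the counts follow the 16-square example as described above, but treating the alternate search as uniform over its 13 reachable squares is my own simplifying assumption, made purely for illustration.

from math import log2

def active_info(p_endogenous, q_alternate):
    """Active information I+ = log2(q/p): bits of advantage over blind (uniform) search."""
    return log2(q_alternate / p_endogenous)

# Keeping the full 16-outcome space Omega:
p = 1 / 16   # endogenous probability |T| / |Omega| for a 1-square target
q = 1 / 13   # alternate search, assumed uniform over its 13 reachable squares
print(round(active_info(p, q), 3))   # ~0.30 bits of active information

# Shrinking Omega to exclude the 3 zero-probability squares:
p_shrunk = 1 / 13
print(round(active_info(p_shrunk, q), 3))   # 0.0 bits -- the measured improvement vanishes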
I see a tiny problem at Dr. Dembski's toy example: Could you please correct your miscalculation, Dr. Dembski? DiEb
August 31, 2012 at 07:01 AM PDT
F/N: Since some objectors may be tempted to be dismissive on what I mean by needle in haystack searches on steroids, cf. the estimate here at IOSE. (Cf also the remarks on islands of function here.) The linked estimate shows in outline that the ratio of the number of possible search operations of our solar system to the number of possibilities for just 500 bits is as a straw-sized sample to a haystack 1,000 light years on a side, using the time-tick of the fastest chemical interactions, and the typical estimate for the solar system's age and number of atoms. Such a blind search runs into the implications of sampling theory: a reasonable sized but relatively small sample will by overwhelming probability pick up the BULK of the distribution, not special, specifically and independently describable isolated zones. This, for the same reasons in effect as were already outlined for looking at frames of sampling.

To see why, imagine the exercise of making up a large bell shaped curve from bristol board or the like, dividing it into even stripes, carrying the tails out to say +/- 6 SD's. Now, go high enough that dropping darts would be essentially evenly distributed and drop a dart repeatedly. After about 30 hits, we would begin to see a picture of the bulk of the distribution, and after about 100 - 1,000 it would be pretty good. But the far tails will very seldom come up, as the hits in the board will tend overwhelmingly to go where there is a lot of space to get hit. That, in a nutshell, is the whole issue of how hard it is to get to CSI by chance based contingency.

And oh yes, there is a debate as to whether that dropped dart "actually" pursues a deterministic trajectory on initial and intervening circumstances. So, how could the result be a chance based, random, pretty flat distribution yielding a result proportionate to relative area? Let's go back to how my Dad and colleagues in Statistics 50 years ago would use a phone book as a poor man's random number table. The assignment of phone numbers is absolutely deterministic, based on the technology used. Surnames and given names are not random either. But, within a given local switching office [the first three digits of a 7-digit number] there is no credible correlation between names and local loop numbers on the whole [the last four digits]. So, by going to a page at random and picking the first number there to guide to another number that then sets the number of pages forward and back respectively to pick a number, there will be a succession of effectively random 4-digit numbers. Similarly, the digits of pi in succession are absolutely deterministic, but since there is no correlation between pi and the decimal numbers, the successive digits are essentially randomly distributed. So, clashing, uncorrelated deterministic streams of events can easily give rise to random distributions. Dart dropping has the same effect and gives a good enough result.

This, BTW, is also why there will be some mutations that may well be random in effect, though the evidence that mutations may be functionally incorporated into the system should be reckoned with. Or, do you think this is just for the immune system? Indeed, to promote robust adaptability it would make sense to build in a mechanism to do adaptations by chance incremental variation and niche exploitation. But that adaptation is not to be confused with how to get to the underlying body plan in the first place. That puts us into search space challenge territory, easily. KFkairosfocus
August 31, 2012 at 06:10 AM PDT
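The dart-dropping illustration above can be checked in a few lines (the sample size and cutoffs are my own choices): a modest number of random draws maps out the bulk of a bell curve, while the far tails, though strictly possible, essentially never show up.

import random

rng = random.Random(2)
N = 1_000
samples = [rng.gauss(0, 1) for _ in range(N)]   # N "darts" dropped on a standard bell curve

bulk = sum(abs(x) <= 2 for x in samples)        # hits in the central bulk
far_tail = sum(abs(x) >= 4.5 for x in samples)  # hits in the far tails

print(f"within 2 SD  : {bulk}/{N}")      # ~95% of the hits
print(f"beyond 4.5 SD: {far_tail}/{N}")  # almost always 0 at this sample size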
No, it means that if we always exclude zero-probability outcomes from Ω, then in some cases active information will decrease when a search improves.
Can you provide ONE example in which Dembski/ Marks include zero-probability outcomes from omega? If not then you don't have a point other than demonstrating dishonesty.Joe
August 31, 2012 at 05:30 AM PDT
As I said in the first sentence you quoted, I’m simply reiterating a point that Dembski used to make often.
Yes R0bb, you can mangle what Dembski said. Are you proud of that?
Are my “senseless” examples mathematically valid or not?
Mathematically valid but not properly stated. Obviously you are having problems following along.Joe
August 31, 2012 at 05:28 AM PDT
But if you think that endogenous probability = |T|/(number of choices), then reading through their examples will correct that misconception. For example, in section 4.1.1 of the Search for a Search paper, the endogenous probability is 1/16, but the number of choices is 13. How do you reconcile that with your objection to my example?
16 squares there R0bb. How did you arrive at 13?Joe
August 31, 2012 at 05:24 AM PDT
R0bb:
I never said there are 5 choices.
You are by saying there are 5 possible outcomes. That is wrong. YOU said:
Finally, p2 is 1/5 since the target consists of only one of the final five states.
There are only 5 final states if all three of the initial states are taken by items that move.
As I pointed out in #31, endogenous probability is defined as |T|/|Ω|, not |T|/(number of choices).
Omega = 2 as that is the number of possibilities.
For the LCI to qualify as a law, it has to hold up to any mathematically valid case that we throw at it.
LoL! No R0bb, something can be mathematically valid and be totally senseless to the real world.Joe
August 31, 2012 at 05:15 AM PDT
corrected link: Rascal Flatts – "Bless The Broken Road" – Official Music Video http://www.youtube.com/watch?v=8-vZlrBYLSU bornagain77
August 31, 2012 at 04:09 AM PDT
Moreover, from our greater understanding of the nature of physical reality, the argument for God from consciousness can now be framed like this:
1. Consciousness either preceded all of material reality or is an 'epi-phenomenon' of material reality.
2. If consciousness is an 'epi-phenomenon' of material reality then consciousness will be found to have no special position within material reality. Whereas conversely, if consciousness precedes material reality then consciousness will be found to have a special position within material reality.
3. Consciousness is found to have a special, even central, position within material reality.
4. Therefore, consciousness is found to precede material reality.
Three intersecting lines of experimental evidence from quantum mechanics that show that consciousness precedes material reality: https://docs.google.com/document/d/1G_Fi50ljF5w_XyJHfmSIZsOcPFhgoAZ3PRc_ktY8cFo/edit
i.e. Materialism had postulated for centuries that everything reduced to, or emerged from material atoms, yet the correct structure of reality is now found by modern science to be as follows:
1. Material particles (mass) normally reduce to energy (e=mc^2).
2. Energy and mass both reduce to information (quantum teleportation).
3. Information reduces to consciousness (the geometric centrality of conscious observation in the universe dictates that consciousness must precede quantum wave collapse to its single-bit state).
Of related interest, In the following video, at the 37:00 minute mark, Anton Zeilinger, a leading researcher in quantum teleportation with many breakthroughs under his belt, humorously reflects on just how deeply determinism has been undermined by quantum mechanics by saying such a deep lack of determinism may provide some of us a loop hole when they meet God on judgment day.
Prof Anton Zeilinger speaks on quantum physics. at UCT - video http://www.youtube.com/watch?v=s3ZPWW5NOrw
Personally, I feel that such a deep undermining of determinism by quantum mechanics, far from providing a 'loop hole' on judgement day, actually restores free will to its rightful place in the grand scheme of things, thus making God's final judgments on men's souls all the more fully binding since man truly is a 'free moral agent' as Theism has always maintained. And to solidify this theistic claim for how reality is constructed, the following study came along a few months after I had seen Dr. Zeilinger’s video:
Can quantum theory be improved? - July 23, 2012 Excerpt: Being correct 50% of the time when calling heads or tails on a coin toss won’t impress anyone. So when quantum theory predicts that an entangled particle will reach one of two detectors with just a 50% probability, many physicists have naturally sought better predictions. The predictive power of quantum theory is, in this case, equal to a random guess. Building on nearly a century of investigative work on this topic, a team of physicists has recently performed an experiment whose results show that, despite its imperfections, quantum theory still seems to be the optimal way to predict measurement outcomes., However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (*conscious observation) parameters can be chosen independently (free choice, free will, assumption) of the other parameters of the theory.,,, ,, the experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power, even when the predictions are completely random. http://phys.org/news/2012-07-quantum-theory.html
So just as I had suspected after watching Dr. Zeilinger’s video, it is found that a required assumption of ‘free will’ in quantum mechanics is what necessarily drives the completely random (non-deterministic) aspect of quantum mechanics. Moreover it was shown in the paper that one cannot ever improve the predictive power of quantum mechanics by ever removing free will, or conscious observation, as a starting assumption in Quantum Mechanics! of note: *The act of ‘conscious observation’ in quantum mechanics is equivalent to 'measurement',,
What drives materialists crazy is that consciousness cannot be seen, tasted, smelled, touched, heard, or studied in a laboratory. But how could it be otherwise? Consciousness is the very thing that is DOING the seeing, the tasting, the smelling, etc… We define material objects by their effect upon our senses – how they feel in our hands, how they appear to our eyes. But we know consciousness simply by BEING it! - APM - UD Blogger
Of somewhat related interest, it is interesting to point out where I picked up the notion of 'empirically deprived mathematical fantasy' from: This following quote, in critique of Hawking's book 'The Grand Design', is from Roger Penrose who worked closely with Stephen Hawking in the 1970's and 80's:
'What is referred to as M-theory isn’t even a theory. It’s a collection of ideas, hopes, aspirations. It’s not even a theory and I think the book is a bit misleading in that respect. It gives you the impression that here is this new theory which is going to explain everything. It is nothing of the sort. It is not even a theory and certainly has no observational (evidence),,, I think the book suffers rather more strongly than many (other books). It’s not a uncommon thing in popular descriptions of science to latch onto some idea, particularly things to do with string theory, which have absolutely no support from observations.,,, They are very far from any kind of observational (testability). Yes, they (the ideas of M-theory) are hardly science." – Roger Penrose – former close colleague of Stephen Hawking – in critique of Hawking’s new book ‘The Grand Design’ the exact quote in the following video clip: Roger Penrose Debunks Stephen Hawking's New Book 'The Grand Design' - video http://www.metacafe.com/watch/5278793/
Also of related interest, here is a constraining factor that argues very strongly against the Darwinian notion of gradualism:
Poly-Functional Complexity equals Poly-Constrained Complexity Excerpt: Scientists Map All Mammalian Gene Interactions – August 2010 Excerpt: Mammals, including humans, have roughly 20,000 different genes.,,, They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome. http://www.sciencedaily.com/releases/2010/08/100809142044.htm https://docs.google.com/document/d/1xkW4C7uOE8s98tNx2mzMKmALeV8-348FZNnZmSWY5H8/edit
Music and verse:
Rascal Flatts - "Bless The Broken Road" - Official Music Video http://www.youtube.com/watch?v=kkWGwY5nq7A Romans 13:11 And do this, understanding the present time. The hour has come for you to wake up from your slumber, because our salvation is nearer now than when we first believed.
bornagain77
August 31, 2012 at 03:50 AM PDT
R0bb, you ask:
Are my “senseless” examples mathematically valid or not?
Exactly the right question to ask! In order to establish validity for your mathematics, that it is in the realm of reality and not in the realm of 'empirically deprived mathematical fantasy', I, once again, request that you present real-world empirical evidence to show that functional information can be generated by material processes. Then you can, as far as empirical science is concerned, kill two birds with one stone: 1. you can falsify Dembski, Marks, and company's LCI, and 2. you can falsify Abel and Trevors' null hypothesis for functional information generation:
Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC).,,, Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,, Testable hypotheses about FSC What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. http://www.tbiomed.com/content/2/1/29
It is interesting to note that Dembski and Marks's LCI is a bit more nuanced in its required empirical validation, and/or falsification, than Abel and Trevor's Null Hypothesis is,,,
"LIFE’S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information": Excerpt: Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms. http://evoinfo.org/publications/lifes-conservation-law/
,,, in that Dembski and Marks's LCI requires us, since it does not falsify gradual Darwinian evolution straight out, to ask if physical reality is either materialistic in its basis, as the atheist holds, or is physical reality theistic in its basis, as the theist holds. It forces us to empirically validate, positively or negatively, the primary question that has been at the heart of this debate since the ancient Greeks. And in that most crucial of questions, 'Is physical reality materialistic or theistic in its basis?', modern science, after all these centuries of heated debate between materialists and Theists, has finally shed light on what that answer is:
Quantum Evidence for a Theistic Universe https://docs.google.com/document/d/1agaJIWjPWHs5vtMx5SkpaMPbantoP471k0lNBUXg0Xo/edit Quantum Entanglement – The Failure Of Local Realism - Materialism - Alain Aspect on Einstein, Bohr, Bell - video http://www.metacafe.com/w/4744145
The falsification for local realism (materialism) was recently greatly strengthened:
Physicists close two loopholes while violating local realism - November 2010 Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview. http://www.physorg.com/news/2010-11-physicists-loopholes-violating-local-realism.html
In fact, Quantum Mechanics has now been extended by Anton Zeilinger, and team, to falsify local realism (reductive materialism) without even using quantum entanglement to do it:
‘Quantum Magic’ Without Any ‘Spooky Action at a Distance’ – June 2011 Excerpt: A team of researchers led by Anton Zeilinger at the University of Vienna and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences used a system which does not allow for entanglement, and still found results which cannot be interpreted classically. http://www.sciencedaily.com/releases/2011/06/110624111942.htm
i.e. The materialist's cornerstone postulation, which had been that material particles (atoms) are self sustaining 'eternal' entities, is now shown to be false. i.e. A non-local, beyond space and time, cause must be appealed to in order to explain the continued existence of material particles within physical reality. Materialists simply have no rational solution to appeal to whereas Theists have always maintained that Almighty God, who is transcendent of space and time, is upholding/sustaining all of physical reality in its continued existence.
Revelation 4:11 NIV "You are worthy, our Lord and God, to receive glory and honor and power, for you created all things, and by your will they were created and have their being." "The 'First Mover' is necessary for change occurring at each moment." Michael Egnor - Aquinas’ First Way http://www.evolutionnews.org/2009/09/jerry_coyne_and_aquinas_first.html Not Understanding Nothing – A review of A Universe from Nothing – Edward Feser - June 2012 Excerpt: But Krauss simply can’t see the “difference between arguing in favor of an eternally existing creator versus an eternally existing universe without one.” The difference, as the reader of Aristotle or Aquinas knows, is that the universe changes while the unmoved mover does not, or, as the Neoplatonist can tell you, that the universe is made up of parts while its source is absolutely one; or, as Leibniz could tell you, that the universe is contingent and God absolutely necessary. There is thus a principled reason for regarding God rather than the universe as the terminus of explanation. http://www.firstthings.com/article/2012/05/not-understanding-nothing
Although the preceding evidence from quantum mechanics should be more than enough for any reasonable person to see that the primary claim of materialism (self sustaining atoms) is now rendered completely false, the empirical evidence for a theistic universe, that modern science has finally revealed, certainly goes far deeper than the brief overview I presented:
Centrality of Each Individual Observer In The Universe and Christ’s Very Credible Reconciliation Of General Relativity and Quantum Mechanics Excerpt: I find it extremely interesting, and strange, that quantum mechanics tells us that instantaneous quantum wave collapse to its 'uncertain' 3-D state is centered on each individual observer in the universe, whereas, 4-D space-time cosmology (General Relativity) tells us each 3-D point in the universe is central to the expansion of the universe. These findings of modern science are pretty much exactly what we would expect to see if this universe were indeed created, and sustained, from a higher dimension by a omniscient, omnipotent, omnipresent, eternal Being who knows everything that is happening everywhere in the universe at the same time. These findings certainly seem to go to the very heart of the age old question asked of many parents by their children, “How can God hear everybody’s prayers at the same time?”,,, i.e. Why should the expansion of the universe, or the quantum wave collapse of the entire universe, even care that you or I, or anyone else, should exist? Only Theism offers a rational explanation as to why you or I, or anyone else, should have such undeserved significance in such a vast universe: https://docs.google.com/document/d/17SDgYPHPcrl1XX39EXhaQzk7M0zmANKdYIetpZ-WB5Y/edit?hl=en_US Psalm 33:13-15 The LORD looks from heaven; He sees all the sons of men. From the place of His dwelling He looks on all the inhabitants of the earth; He fashions their hearts individually; He considers all their works.
bornagain77
August 31, 2012 at 03:40 AM PDT
PS: Boltzmann actually simply used W. It is on his tombstone.kairosfocus
August 31, 2012 at 03:03 AM PDT
F/N 2: Earlier, I pointed out that when one searches in a space or samples it, one faces the issue of sampling frame, with potential for bias. In the search context, if one's sampling frame is a type-F, one may drastically improve the conditional probability of finding the target sub-set of space W, T, given sample frame F, on a search-sample of scope s. But also, if the frame is a type-G instead, then one has reduced the conditional probability of successful search given sample frame G, to zero, as T is not in G. I then raised the issue that searching for a sample frame is a major challenge. I should note on a reasonable estimate of that challenge.

W is the population, the set of possible configs here. The possible F's (obviously a frame is non-unique) and G's are obviously sub-sets of W. So, we are looking at the set of possible subsets of W, perhaps less the empty set {} in practical terms, as if one is in fact taking on a search, one will have a frame of some scope. But, for completeness, that empty set would be in, and takes in the cases of no-sample. The power set of a given set of n members, of course, has 2^n members. In the case of the set of possible configs for 500 bits, we are looking at the power set for 2^500 ~ 3.27*10^150. Then, raise 2 to that power: 2^(3.27*10^150). The scope of such a set overwhelmingly, way beyond merely astronomically, dwarfs the original set. To estimate it, observe that log x^n = n * log x. 3.27*10^150 times log 2 ~ 9.85*10^149. That is the LOGARITHM of the number. Going to the actual number, we are talking here of essentially 10 followed by 10^150 zeros, which we could not write out with all the atoms of our observed cosmos, not by a long, long, long shot. Take away 1 for eliminating the empty set, and that is what we are looking at.

So, first and foremost, we should not allow toy examples that do not have anywhere near the relevant threshold scope of challenge on complexity to mislead us into thinking that the search for a successful search strategy -- remember, that boils down to being a framing of the sampling process -- is an easy task. So, absent special information, the blind search for a good frame will be much harder than the direct blind search for the hot zone T in W. So also, if searching blindly by trial and error on W is utterly unlikely to succeed, searching blindly in the power set less 1: (2^W) - 1, will be vastly more unlikely to succeed. And, since -- by virtue of the applicable circumstances that sharply constrain configs to get them to function in relevant ways -- T is small and isolated in W, by far and away most of the search frames in that set will be type-G, not type-F. Consequently, if a framing "magically" transforms the likelihood of search success, the reasonable best explanation for that is that the search framing was intelligently selected on key information. And it is not unreasonable to define a quantity for the impact of that information, on the gap between blind search on W and search on F. Hence the concept and metrics for active information are not unreasonable on the whole, never mind whatever particular defects may be found with specific models and proposed metrics.

One last point. In thermodynamics, it is notorious that for small, toy samples, large fluctuations are quite feasible. But, as the number of particles in a thermodynamic system rises to more realistic levels, the fact that the overwhelming bulk of the distribution of possibilities tends to cluster on a peak utterly dominates behaviour.
So, yes, for toy examples, we can easily enough find large fluctuations from the "average" -- more properly expected, outcome. But once we go up to realistic scale, spontaneous, stochastic behaviour will normally tightly cluster on the bulk of the distribution of possibilities. Or, put another way, not all lotteries are winnable, especially the naturally occurring ones. Those that are advertised all around are very carefully designed to be profitable and winnable as the announcement of a big winner will distract attention from the typical expectation: loss. So, to point to the abstract possibility of fluctuations, especially on toy examples is distractive and strawmannish relative to the real challenge: hitting a tiny target zone T in a huge config space W, usually well beyond 2^500 in scope. As we can easily see, on the scope of resources in our solar system, the possible sample size relative to the scope of possibilities is overwhelmingly unfavourable, leading to the problem of a chance based needle in a haystack blind search exercise on steroids. (Remember, mechanical necessity does not generate high contingency, it is chance or choice that do that.) The result of that challenge is obvious all around us: the successful creation of entities that are functional, complex and dependent on specific config or a cluster of similar configs to function is best explained on design by skilled and knowledgeable intelligence, not blind chance and mechanical necessity. The empirical evidence and the associated needle in haystack or monkeys at keyboards challenges are so overwhelmingly in favour of that point that the real reason for the refusal to accept this as even "obvious," is prior commitment to and/or indoctrination in the ideology that blind chance and necessity moved us from molecules to Mozart. KFkairosfocus
August 31, 2012 at 03:01 AM PDT
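The two headline numbers in the comment above are straightforward to reproduce (a minimal check, nothing more):

from math import log10

n_configs = 2 ** 500                      # number of distinct 500-bit configurations
print(f"|W| = 2^500 ~ {n_configs:.3e}")   # ~3.27e150

# The power set of W has 2^|W| members; its base-10 logarithm is roughly the
# number of decimal digits the full number would need.
print(f"log10(2^|W|) ~ {n_configs * log10(2):.3e}")   # ~9.85e149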
Again just because R0bb can muddle his target set doesn't mean someone else can't come along, take the muddled set and make simple sense out of it. For example instead of defining Ω as the muddled {1, higher than 1}, we would properly define it as {1,2,3,4,5,6}.
As I said in the first sentence you quoted, I'm simply reiterating a point that Dembski used to make often. If I'm wrong about it, then so was Dembski. A probability distribution is mathematically valid iff every probability is between 0 and 1, and the sum of the probabilities is 1. Can you provide definitions for your terms "muddled" and "properly defined"? A distribution is muddled iff __________________________. A distribution is properly defined iff __________________________. Are any of my sample spaces or distributions mathematically invalid?
So this second example goes to my point about R0bb’s first example- that yes, if you get to do whatever you want, no matter how senseless it is, you can seem to violate LCI.
Are my "senseless" examples mathematically valid or not? Do some of them violate the LCI or only "seem" to violate it? If the LCI fails in mathematically valid cases, is it a true law?R0bb
August 30, 2012 at 10:55 PM PDT
Joe:
Any one of the 6 outcomes can be had on any ONE roll of the dice. Not so with your first example.
Pardon my thick skull, but are you referring to my random walk example? If so, what aspect of the random walk are you describing as a roll of the die? Or are you talking about the die example in my second post at TSZ?
Consistently excluding zero-probability outcomes from Ω would yield bizarre results.
So with the dice example I quoted above does that mean we should also include numbers 7 – infinity?
No, it means that if we always exclude zero-probability outcomes from Ω, then in some cases active information will decrease when a search improves. Do you agree? As for including numbers 7 - infinity (by which I assume you mean all integers greater than 6, none of which is actually infinity), is there any reason not to do so, other than inconvenience?
And of course a coin toss would then have more than two outcomes- in zero G.
Same question as above.
Wow R0bb, thanks. That clears up my misunderstanding. If we just do whatever we want we can violate the LCI.
For the LCI to qualify as a law, it has to hold up to any mathematically valid case that we throw at it. "Doing whatever we want", so long as it's mathematically valid, is how we test purported laws. Are any of my examples mathematically invalid? If so, then can you please show me where?R0bb
August 30, 2012 at 10:37 PM PDT
Joe:
There aren’t 5 choices, ever. The item only has two choices, three if it can stay put. You can spew your rhetoric all you want it ain’t ever going to change that fact.
I never said there are 5 choices. I know there are 2. This is a two-dimensional random walk, where every transition goes one of two ways. I said so in the model description and showed it in the model diagram. Let's recap. Your disagreement, as stated in #24, is with my value of 1/5 for the log unscaled endogenous info (I'll call it endogenous probability). You say it should be 1/2 because there are only two choices. As I pointed out in #31, endogenous probability is defined as |T|/|Ω|, not |T|/(number of choices). Do you agree that this is how it's defined? You might be of the opinion that Ω is supposed to be defined such that |Ω| = number of choices, and therefore endogenous probability = |T|/|Ω| = |T|/(number of choices). Is that your position? If that is your position, do you believe that Dembski and Marks always define Ω such that |Ω| = number of choices? If that is not your position, why do you think that the endogenous probability is |T|/(number of choices)?
Perhaps you can tell us which one of Dembski & Marks’ examples your example 1 is copying. The point is I say you pulled your example from your _______ and it has nothing to do with what they are saying.
I'm not copying any of their examples. What would be the point of that? But if you think that endogenous probability = |T|/(number of choices), then reading through their examples will correct that misconception. For example, in section 4.1.1 of the Search for a Search paper, the endogenous probability is 1/16, but the number of choices is 13. How do you reconcile that with your objection to my example? Consider the concept of "Brillouin active information", defined in section III.B of this paper. If endogenous probability is always |T|/(number of choices), then Brillouin active information is always zero. Why would Dembski and Marks define a measure that's always zero? Bottom line: We both agree that there are 2 choices. You claim that the endogenous probability must therefore be 1/2. Why?R0bb
August 30, 2012 at 09:40 PM PDT
Joe, try here: http://htmlhelp.com/reference/html40/entities/symbols.html You'll have to enter the codes manually but they work. ΩChance Ratcliff
August 30, 2012 at 07:44 PM PDT
font face="Symbol" doesn't seem to be supportedJoe
August 30, 2012 at 07:07 PM PDT
test 2- W omega symbolJoe
August 30, 2012 at 07:03 PM PDT
testing- ? from a cut-n-paste from a .doc insert symbol ?Joe
August 30, 2012 at 06:57 PM PDT
F/N: Wiki, on sampling frame vs. population:
In statistics, a sampling frame is the source material or device from which a sample is drawn.[1] It is a list of all those within a population who can be sampled, and may include individuals, households or institutions . . . . In the most straightforward case, such as when dealing with a batch of material from a production run, or using a census, it is possible to identify and measure every single item in the population and to include any one of them in our sample; this is known as direct element sampling.[1] However, in many other cases this is not possible; either because it is cost-prohibitive (reaching every citizen of a country) or impossible (reaching all humans alive).
In short, a population of possibilities is often sampled, and that sample may come from a defined subset that may or may not bias outcomes. In the case of a config space W [Omega will not print right], we may set up a frame F, that contains a zone of interest, T. If it does so, the odds of a sample of size s hitting T in F will be very different from that of s in W. That is simple to see. It may be harder to see that, say, a warmer/colder set of instructions, is such a framing. But obviously, this is telling whether one is trending right or wrong. That is, hill-climbing reframes a search task in ways that make it much easier to hit T. Now, multiply by three factors:
a: s is constrained by accessible resources, in such a way that a blind, random search on W is maximally unlikely to hit T.
b: by suitably reframing to a suitable F, s is now much more likely to hit T.
c: but by reframing to G, s is now even more unlikely to hit T than a blind random search on W, as T is excluded from G.
Now, obviously, moving from W to F is significant. In effect F maps a hot zone that drastically enhances the expected outcome of s. But, that implies that picking your F is itself a result of a higher order challenge. For if T is small and isolated in W, and we pick a frame at random, a type-G is far more likely than a type-F. So, the search for a frame is a highly challenging search task itself. Indeed, in the case of interest, comparable to the search for T in W itself. The easiest way to get a type-F is to use accurate information. For instance, those who search for sunken Spanish treasure fleet ships often spend more time in the Archive of the Indies in Spain than in the field; that is how significant finding the right frame can be. Note also that it is that information that gets us to a type-F search rather than the original type-W one. Indeed, the Dembski-Marks model boils down to measuring the typical improvement provided by advantageous framing. This, by in effect converting the jump in estimated probability in moving frame from W to F into an information metric. (Probabilities are related to information, as standard info theory results show.) That, contrary to dismissive remarks, is reasonable.

The relevance of all this to the debates over FSCO/I is obvious. When we have a functional object that depends for functionality on the correct arrangement of well-matched parts, this object can be mapped in a field of possibilities W, in zones of interest T. One way to reduce this to information is to set up a nodes-arcs specification that WLOG can be converted into a structured set of strings. (AutoCad is used for this all the time, and the DWG file size serves as a good rule of thumb metric of the degree of complexity.) Obviously, not any config of components will work. Just think about trying to put a car engine back together and getting it to work at random, or turning a random configuration of alphanumeric characters back into a functioning computer program. That is where the concept of islands of function comes from.

A simple solar system level threshold for enough complexity to make the isolation of T significant is 500 bits. At that level, the 10^57 atoms of our solar system, across its lifespan of about 10^17 s on the typical timeline, at the fastest rates of chemical reactions, would be able to look at maybe the equivalent of a one-straw sized sample to a cubical hay bale 1,000 light years thick. That is how the frame would be naturally constrained as to scope. Even if such a bale were superposed on the Galaxy, centred on Earth -- about as thick -- a sample at random would (per sampling theory) be overwhelmingly likely to reflect the bulk of the distribution: straw. That is the issue of FSCO/I, and it is why the most credible causal source for it is design. KFkairosfocus
August 30, 2012 at 06:21 PM PDT
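The effect of the sampling frame can be put into rough numbers (all of the sizes below are illustrative assumptions of mine, not figures from the comment): a frame F that contains the target zone T makes a small sample likely to succeed, while a frame G that excludes T makes success impossible, however large the sample.

W_SIZE = 10**9    # whole space of configurations (illustrative)
T_SIZE = 10       # tiny target zone T
F_SIZE = 10**4    # a frame that happens to contain all of T
SAMPLE = 100      # search/sample budget s

def p_hit(target_in_frame, frame_size, samples):
    """Probability that at least one of `samples` uniform draws from the frame lands in T."""
    if target_in_frame == 0:
        return 0.0
    return 1 - (1 - target_in_frame / frame_size) ** samples

print("blind on W:", p_hit(T_SIZE, W_SIZE, SAMPLE))  # ~1e-6, effectively hopeless
print("within F  :", p_hit(T_SIZE, F_SIZE, SAMPLE))  # ~0.1, the framing helps enormously
print("within G  :", p_hit(0,      F_SIZE, SAMPLE))  # 0.0, T is not in G at all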
