
Optimus, replying to KN on ID as ideology, summarises the case for design in the natural world


The following reply by Optimus to KN in the TSZ thread is far too good not to headline as an excellent summary of the case for design as a scientifically legitimate view, not the mere “Creationism in a cheap tuxedo” ideology, motivated and driven by anti-materialism and/or a right-wing, theocratic, culture-war mentality, that objectors commonly ascribe to “Creationism”:

______________

>> KN

It’s central to the ideological glue that holds together “the ID movement” that the following are all conflated: Darwin’s theories; neo-Darwinism; modern evolutionary theory; Epicurean materialistic metaphysics; Enlightenment-inspired secularism. (Maybe I’m missing one or two pieces of the puzzle.) In my judgment, a mind incapable of making the requisite distinctions hardly deserves to be taken seriously.

I think your analysis of the driving force behind ID is way off base. That’s not to say that persons who advocate ID (including myself) aren’t sometimes guilty of sloppy use of language, nor am I making the claim that the modern synthetic theory of evolution is synonymous with materialism or secularism. Having made that acknowledgement, though, it is demonstrably true that (1) metaphysical presuppositions absolutely undergird much of the modern synthetic theory. This is especially true with regard to methodological naturalism (of course, MN is distinct from ontological naturalism, but if, as some claim, science describes the whole of reality, then reality becomes coextensive with that which is natural). Methodological naturalism is not the end product of some experiment or series of experiments. On the contrary, it is a ground rule that excludes a priori any explanation that might be classed as “non-natural”. Some would argue that it is necessary for practical reasons; after all, we don’t want people attributing seasonal thunderstorms to Thor, do we? However, science could get along just as well as at present (even better, in my view) if the ground rule were simply that any proposed causal explanation must be rigorously defined and that it shall not be accepted except in light of compelling evidence. Problem solved! Though some fear “supernatural explanation” (which is highly definitional) overwhelming the sciences, such concerns are frequently oversold. Interestingly, the much-maligned Michael Behe makes very much the same point in his 1996 Darwin’s Black Box:

If my graduate student came into my office and said that the angel of death killed her bacterial culture, I would be disinclined to believe her…. Science has learned over the past half millennium that the universe operates with great regularity the great majority of the time, and that simple laws and predictable behavior explain most physical phenomena.
Darwin’s Black Box pg. 241

If Behe’s expression is representative of the ID community (which I would venture it is), then why the death-grip on methodological naturalism? I suggest that its power lies in its exclusionary function. It rules out ID right from the start, before even any discussions about the empirical data are to be had. MN means that ID is persona non grata, thus some sort of evolutionary explanation must win by default. (2) In Darwin’s own arguments in favor of his theory he relies heavily on metaphysical assumptions about what God would or wouldn’t do. Effectively he uses special creation by a deity as his null hypothesis, casting his theory as the explanatory alternative. Thus the adversarial relationship between Darwin (whose ideas are foundational to the MST) and theism is baked right into The Origin. To this very day, “bad design” arguments in favor of evolution still employ theological reasoning. (3) The modern synthetic theory is often used in the public debate as a prop for materialism (which I believe you acknowledged in another comment). How many times have we heard the famed Richard Dawkins quote to the effect that ‘Darwin made it possible to be an intellectually fulfilled atheist’? Very frequently evolutionary theory is impressed into service to show the superfluousness of theism or to explain away religion as an erstwhile useful phenomenon produced by natural selection (or something to that effect). Hardly can it be ignored that the most enthusiastic boosters of evolutionary theory tend to fall on the atheist/materialist/reductionist side of the spectrum (e.g. Eugenie Scott, Michael Shermer, P.Z. Myers, Jerry Coyne, Richard Dawkins, Sam Harris, Peter Atkins, Daniel Dennett, Will Provine). My point, simply stated, is that it is not at all wrong-headed to draw a connection between the modern synthetic theory and the aforementioned class of metaphysical views. Can it be said that the modern synthetic theory (am I allowed just to write Neo-Darwinism for short?) doesn’t mandate nontheistic metaphysics? Sure. But it’s just as true that they often accompany each other.

In chalking up ID to a massive attack of confused cognition, you overlook the substantive reasons why many (including a number of PhD scientists) consider ID to be a cogent explanation of many features of our universe (especially the biosphere):

-Functionally-specified complex information [FSCI] present in cells in prodigious quantities
-Sophisticated mechanical systems at both the micro and macro level in organisms (many of which exhibit irreducible complexity, IC)
-Fine-tuning of fundamental constants
-Patterns of stasis followed by abrupt appearance (geologically speaking) in the fossil record

In my opinion the presence of FSCI/O and complex biological machinery are very powerful indicators of intelligent agency, judging from our uniform and repeated experience. Also note that none of the above reasons employ theological presuppositions. They flow naturally, inexorably from the data. And, yes, we are all familiar with the objection that organisms are distinct from artificial objects, the implication being that our knowledge from the domain of man-made objects doesn’t carry over to biology. I think this is fallacious. Everyone acknowledges that matter inhabiting this universe is made up of atoms, which in turn are composed of still other particles. This is true of all matter, not just “natural” things, not just “artificial” things – everything. If such is the case, then must not the same laws apply to all matter with equal force? Whence comes the false dichotomy between “natural” and “artificial”? If design can be discerned in one case, why not in the other?

To this point we have not even addressed the shortcomings of the modern synthetic theory (excepting only its metaphysical moorings). They are manifold, however: evidential shortcomings (e.g. lack of empirical support), unjustified extrapolations, question-begging assumptions, ad hoc rationalizations, tolerance of “just so” stories, narratives imposed on data instead of gleaned from data, conflict with empirical data from generations of human experience with breeding, etc. If at the end of the day you truly believe that all ID has going for it is a culture war mentality, then may I politely suggest that you haven’t been paying attention.>>

______________

Well worth reflecting on, and Optimus deserves to be headlined. END

Comments
To solidify the claim that,,,
,,,if Theism is not held as unconditionally true prior to scientific investigation then nothing else can ever be held as unconditionally true there afterwards!,,,
Besides niwrad's "Comprehensibility of the world" post which I've already referenced, and Alvin Plantinga's Evolutionary Argument Against Naturalism (EAAN) which I also just referenced, I would like to offer Dr. Torley's post from February, in which Dr. Torley, in his usual meticulous style, searched high and low for a basis of rationality within Darwinism and found none,,,
Macroevolution, microevolution and chemistry: the devil is in the details – Dr. V. J. Torley – February 27, 2013 Excerpt: After all, mathematics, scientific laws and observed processes are supposed to form the basis of all scientific explanation. If none of these provides support for Darwinian macroevolution, then why on earth should we accept it? Indeed, why does macroevolution belong in the province of science at all, if its scientific basis cannot be demonstrated? https://uncommondesc.wpengine.com/intelligent-design/macroevolution-microevolution-and-chemistry-the-devil-is-in-the-details/
I was particularly struck by Dr. Torley's finding of the lack of any rigid mathematical basis in neo-Darwinism so as to make concrete predictions. This is a particularly troubling 'scientific' dilemma, as is highlighted in this quote from Dr. Berlinski that Mr. Arrington highlighted on March 6, shortly after Dr. Torley's 'devil is in the details' post:
“On the other hand, I disagree that Darwin’s theory is as ‘solid as any explanation in science.’ Disagree? I regard the claim as preposterous. Quantum electrodynamics is accurate to thirteen or so decimal places; so, too, general relativity. A leaf trembling in the wrong way would suffice to shatter either theory. What can Darwinian theory offer in comparison?” (Berlinski, D., “A Scientific Scandal?: David Berlinski & Critics,” Commentary, July 8, 2003)
And then on March 24 niwrad posted "The Equations Of Evolution" in which I came to the realization that,,,
,,"neo-Darwinism can have no mathematical basis because of the atheistic insistence for the ‘random’ variable postulate at the base of its formulation (which prevents any ‘mathematical certitude’ from ever being achieved)" https://uncommondesc.wpengine.com/evolution/the-equations-of-evolution/#comment-450540
I go on in that entry to point out that the 'random variable postulate', that Atheists absolutely insist on using as a 'Designer substitute' (so as to be 'scientific' in their minds), is what in fact drives their preferred materialistic version of 'science' into irreconcilable epistemological failure. This epistemological failure is driven home not only in Plantinga's EAAN but also at the beginning of the universe with 'Boltzmann's Brain' in which it is found that on materialism,,
,,it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history.
Music and Verse:
Where The Spirit Of The Lord Is - Chris Tomlin , Christy Nockels , Nathan Nockels http://worshiptogether.com/songs/songdetail.aspx?iid=1794631 Job 12:13 "But true wisdom and power are found in God; counsel and understanding are his.
bornagain77
LT: I was just making sure there would be no continuing misrepresentation, given a widespread and pernicious false narrative on the history of the rise of science that needs correction. KF kairosfocus
One of those neat 'coincidental surprises': video - In Two Minutes or Less: Plantinga on Naturalistic Evolution as a Self-Defeating Proposition April 5, 2013 http://www.evolutionnews.org/2013/04/in_two_minutes070881.html bornagain77
I particularly liked the dictum of David Ben-Gurion: 'Anyone who doesn't believe in miracles is not a realist.' If his name was unfamiliar, you could be forgiven for thinking he was probably a molecular biologist or nuclear physicist. At least, in the earlier age of the giants of relativity, quantum physics and maths, of the last century, before the rise to power of the corporate-driven dirt-worshippers, who couldn't shift a paradigm with a forklift. Axel
The continuum of faith and reason is given a mighty impetus by the book of Mary Read, the orthopaedic surgeon, who experienced a remarkable 'posse' of miracles, in relation to the accident and subsequent NDE she experienced, while kayaking in Chile. Better than any of the videos, much as I love finding and watching the best of them. Axel
bornagain77, kairosfocus, You are both right, of course. While science should not smuggle foreign philosophical concepts into its methodology, it owes its rationality to reason's rules (Philosophy) and its existence to the Biblical teaching that God created a rational universe ripe for discovery (Theology)--and, at a deeper level still, to a philosophical/theological truth arrived at a few centuries earlier: faith and reason are compatible and mutually reinforcing. StephenB
Hi Larry! One of your skeptic ink buddies owes me money. Andy Schueler is blathering on about nested hierarchies. I told him that if all the transitional forms still existed we wouldn’t have a strict, objective nested hierarchy. He called me a moron. So to support my claim I offered:
Extinction has only defined the groups: it has by no means made them; for if every form which has ever lived on this earth were suddenly to reappear, though it would be quite impossible to give definitions by which each group could be distinguished, still a natural classification, or at least a natural arrangement, would be possible.- Charles Darwin, On the Origin of Species, chapter 14
Denton agrees with me:
There is another stringent condition which must be satisfied if a hierarchic pattern is to result as the end product of an evolutionary process: no ancestral forms can be permitted to survive. This can be seen by examining the tree diagram on page 135. If any of the ancestors X, Y, or Z, or if any of the hypothetical transitional connecting species stationed on the main branches of the tree, had survived and had therefore to be included in the classification scheme, the distinctness of the divisions would be blurred by intermediate or partially inclusive classes and what remained of the hierarchic pattern would be highly disordered.- Denton, “Evolution: A Theory in Crisis” page 136 (X, Y and Z are hypothetical parental node populations)
We have a $10,000 bet on who knows more about nested hierarchies. He will never pay me though. What do you think about that? Joe
KF- I have not said one word to the effect that Christianity retarded or obstructed the development of science. LarTanner
PS: A really weird captcha game popped up just now. kairosfocus
LT: Allow me to draw your attention to Nancy Pearcey's thoughts summarised here. Let me clip: ____________ >> Christianity Is a Science-Starter, Not a Science-Stopper By Nancy Pearcey [ . . . . ] Most historians today agree that the main impact Christianity had on the origin and development of modern science was positive. Far from being a science stopper, it is a science starter. One reason this dramatic turn-around has not yet filtered down to the public is that the history of science is still quite a young field. Only fifty years ago, it was not even an independent discipline. Over the past few decades, however, it has blossomed dramatically, and in the process, many of the old myths and stereotypes that we grew up with have been toppled. Today the majority view is that Christianity provided many of the crucial motivations and philosophical assumptions necessary for the rise of modern science.[6] In one sense, this should come as no surprise. After all, modern science arose in one place and one time only: It arose out of medieval Europe, during a period when its intellectual life was thoroughly permeated with a Christian worldview. Other great cultures, such as the Chinese and the Indian, often developed a higher level of technology and engineering. But their expertise tended to consist of practical know-how and rules of thumb. They did not develop what we know as experimental science–testable theories organized into coherent systems. Science in this sense has appeared only once in history. As historian Edward Grant writes, “It is indisputable that modern science emerged in the seventeenth century in Western Europe and nowhere else.”[7]. . . . The church fathers taught that the material world came from the hand of a good Creator, and was thus essentially good. The result is described by a British philosopher of science, Mary Hesse: “There has never been room in the Hebrew or Christian tradition for the idea that the material world is something to be escaped from, and that work in it is degrading.” Instead, “Material things are to be used to the glory of God and for the good of man.”[19] Kepler is, once again, a good example. When he discovered the third law of planetary motion (the orbital period squared is proportional to semi-major axis cubed, or P^2 = a^3), this was for him “an astounding confirmation of a geometer god worthy of worship. He confessed to being ‘carried away by unutterable rapture at the divine spectacle of heavenly harmony’.”[20] In the biblical worldview, scientific investigation of nature became both a calling and an obligation. As historian John Hedley Brooke explains, the early scientists “would often argue that God had revealed himself in two books—the book of His words (the Bible) and the book of His works (nature). As one was under obligation to study the former, so too there was an obligation to study the latter.”[21] The rise of modern science cannot be explained apart from the Christian view of nature as good and worthy of study, which led the early scientists to regard their work as obedience to the cultural mandate to “till the garden”. . . . Today the majority of historians of science agree with this positive assessment of the impact the Christian worldview had on the rise of science. Yet even highly educated people remain ignorant of this fact. Why is that?
The answer is that history was founded as a modern discipline by Enlightenment figures such as Voltaire, Gibbon, and Hume who had a very specific agenda: They wanted to discredit Christianity while promoting rationalism. And they did it by painting the middle ages as the “Dark Ages,” a time of ignorance and superstition. They crafted a heroic saga in which modern science had to battle fierce opposition and oppression from Church authorities. Among professional historians, these early accounts are no longer considered reliable sources. Yet they set the tone for the way history books have been written ever since. The history of science is often cast as a secular morality tale of enlightenment and progress against the dark forces of religion and superstition. Stark puts it in particularly strong terms: “The ‘Enlightenment’ [was] conceived initially as a propaganda ploy by militant atheists and humanists who attempted to claim credit for the rise of science.”[22] Stark’s comments express a tone of moral outrage that such bad history continues to be perpetuated, even in academic circles. He himself published an early paper quoting the standards texts, depicting the relationship between Christianity and science as one of constant “warfare.” He now seems chagrined to learn that, even back then, those stereotypes had already been discarded by professional historians.[23] Today the warfare image has become a useful tool for politicians and media elites eager to press forward with a secularist agenda . . . [The whole article is well worth the read, here.]>> Nancy Pearcey, author of Total Truth, is editor at large of The Pearcey Report and the Francis A. Schaeffer Scholar at World Journalism Institute. This article appears, with minor changes, in Areopagus Journal 5:1 (January-February 2005): pp. 4-9 (www.apologeticsresctr.org). Copyright © Nancy Pearcey. >> ____________ There are a few secularist myths concerning the roots and nature of science that need to be popped, so that we can see a bit more clearly and without a lot of the silly "warfare" baggage that dates to particularly bad reporting of history from C18 and 19. KF kairosfocus
that ‘science’ would have never gotten off the ground without ‘improperly’ injecting the Theistic philosophy into science.
Having a class of people with sufficient time, education, and inclination to devote themselves to performing scientific activities was also important to getting science off the ground. See Aristotle's Physics:
When the objects of an inquiry, in any department, have principles, conditions, or elements, it is through acquaintance with these that knowledge, that is to say scientific knowledge, is attained. For we do not think that we know a thing until we are acquainted with its primary conditions or first principles, and have carried our analysis as far as its simplest elements. Plainly therefore in the science of Nature, as in other branches of study, our first task will be to try to determine what relates to its principles. The natural way of doing this is to start from the things which are more knowable and obvious to us and proceed towards those which are clearer and more knowable by nature; for the same things are not 'knowable relatively to us' and 'knowable' without qualification. So in the present inquiry we must follow this method and advance from what is more obscure by nature, but clearer to us, towards what is more clear and more knowable by nature. Now what is to us plain and obvious at first is rather confused masses, the elements and principles of which become known to us later by analysis. Thus we must advance from generalities to particulars; for it is a whole that is best known to sense-perception, and a generality is a kind of whole, comprehending many things within it, like parts. Much the same thing happens in the relation of the name to the formula. A name, e.g. 'round', means vaguely a sort of whole: its definition analyses this into its particular senses. Similarly a child begins by calling all men 'father', and all women 'mother', but later on distinguishes each of them.
See also Epicurus, who saw the study of nature as driven by the desire to banish fear of the world and to increase happiness:
10. If the objects which are productive of pleasures to profligate persons really freed them from fears of the mind, -- the fears, I mean, inspired by celestial and atmospheric phenomena, the fear of death, the fear of pain; if, further, they taught them to limit their desires, we should never have any fault to find with such persons, for they would then be filled with pleasures to overflowing on all sides and would be exempt from all pain, whether of body or mind, that is, from all evil. 11. If we had never been molested by alarms at celestial and atmospheric phenomena, nor by the misgiving that death somehow affects us, nor by neglect of the proper limits of pains and desires, we should have had no need to study natural science. 12. It would be impossible to banish fear on matters of the highest importance, if a person did not know the nature of the whole universe, but lived in dread of what the legends tell us. Hence without the study of nature there was no enjoyment of unmixed pleasures.
LarTanner
Nightlight - As everybody at UD knows, y'all are welcome to join in the discussion at The Skeptical Zone. You, Nightlight, are personally invited by Lizzie, and I am extending this invitation on her behalf (since she has long since been banned by the authorities at UD and can't post here). Of course, Lizzie has also, many times, invited Kairosfocus to join the discussion on an open forum, and that invitation still stands. hotshoe
Looked at from another angle, one could rightly argue, as niwrad did so eloquently yesterday,,,
Comprehensibility of the world Excerpt: ,,,Bottom line: without an absolute Truth, (there would be) no logic, no mathematics, no beings, no knowledge by beings, no science, no comprehensibility of the world whatsoever. https://uncommondesc.wpengine.com/mathematics/comprehensibility-of-the-world/
,,, that 'science' would have never gotten off the ground without 'improperly' injecting the Theistic philosophy into science. Sure, science is dependent on empirics for validating various competing 'interpretations' of the Theistic philosophy that science is dependent on to be rationally practiced, but we must never forget that unless Theism is held as unconditionally true throughout investigation then the entire enterprise of science winds up in epistemological failure. It is not that Theists are demanding that Theism is the only answer allowed to be considered true prior to investigation, as atheists demand with their artificial imposition of methodological naturalism; it is that if Theism is not held as true prior to investigation then nothing else can be held as true afterwards! Notes:
Epistemology – Why Should The Human Mind Even Be Able To Comprehend Reality? – Stephen Meyer - video – (Notes in description) http://vimeo.com/32145998 The Heretic - Who is Thomas Nagel and why are so many of his fellow academics condemning him? - March 25, 2013 Excerpt: Neo-Darwinism insists that every phenomenon, every species, every trait of every species, is the consequence of random chance, as natural selection requires. And yet, Nagel says, “certain things are so remarkable that they have to be explained as non-accidental if we are to pretend to a real understanding of the world.” Among these remarkable, nonaccidental things are many of the features of the manifest image. Consciousness itself, for example: You can’t explain consciousness in evolutionary terms, Nagel says, without undermining the explanation itself. Evolution easily accounts for rudimentary kinds of awareness. Hundreds of thousands of years ago on the African savannah, where the earliest humans evolved the unique characteristics of our species, the ability to sense danger or to read signals from a potential mate would clearly help an organism survive. So far, so good. But the human brain can do much more than this. It can perform calculus, hypothesize metaphysics, compose music—even develop a theory of evolution. None of these higher capacities has any evident survival value, certainly not hundreds of thousands of years ago when the chief aim of mental life was to avoid getting eaten. Could our brain have developed and sustained such nonadaptive abilities by the trial and error of natural selection, as neo-Darwinism insists? It’s possible, but the odds, Nagel says, are “vanishingly small.” If Nagel is right, the materialist is in a pickle. The conscious brain that is able to come up with neo-Darwinism as a universal explanation simultaneously makes neo-Darwinism, as a universal explanation, exceedingly unlikely.,,, ,,,Fortunately, materialism is never translated into life as it’s lived. As colleagues and friends, husbands and mothers, wives and fathers, sons and daughters, materialists never put their money where their mouth is. Nobody thinks his daughter is just molecules in motion and nothing but; nobody thinks the Holocaust was evil, but only in a relative, provisional sense. A materialist who lived his life according to his professed convictions—understanding himself to have no moral agency at all, seeing his friends and enemies and family as genetically determined robots—wouldn’t just be a materialist: He’d be a psychopath. http://www.weeklystandard.com/articles/heretic_707692.html?page=3 Design Thinking Is Hardwired in the Human Brain. How Come? - October 17, 2012 Excerpt: "Even Professional Scientists Are Compelled to See Purpose in Nature, Psychologists Find." The article describes a test by Boston University's psychology department, in which researchers found that "despite years of scientific training, even professional chemists, geologists, and physicists from major universities such as Harvard, MIT, and Yale cannot escape a deep-seated belief that natural phenomena exist for a purpose" ,,, Most interesting, though, are the questions begged by this research. One is whether it is even possible to purge teleology from explanation. http://www.evolutionnews.org/2012/10/design_thinking065381.html
bornagain77
F/N: I see SB has remarked. I will just add, that when people load materialism into science, they are improperly injecting philosophy. Science should be driven by empirical evidence, not materialist ideological a prioris or a more or less imposed "consensus." KF kairosfocus
WJM, I was responding to nightlight's application of the term "philosophical narrative." The event that prompted the exchange was his claim that Stephen Meyer is a sloppy thinker and writer on the grounds that he uncritically interchanges the philosophical concept of "mind" with the scientific construct of "intelligent agent," contaminating the scientific hypothesis. StephenB
WJM: Pardon an interjection (as I don't know when SB will pass by again), but it seems to me that a scientific hyp fits into the more or less plain vanilla, generally accepted and widely used framework of observations and pattern detection, abductive inference as to a candidate best simple explanation, predictions of future discoveries, testing based on experiment or observation studies, and the like techniques. On such, provisional general explanatory frameworks -- models, theories, etc -- can be built and are recognised as provisional but so far empirically reliable. When the discussion shifts to challenging that framework and/or the question of world view level a prioris being injected, the issues are now in phil of sci and possibly general phil. On that basis, intelligent designers who make contrivances showing choice contingency towards functionality of systems, are empirically observed entities. But, debating on how such come to have intelligence and what intelligence and mind are as "stuff" or for that matter what matter is as stuff, increasingly shifts into worldview and epistemological considerations. A test is, that because of the canon of empirical observability, the discussion is in principle relatively independent of the worldview brought to the table. KF kairosfocus
StephenB:
I think that each of us, kairosfocus first, agrees that ID should not offer a philosophical narrative in the place of a scientific hypothesis.
Just so I understand you here, how are you differentiating a "philosophical narrative" from a "scientific hypothesis", since, as far as I can tell, any scientific hypothesis must also be part of a philosophical narrative? William J Murray
F/N: intelligence [ɪnˈtɛlɪdʒəns] n 1. (Psychology) the capacity for understanding; ability to perceive and comprehend meaning 2. good mental capacity a person of intelligence 3. Old-fashioned news; information 4. (Military) military information about enemies, spies, etc. 5. (Military) a group or department that gathers or deals with such information 6. (often capital) an intelligent being, esp one that is not embodied 7. (Military) (modifier) of or relating to intelligence an intelligence network [from Latin intellegentia, from intellegere to discern, comprehend, literally: choose between, from inter- + legere to choose] intelligential adj Collins English Dictionary – Complete and Unabridged © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003 --> I see no good reason to infer that biochemical networks etc have intelligence, or are capable of choice contingency towards a purpose. At best they may be programmed. (Recall, a computer has no intelligence of its own, it is programmed.) KF kairosfocus
F/N: For a basic and uncontroversial case, consider signal to noise power ratio in communications. Intelligent signals have known expected and/or observed characteristics, and noise has reasonably known characteristics shaped by stochastic processes. So one may measure and distinguish the two theoretically and practically. The ratio, S/N, is a major quality metric in comms systems. One, that rests on a design inference, right in its heart. KF PS: This is an example I have long used, and noted on in discussions, including the briefing note linked through my handle. kairosfocus
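To make the S/N illustration concrete, here is a minimal sketch in Python (assuming NumPy is available; the 1 kHz tone, 48 kHz sample rate and noise level are invented for illustration and are not taken from KF's comment) of how the ratio is estimated when the clean signal and the noise can be separated:

import numpy as np

# Toy "intelligent signal": a 1 kHz sine wave sampled at 48 kHz for one second.
fs = 48_000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 1_000 * t)

# Noise with known stochastic characteristics: zero-mean Gaussian.
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.3, size=signal.shape)
received = signal + noise  # what a receiver would actually observe

# S/N is the ratio of average signal power to average noise power,
# conventionally quoted in decibels: SNR_dB = 10 * log10(P_signal / P_noise).
p_signal = np.mean(signal ** 2)
p_noise = np.mean(noise ** 2)
snr_db = 10 * np.log10(p_signal / p_noise)
print(f"SNR = {snr_db:.1f} dB")

In a real receiver the clean signal is not available separately, so the noise power has to be estimated statistically from its known characteristics; that is the point being leaned on above: the two components can be told apart because each has a known, expected signature.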
NL:
natural science can’t do anything to start research on “intelligent cause” since that’s not an object that natural science can recognize as any part of its models. ‘Intelligence’ and ’cause’ are only an attribute and a role (in causal chain of biological artifacts) of something, but neither is an object that can be researched without hypothesis as to what might have that attribute and play that role.
As just pointed out, intelligent designers are a fact of life all around us. The subject for design theory is not designers but candidate designs and the features thereof that may potentially reliably indicate the type of cause acting. That is, a fundamentally origins/"historical" science oriented investigation along the lines pioneered by Lyell and Darwin etc. That is, to reconstruct the credible past -- unobservable -- causal roots of phenomena we can see. To do so, the process is to identify acting causes in the present and their effects, thence testable, reliable signs pointing to the processes. Just as we seek to reconstruct the life cycle of a star on the model of what happens with a ball of hydrogen rich gas of sufficient scale, and compare with the HR diagram based on observations [e.g. reconstructing as a model the life of the sun, or explaining branching off to the Giants bands for clusters], we can seek to ask, what happens when a designer acts, and what traces are commonly left. Then we can note things such as FSCO/I and see that by intentional choice towards functional purpose, designers often create FSCO/I. Such as with posts in this thread -- this isn't rocket science! On testing and analysis -- needle in haystack -- we can see that FSCO/I is a good sign of design as cause, with literally billions of test cases. then, we look in the heart of the living cell and behold: DNA with complex, functionally specific digital code that makes proteins, with the help of an organised cluster of molecular nanomachines. A simple exploration has now gone right to the heart of the world of life, with startling implications. Sort of like an apple falling from a tree on a farm in Lincolnshire, c. 1664, while the crescent moon swings by in orbit. As in, a simple connexion, with startling consequences. KF kairosfocus
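For readers who want to see the arithmetic behind the "needle in a haystack" framing, here is a minimal Python sketch. The numbers (a 150-amino-acid protein, a 300-nucleotide DNA segment, a 500-bit threshold) are common illustrations in the design literature rather than anything stated in this thread, and the calculation assumes a single, exactly specified target found by uniform blind search, which is itself one of the disputed modelling choices:

import math

def bits_to_specify(length, alphabet_size):
    # Information required to pick out one exact sequence of the given length.
    return length * math.log2(alphabet_size)

protein_bits = bits_to_specify(150, 20)   # hypothetical 150-amino-acid protein
dna_bits = bits_to_specify(300, 4)        # hypothetical 300-nucleotide DNA segment

THRESHOLD = 500  # bits; the rough "available search resources" bound often cited

for label, bits in (("150-aa protein", protein_bits), ("300-nt DNA segment", dna_bits)):
    odds = 2.0 ** bits  # one chance in this many for a single blind draw
    print(f"{label}: {bits:.0f} bits, about 1 in {odds:.1e}; "
          f"over the {THRESHOLD}-bit threshold: {bits > THRESHOLD}")

The sketch only shows the counting; whether a single exact target is the right model for a functional sequence family is exactly the kind of question the rest of this exchange argues over.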
NL: It seems to me that the inductively arrived at conclusion that there are certain empirically observable signs (such as FSCO/I) that -- on inductive investigation -- reliably point to the material role of design as causal process is sufficiently scientific to be a basis for investigations. And there is a growing body of peer reviewed work on such. Where the study of the methodology of science (which routinely crops up in cases of debate) is inherently a matter of logic and epistemology, showing how philosophical matters are inextricable from sciences once we ask hard questions. But, we can be confident that something like FSCO/I can be explored in light of the empirical fact of intelligent designers and the traces they often leave. I would believe it is a fair and reasonable step in science to seek to study such on empirical terms. KF kairosfocus
nightlight
My critique (a) which was used until this morning, is under assumption that ID has a “scientific hypothesis” about the nature of “intelligent cause”.
I am pleased to hear that you did not understand ID methodology since that would indicate that you were not consciously misrepresenting the facts. Indeed, ID does not hypothesize about the nature of the intelligent cause. This is fundamental to ID's minimalist approach, which is based on abductive reasoning--an inference to the best of two or more competing explanations.
So critique (a) argues that “intelligent mind” which appeared to play that role, cannot be a valid scientific hypothesis in natural science since there is no counterpart for “mind” or “consciousness” in present natural science (that may be a gap in the present science, but that is what it is now).
Since ID does not, at least for now, hypothesize an "intelligent mind," your comment does not seem relevant. It seems as if we have been down this road before.
There is essentially only one objection to KF’s defense which is: ID cannot get away without making a proper scientific hypothesis, by mere offering of philosophical narrative as a substitute (or for whatever other purpose).
I think that each of us, kairosfocus first, agrees that ID should not offer a philosophical narrative in the place of a scientific hypothesis.
Namely, natural science can’t do anything to start research on “intelligent cause” since that’s not an object that natural science can recognize as any part of its models.
It is always a mistake to make dogmatic statements from an incomplete knowledge base.
‘Intelligence’ and ’cause’ are only an attribute and a role (in causal chain of biological artifacts) of something, but neither is an object that can be researched without hypothesis as to what might have that attribute and play that role.
Try to appreciate the fact that you are not yet familiar enough with ID methodology to hold court on the matter. Among other things, it would help if you could learn something about the methods of historical science, the meaning of causal adequacy, and the nature of abductive logic. StephenB
StephenB, #259, #260: Well, it seems that we have added yet another incompatible piece to the puzzle: [a] Mind is part of the ID hypothesis, which disqualifies it as a scientific enterprise [b] Mind is not really a part of the ID hypothesis, but it is a philosophical add on that shouldn't be there. [c] Even if Mind is not part of the ID hypothesis, it doesn't matter since ID fails to explain the nature of the designer. There is no contradiction, since those conclusions arise from different starting assumptions. My critique (a) which was used until this morning, is under assumption that ID has a "scientific hypothesis" about the nature of "intelligent cause". So critique (a) argues that "intelligent mind" which appeared to play that role, cannot be a valid scientific hypothesis in natural science since there is no counterpart for "mind" or "consciousness" in present natural science (that may be a gap in the present science, but that is what it is now). This morning KF suggested that "mind" was only a part of general philosophical discussion surrounding the subject, not a scientific hypothesis. Since I didn't state (b) and (c) which is your rephrasing of my quotes, I'll defend below what I said. There is essentially only one objection to KF's defense which is: ID cannot get away without making a proper scientific hypothesis, by mere offering of philosophical narrative as a substitute (or for whatever other purpose). Namely, natural science can't do anything to start research on "intelligent cause" since that's not an object that natural science can recognize as any part of its models. 'Intelligence' and 'cause' are only an attribute and a role (in causal chain of biological artifacts) of something, but neither is an object that can be researched without hypothesis as to what might have that attribute and play that role. As far as present natural science knows, only humans can be characterized with an attribute and a role "intelligent cause" (perhaps some animals, too), but humans (or animals) obviously cannot be hypothesized to be the ID's "intelligent cause" since the cause has to precede its effects (humans and animals). Hence some other scientific hypothesis needs to be made, so science can do something constructive with it. Mind or consciousness won't do, since they don't have a counterpart in present natural science. Natural science is not going to absorb sterile elements that it can't do anything with. The only scientist involved in this debate I have seen trying to fill in the missing hypothesis is James Shapiro, who is suggesting that cellular biochemical networks might be the source of the intelligence behind the evolutionary innovations. That still leaves the origin of life and fine tuning as open questions (which is where Planckian networks were aimed at), but at least he provides a scientifically legitimate hypothesis that can be followed up. nightlight
NL: Re:
ID cannot get away with offering philosophy or religion as a substitute for a legitimate scientific hypothesis as to what “intelligent cause” might be and how it can be researched. Natural science is just not going to walk over that edge without seeing the next foothold, the falsifiable scientific hypothesis about the “intelligent cause”.
1 --> What part of the argument I summarised here today constitutes anything beyond empirically warranted inductive inferences on observable and often measurable phenomena? 2 --> Given that we observe and experience intelligent causes in action, and given that we have placed on the table specific, observable phenomena as proposed reliable signs [on billions of test cases], how can you suggest that you do not know what an intelligent cause is? Can it and its suggested signs not be investigated on the exact same inductive criteria that lie at the heart of science? 3 --> What part of the claim that FSCI (given the just linked discussion) is an empirically reliable sign of design as cause is not subject to empirical test and potential falsification? 4 --> As in, is it in principle impossible to show a counter example to the inductive generalisation? If not, just why? KF kairosfocus
nightlight
As explained in post #250, this is not a strawman argument, but an observation that the ID refusal to provide a scientifically legitimate hypothesis about the nature of the “intelligent cause”, and offering instead the ontological mind-matter debate, is as unwise a strategy as if Spartans had picked to battle Persians on the widest plains they could find.
Well, it seems that we have added yet another incompatible piece to the puzzle: [a] Mind is part of the ID hypothesis, which disqualifies it as a scientific enterprise [b] Mind is not really a part of the ID hypothesis, but it is a philosophical add on that shouldn't be there. [c] Even if Mind is not part of the ID hypothesis, it doesn't matter since ID fails to explain the nature of the designer. This is all very entertaining, but it certainly tugs away at my perception of what it means to have a rational discussion. StephenB
nightlight
Not in those words....,
The difficulty, it seems to me, is what appears to be your two contradictory arguments. On the one hand, you argue (falsely) that "mind" is a part of the ID hypothesis
It is what is attached to that link (“intelligent mind” or “mental agency”) that is vacuous as a hypothesis within the present natural science.
On the other hand, you also argue that "mind" is an extraneous philosophical add on to the hypothesis that comes from sloppy writing and careless public communication.
ID cannot get away with offering philosophy or religion as a substitute for legitimate scientific hypothesis
Do you grasp the problem? Mind is either part of the ID hypothesis or it is not. If you could affirm one position and negate the other, there might be some potential for a rational discussion. StephenB
PS: Such a hyp has been on the table all along, indeed we could have started from what Crick said in 1953. All they have done is play tricks to duck it and distort it to change the subject to play dirty politics. Try here, just today for a summary. kairosfocus
NL: If they hold the institutional keys, and are so ruthless as we have seen, then they are not going to be troubled over mere niceties of duties of care to correct terminology, or truth, or to fairness, etc. When we don't line up with their gotcha tactics, they will make up stuff, like in how I am supposedly a Nazi. And that is coming from the ilk who are playing outing tactic games and have harboured a man who has threatened my family. We just have to make sure that we operate on a sound basis, and then expose the tricks and nasty power games. KF kairosfocus
kairosfocus 254: Are you unaware that a priori ideological materialists have set out to redefine science on their metaphysical assumptions, and are busily censoring and expelling those who do not go along with such tactics? I am not saying they are playing fair. But they are the guys holding the keys, and ID has got to offer what they are looking for, a legitimate scientific hypothesis about "intelligent cause". There is no way around it via philosophy since they are not going to give you a half-written signed check for you to fill in the amount. nightlight
StephenB #252: No, that is not what you have been saying. Your argument has been that in the scientific context alone, ID injects "mind" into its methodology. Not in those words, since my focus was on following up with next steps. As explained in post #253 above, ID cannot get away with offering philosophy or religion as a substitute for a legitimate scientific hypothesis as to what "intelligent cause" might be and how it can be researched. Natural science is just not going to walk over that edge without seeing the next foothold, the falsifiable scientific hypothesis about the "intelligent cause". nightlight
NL: Are you unable to see that I have made a very careful distinction between an empirically grounded inference on empirically reliable observable sign, and onward debates that may happen in other circles on such plainly scientific findings? Are you unaware that a priori ideological materialists have set out to redefine science on their metaphysical assumptions, and are busily censoring and expelling those who do not go along with such tactics? Have you forgotten that they love to dress up in the holy lab coat and pretend that the science that they have biased into begging ontological questions by that imposition, somehow proves their worldview? If you are concerned about begging metaphysical questions, that is where you really need to be expending ammunition. It is clearly well grounded that FSCO/I is a good and reliable sign of design as cause. So, let us stand on that inductive evidence and let us then challenge the materialist ideologues that they are suppressing the evidence and where it points, then pretending that "there is no evidence." And, if that then opens up worldview level questions that materialists are uncomfortable with, tough luck. They have been using "science" in making worldview claims for years. Only, this time they have been caught out with a Victorian era positivist ideology in a C21 information age with 60 years of evidence of CODE, digital code in copious quantities, in the heart of life. Remember, this is what Crick wrote to his son March 19, 1953:
"Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)" . . .
What is the observed, empirically warranted cause of large amounts of digital code and associated execution machines and systems? Just fill in the blanks: ______________ Let them answer to that evidence. I assure you, no amount of parsing of terms in light of oh so delicate sensibilities -- remember, we are here dealing with people who do not hesitate to imply that we are Nazis by making utterly ungrounded invidious comparisons [I had to deal with that over the past few days . . . ], and who resort to outing tactics and smears on the web to damage personal reputations and economic prospects, as well as in some cases outright making threats against families -- that does not challenge that a priori materialism and does not publicly expose it to the point where it is undeniable, is going to make a dime's worth of difference. That is why they get so hot under the collar and falsely cry "quote mining" when the following key admission by Lewontin in the NYRB is cited:
It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door . . . [NYRB, Billions and Billions of demons, Jan 1997. For a refutation of the "quote mining" false accusation, cf. the above linked.]
KF kairosfocus
kairosfocus 249: Do you not see that you are erecting a strawman caricature and twisting the empirically evident fact into a pretended unjustified ontological discussion on the nature of mind and its ultimate roots? As explained in post #250, this is not a strawman argument, but an observation that the ID refusal to provide a scientifically legitimate hypothesis about the nature of the "intelligent cause", and offering instead the ontological mind-matter debate, is as unwise a strategy as if Spartans had picked to battle Persians on the widest plains they could find. Namely, natural science can't just say, OK, we agree, the "intelligent cause" is the best explanation for biological artifacts, the end of the story. Science has a built-in Promethean drive to go further, to find out what that cause is, how it works, how it can be corroborated more directly,... Natural science is just not going to walk along with ID to the edge of the abyss with only philosophical and religious tar pits ahead, without knowing where the next solid foothold is. That's what the scientifically legitimate hypothesis would do -- serve as the provisional falsifiable foothold. Whatever the fate of that initial hypothesis turns out to be, ID has entered the scientific tent, and its basic finding, the "intelligent cause", has become a part of legitimate science. That's the time to sit down, light up a pipe and philosophize all night long. But offering philosophy in place of a scientific hypothesis is not going to do it. nightlight
That’s precisely what I am saying and then taking the next step, asking: why would you drag debate into the philosophical swamps of mind-matter debate which has been going on for thousands of years and which cannot be won?
No, that is not what you have been saying. Your argument has been that in the scientific context alone, ID injects "mind" into its methodology. Whether or not science should interact with philosophy, which I think is a good idea, is a totally different discussion and unrelated to your claim. StephenB
nightlight, OK, thank you for that citation. We now have something to work with. What, then, can we make of Dembski's quote? “Thus mind or intelligence or what philosophers call “agent causation” now stands as the only cause known to be capable of creating an information-rich system, including the coding of DNA, functional proteins and the cell as a whole." Again, context is critical. Recall the three known causes under consideration, namely, law, chance, or agency. This triad appears in both the scientific and philosophical realms. In philosophical discussions about causation, discussions about the origin of life often break down into an either/or dichotomy, that is, either mind arose from matter or matter arose from mind. In this context, mind, as a philosophical construct, is synonymous with agency, as a scientific construct insofar as it is understood as the counterpoise to matter (a designing mind vs mindless matter). I don't agree with those who stump for a "Non Overlapping Magisteria" or the idea that various disciplines cannot interact in a meaningful way. Indeed, each discipline can illuminate the other. In this case, Dembski is not saying that we can extract the existence of a mind from functionally specified complex information, which is the false charge you are trying to defend. He is saying that, as a second-order question, the same intelligence that is inferred by the process of design detection is often characterized as a mind by philosophers. This, then, cannot qualify as an example of an ID proponent injecting mind into design detection methodology. Sorry, but you are barking up the wrong tree. StephenB
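The law/chance/agency triad StephenB mentions is usually presented as a decision procedure (Dembski's "explanatory filter"). As a rough sketch only, in Python, with placeholder thresholds and invented example numbers that are not taken from this thread, the logic runs something like this:

UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's often-quoted figure

def explanatory_filter(prob_under_law, prob_by_chance, is_specified):
    """Classify an event as 'law', 'chance', or 'design' (agency).

    prob_under_law : probability that known regularities produce the event
    prob_by_chance : probability on the relevant chance hypothesis
    is_specified   : does the event match an independently given pattern?
    """
    if prob_under_law > 0.5:                           # high probability: regularity
        return "law"
    if prob_by_chance >= UNIVERSAL_PROBABILITY_BOUND:  # not improbable enough to exclude chance
        return "chance"
    if is_specified:                                   # tiny probability plus specification
        return "design"
    return "chance"                                    # tiny probability, no specification

# Invented example: a long, functional, code-like sequence.
print(explanatory_filter(prob_under_law=0.0,
                         prob_by_chance=1e-195,
                         is_specified=True))           # -> design

Nothing in the filter itself says what the agent is, which is the narrow point StephenB is making about "mind" being a further, second-order question.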
@248 kairosfocus: As to the ontological nature of such intelligent causes, that is a different level of discussion. Indeed, the observation of the reliability of that sign of FSCO/I in terms of the organisation of the cosmos we observe is evidence that then points to a cause sufficient to explain such an observed cosmos* - and even a speculative multiverse - that has regions suitable for C-chemistry, cell based life, given evident fine tuning for that. That does raise issues that lead on to a discussion of cause in a mind beyond matter, a mind that is an ontologically necessary being. However, that onward discussion is most emphatically not a scientific discussion but a broad philosophical one. That's precisely what I am saying and then taking the next step, asking: why would you drag debate into the philosophical swamps of mind-matter debate which has been going on for thousands of years and which cannot be won? Why would one expand the battlefield, when ID already has a perfectly legitimate, narrow but scientifically solid argument (design detection in biological artifacts), which is winnable, provided the battle is limited to the scientific terrain? But once Meyer, Dembski and others extend the battlefield onto the wider philosophical tar pits, being already perceived as suspicious outsiders, ID is sure to lose the whole battle. The strategy is like Spartans picking to battle Persians on the widest open plains they could find in ancient Greece. The approach which will win, due to wiser selection of the battlefield terrain, is that of James Shapiro, which attributes the inferred intelligence to biochemical networks, which is a valid scientific hypothesis about the nature of the immediate intelligent cause. I suspect, when ultimately cornered by the sheer volume of clear facts from molecular biology, provided ID sticks to its current strategy, neo-Darwinians will semantically rejigger (again) the meanings of "random mutation" and "natural selection" and embrace it, re-branding it as a more detailed variant of what they have always been saying. But if ID were to change its strategy and get there first (by producing a scientifically legitimate hypothesis about the nature of the 'intelligent cause'), it could already occupy that position, leaving neo-Darwinians no option but to agree with ID. What is more likely to happen, it seems, is that after neo-Darwinians shift gears sooner and get there first, ID will concede on biological artifacts and evolution, and shift the remaining battle to problems of origin of life and fine tuning of physical laws, while still refusing to produce a scientifically legitimate hypothesis about the intelligent cause. Of course, they will eventually lose those battles as well, to something like the Planckian networks I discussed in this thread. nightlight
NL: This attempted "gotcha" is silly:
“Thus mind or intelligence or what philosophers call “agent causation” now stands as the only cause known to be capable of creating an information-rich system, including the coding of DNA, functional proteins and the cell as a whole. “ + lot more elaborations of the same kind pp. 137-138. Clearly, he equivocates between terms “intelligence” and “mind”, just as Meyer does. Other sites linked from UD also have hundreds of hits of the same kind.
Do you not see that Meyer is using "mind" in the obvious sense of an OBSERVED conscious choosing intelligence capable of causing FSCO/I, without any particular reference to the ontological nature of such a "mind" or "intelligence" or "agent"? Do you not see that you are erecting a strawman caricature and twisting the empirically evident fact into a pretended unjustified ontological discussion on the nature of mind and its ultimate roots? Do you not see that this is actually leading to a begging of the question on allowing the reliable evidence and warranted inference of observed causal patterns of intelligent designers acting to create FSCO/I, to point to the origin of FSCO/I in the world of life? And onwards, do you not see that you are similarly begging the question on the broader issue of asking what best explains the observed finely tuned cosmos, by trying to shut out reliable induction on cause of functionally specific complex organisation? Please, think again. KF kairosfocus
NL: Sorry, all that is used in the design inference proper is that we empirically observe intelligent causes and use that to ground that intelligence is the only known and observed adequate cause of FSCO/I. Where mind and agency appear in that context, they should be taken as synonymous with intelligent action or cause. That is, the evidence is that FSCO/I is a good sign of intelligent cause by choice contingency. And such is patently an empirical reality as familiar as the decisions we make in composing and posting comments in this thread, which themselves reflect FSCO/I as a reliable sign of the known cause of that observable pattern. As to the ontological nature of such intelligent causes, that is a different level of discussion. Indeed, the observation of the reliability of that sign of FSCO/I in terms of the organisation of the cosmos we observe is evidence that then points to a cause sufficient to explain such an observed cosmos* -- and even a speculative multiverse -- that has regions suitable for C-chemistry, cell based life, given evident fine tuning for that. That does raise issues that lead on to a discussion of cause in a mind beyond matter, a mind that is an ontologically necessary being. However, that onward discussion is most emphatically not a scientific discussion but a broad philosophical one. But, the inference that matter, energy, time, space, blind necessity and chance are capable of sufficiently causing the world of life and the wider world and that we may not infer beyond that circle is equally in that wider context. That many of those who argue the latter wear lab coats does not change that any one whit. KF *PS: Onlookers, see why I so often have to use even clumsy expressions to make sure they don't lead into all sorts of side tracks? (How many times would I have had to explain at length that I mean "observed cosmos" if I did not use this term repeatedly, hundreds and hundreds of times? How many times have objectors tried to twist something so in principle simple as FSCO/I -- functionally specific complex information and/or associated organisation -- into pretzels because they do not want to face what it is and is saying? Do you remember the weeks of silly talking points over the term, "arbitrary," which UB used in a perfectly acceptable sense? And so forth? At any moment, some pretty hostile objectors are waiting in the wings, hoping to pounce and latch on to any perceived gap in the case made by design supporters; and to run off elsewhere announcing triumphantly that they have a gotcha. Why, they do so all the time, even when they repeatedly have to make up strawmen laced with ad hominems, invidious associations, twisted about false accusations and the like. And BTW, when corrected, they do not retract or apologise; they go on to the next attempt, and if one does not look carefully, after a time they will recirculate the old and so laboriously rebutted talking points. Of course, that is why it is helpful to look through the weak argument correctives, as they summarise the main cycle of long since cogently answered accusations and objections. Except for the latest one: you're a Nazi. Which, OM et al, is an outright lie. Indeed -- surprise (not) -- it is an exercise in big lie propaganda tactics; which were championed by guess who and who . . .) kairosfocus
@244 StephenB The Discovery.org site is mostly quotes from Meyer (similar to what was already discussed, so I won't recycle them). For others, such as Dembski, here is a quote from his book "Mere Creation:...": "Thus mind or intelligence or what philosophers call "agent causation" now stands as the only cause known to be capable of creating an information-rich system, including the coding of DNA, functional proteins and the cell as a whole." plus a lot more elaboration of the same kind on pp. 137-138. Clearly, he equivocates between the terms "intelligence" and "mind", just as Meyer does. Other sites linked from UD also have hundreds of hits of the same kind. Hence, this particular leap from the scientifically legitimate attribute of the cause, intelligence, to additional attributes (of the cause) which have no counterpart in natural science, such as "mind" or "conscious", permeates the ID writings. nightlight
I am not the one deciding officially whether ID is a legitimate natural science or not. Those who do point precisely at these kinds of leaps (common in ID literature & talks) as indicators of the ulterior motives behind ID, which gives them excuse to reject the whole proposal and make propaganda points out of it.
I understand the strategy very well, and only remind you that the argument for ID can be made on purely material grounds without ambiguity. The issue is not what excuses will be used to discount ID; the issue is that ID will be discounted regardless of the words used, solely because it's consistent with theism. - - - - - - - - by the way.... our AI friends have spent thousands of words on this blog arguing that "intelligence" cannot be used (and indeed is not used) as a causal explanation anywhere in science. You see? It doesn't matter what the words are. All one can do is not violate material findings or logical constraint. To babysit the ideologue is a losing proposition. Upright BiPed
kairosfocus #238: "Summing up: FSCO/I is an empirically reliable sign of design as cause." I have no quarrel with that part. The problem is the leap that comes after that, which assigns additional properties to that cause, such as mind or consciousness or the mental. My point is that these extra attributes don't have a counterpart in natural science (check the so-called "hard problem of consciousness"), hence the combinations such as "intelligent mind" or "conscious intelligence" or "intelligent mental agency" ... etc, don't have any counterparts in natural science either. Hence, while the attribute "intelligent" of that cause was inferred properly and is perfectly solid science (no less so than archeology), the remaining attributes (mind, consciousness, mental) are wishful leaps outside natural science. It is this needless weakening of the otherwise valid inference of "intelligence" that I have a problem with. nightlight
nightlight
Search of discovery.org finds 839 articles combining terms (conscious OR mind) AND agency (or 937 if you include ‘OR mental’ in the parens). This thread itself also illustrates the prevalence of the same position.
Please do not ask me to search out evidence for your claim. Just provide the appropriate quotes in the appropriate context to show that ID proponents inject "mind" into their design detection methodology. StephenB
F/N: AmHD, summarising the term mind and its link to intelligence, in response to yet another mountain out of a molehill objection:

mind (mīnd) n. 1. The human consciousness that originates in the brain and is manifested especially in thought, perception, emotion, will, memory, and imagination. 2. The collective conscious and unconscious processes in a sentient organism that direct and influence mental and physical behavior. 3. The principle of intelligence; the spirit of consciousness regarded as an aspect of reality. 4. The faculty of thinking, reasoning, and applying knowledge: Follow your mind, not your heart. 5. A person of great mental ability: the great minds of the century. 6. a. Individual consciousness, memory, or recollection: I'll bear the problem in mind. b. A person or group that embodies certain mental qualities: the medical mind; the public mind. c. The thought processes characteristic of a person or group; psychological makeup: the criminal mind. 7. Opinion or sentiment: He changed his mind when he heard all the facts. 8. Desire or inclination: She had a mind to spend her vacation in the desert. 9. Focus of thought; attention: I can't keep my mind on work. 10. A healthy mental state; sanity: losing one's mind.

v. mind·ed, mind·ing, minds v.tr. 1. To bring (an object or idea) to mind; remember. 2. a. To become aware of; notice. b. Upper Southern U.S. To have in mind as a goal or purpose; intend. 3. To heed in order to obey: The children minded their babysitter. 4. To attend to: Mind closely what I tell you. 5. To be careful about: Mind the icy sidewalk! 6. a. To care about; be concerned about. b. To object to; dislike: doesn't mind doing the chores. 7. To take care or charge of; look after. v.intr. 1. To take notice; give heed. 2. To behave obediently. 3. To be concerned or troubled; care: "Not minding about bad food has become a national obsession" (Times Literary Supplement). 4. To be cautious or careful. [Middle English minde, from Old English gemynd; see men-¹ in Indo-European roots.] minder n.

Synonyms: mind, intellect, intelligence, brain, wit¹, reason. These nouns denote the capacity of thinking, reasoning, and acquiring and applying knowledge. Mind refers broadly to the capacities for thought, perception, memory, and decision: "No passion so effectually robs the mind of all its powers of acting and reasoning as fear" (Edmund Burke). Intellect stresses knowing, thinking, and understanding: "Opinion is ultimately determined by the feelings, and not by the intellect" (Herbert Spencer). Intelligence implies solving problems, learning from experience, and reasoning abstractly: "The world of the future will be an ever more demanding struggle against the limitations of our intelligence" (Norbert Wiener). Brain suggests strength of intellect: We racked our brains to find a solution. Wit stresses quickness of intelligence or facility of comprehension: "There is no such whetstone, to sharpen a good wit and encourage a will to learning, as is praise" (Roger Ascham). Reason, the capacity for logical, rational, and analytic thought, embraces comprehending, evaluating, and drawing conclusions: "Since I have had the full use of my reason, nobody has ever heard me laugh" (Earl of Chesterfield). See also Synonyms at tend².

The American Heritage® Dictionary of the English Language, Fourth Edition, copyright ©2000 by Houghton Mifflin Company. Updated in 2009. Published by Houghton Mifflin Company. All rights reserved. kairosfocus
nightlight
Those who do point precisely at these kinds of leaps (common in ID literature & talks) as indicators of the ulterior motives behind ID, which gives them excuse to reject the whole proposal and make propaganda points out of it.
Other than you, who else is pointing to these alleged "leaps" from intelligence to "mind" in the context of ID methodology? StephenB
NL: Further to all of the above, you have now been repeatedly corrected that intelligent designers are facts of observation. So also, without reference to any ontological theory of mind, we can see that FSCO/I is an inductively reliable sign of design as cause, where we can directly confirm, on billions of cases without any significant exception, despite a lot of claims and attempts to the contrary. Such grounds the inference that FSCO/I is a reliable sign of intelligent design as most credible causal explanation in cases where we do not have the opportunity to directly observe the cause. Kindly explain where in that there is any imposition of "mind" or more precisely any injection of a theory of "mind" beyond that what many people would call "minded" creatures would be typical examples of intelligent ones -- taking us and beavers as cases of "minded" creatures, however such CONSCIOUS INTELLIGENCE CAPABLE OF CHOOSING AND SHAPING CONTINGENCIES TO REFLECT SOME SORT OF INTENT comes to be. I take it, your own experience of posting in the thread should suffice to show the point, by self reference. At this point your objections are therefore coming across as a bit contrived. As for the ones who have made any number of irresponsible and provably false accusations and assertions concerning design theory and what it is about, I simply say, we should go to the merits instead of relying on demonstrably biased, often plainly wrong and sometimes outright dishonest advocates. KF kairosfocus
@236 StephenB Search of discovery.org finds 839 articles combining terms (conscious OR mind) AND agency (or 937 if you include 'OR mental' in the parens). This thread itself also illustrates the prevalence of the same position. nightlight
nightlight,
The tragedy of it is that with plenty of scientifically perfectly valid nouns to use and that fit the ID finding, such as “process”, “computation”… to apply his correctly inferred attribute “intelligent” to, why would one pick something that is a scientifically meaningless noun and squander it all.
You're using semantics to try and deny the obvious. That an intelligent Designer made DNA (and the universe). Other anti IDists on this blog have made the point that if DNA implies an intelligent "process" or "computation", then evolution fits the bill. But it's not going to fly. We all know that DNA has a ton of CSI/FSCI/dFSC/IC, and we also know it's a code (just like a human made code, except so much more complicated that it's really nothing like it). And since humans have minds, then the Designer of DNA had a mind too. Some might say that minds are natural things, made up of processes and computations. And that we can study minds scientifically to understand how those processes and computations work. And that just because we may understand how a mind works, doesn't mean it ceases being a mind. But those people are all atheist-materialist-darwinists, which means they're just a bunch of quarks and gluons bouncing around randomly. So we don't pay them any attention. lastyearon
NL: Implication: P IMPLIES Q means that P being true is sufficient for Q to be true and Q being true is necessary for P to be true. This is an objective state or claim. That is, on whatever reasonable grounds, one cannot have P true and Q false. It has no underlying claim that P is true. As modelling theory shows, false antecedents routinely imply true consequences. But a true antecedent will only and can only properly imply true consequents, Q. Inference: the ACT of drawing out a conclusion, on some species of warrant or another. The pivotal issue is grounds and the degree of warrant provided. The grounds for the design inference, I have already linked on. Where, this is an exercise in abductive, inductive reasoning. That is, the logic runs P => Q, but the empirical evidence is for Q being true. Strictly, to reason "Q, so P" is to affirm the consequent, if we use deductive logic. That is why scientific reasoning is inescapably provisional, and critically pivots on having a broad and exceptionless base of observations, that in every case q1, q2, . . . qn . . . we see p1, p2 . . . pn . . . So, as Newton observed, we provisionally infer there is a general pattern. As this becomes strong enough, we infer that the pattern is reliable and is summarised as a law of nature, subject to some future possible counter example that shows limitations. Much as happened with Newtonian dynamics. In the case of design theory, the base of observations is billions of cases deep, all around us, and indeed posts in this thread add to the base. Summing up: FSCO/I is an empirically reliable sign of design as cause. In addition, we have a needle in the haystack analysis as to why that is plausible. KF kairosfocus
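To make the point about material implication concrete, here is a minimal Python sketch (an illustration added for the reader, not part of the original comment) that tabulates P => Q as (not P) or Q and shows that a false antecedent yields a true conditional:

```python
# Truth table for material implication: P => Q is defined as (not P) or Q.
# The only excluded row is P true with Q false; when P is false the
# conditional is true regardless of Q, which is the sense in which
# "false antecedents routinely imply true consequences".
def implies(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:<5}  Q={q!s:<5}  P=>Q={implies(p, q)}")
```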
@Upright BiPed #234: "You need to spit the hook out." I am not the one deciding officially whether ID is a legitimate natural science or not. Those who do point precisely at these kinds of leaps (common in ID literature & talks) as indicators of the ulterior motives behind ID, which gives them excuse to reject the whole proposal and make propaganda points out of it. Whatever his motives may be, it is as unwise to wear them on his sleeve as it would be for a chess player to point his finger at what his last move is meant for. If you care to win, you just don't do that. nightlight
nightlight,
He is using the same inductive reasoning of equal strength in both cases. The only difference is that the first one is a personal form of induction “(royal) we infer B from A” while second one is impersonal form of induction “A implies B”.
I disagree for the reasons stated earlier. In any event, you have provided your interpretation of Meyer's words and I have provided mine. Since you ignore all counter arguments and correctives, there is no reason to go over that territory yet a third and fourth time. Meanwhile, you have made the general claim (and the false claim) that ID proponents do, by definition, inject "mind" into their methodology for design detection. Putting aside your dubious interpretation of Meyer's words, can you point to any other writers, either among the ID luminaries or writers on this blog, who exemplify this trait that is supposed to be so prominent among ID thinkers? StephenB
F/N: NL, kindly cf. here. KF kairosfocus
Nightlight, ID isn't "unscientific" because of the words that Stephen Meyer wrote in a book intended for the popular audience; it's "unscientific" for the singular reason that a rational interpretation of the empirical evidence is consistent with theism. You need to spit the hook out. Upright BiPed
@229 Upright BiPed: "So your summary of ID arguments is flawed to the extent that ID is already firmly on the table without reference to "mind-stuff" or "consciousness", and your assertion that ID cannot support its claims materially is just simply false. Not only are they materially supported, they remain unrefuted by those same means." As explained in post #232 above, I agree that the link identified by ID design detection methods is perfectly solid and as good a science as any (such as archeology). It is what is attached to that link ("intelligent mind" or "mental agency") that is vacuous as a hypothesis within the present natural science. The present natural science can't tell you the difference between "intelligent mind" and "intelligent tooth fairy" since neither "mind" nor "tooth fairy" has a counterpart in present natural science. His inductive inference of "intelligent" as the attribute of the generator of biological artifacts is dismissible because it is applied to a scientifically vacuous noun, not because the inductive inference of the noun's attribute "intelligent" is invalid. The tragedy of it is that with plenty of scientifically perfectly valid nouns to use and that fit the ID finding, such as "process", "computation"... to apply his correctly inferred attribute "intelligent" to, why would one pick something that is a scientifically meaningless noun and squander it all. In the "fly & soup" analogy at the end of post #232, you were explaining to me how delicious the soup is and how good the chef who prepared it is, while I am wondering what possessed Meyer (and others within ID making the same leap) to drop that big fat fly into that delicious soup. nightlight
StephenB #228: "This is quite a stretch. As I pointed out, there are, indeed, parallels, so the structure makes sense, but those parallels are not perfectly similar, which is why he uses two different words (infer vs. imply)." I disagree. He is using the same inductive reasoning of equal strength in both cases. The only difference is that the first one is a personal form of induction "(royal) we infer B from A" while second one is impersonal form of induction "A implies B". The impersonal form is used to add more weight to the inductive connection (making it sound like the logical implication used in math and physics, such as "eq. A implies eq. B"), since the personal form can be seen as subjective. A normal gradation of inductive strengths from weakest to strongest would be: 1) "a fool infers B from A", 2) "I infer B from A", 3) "expert X infers B from A", 4) "(royal) we infer B from A", 5) "A implies B" or "B is inferred from A" (or "inferable"). Since the passive-voice alternatives in #5 sound discordant, he used the first variant of impersonal induction. Hence it is obvious that via the impersonal form of induction used in his second sentence he deliberately sought to make the induction "DNA => action of intelligent mind (mental agency)" appear stronger than the first induction "human artifacts => human actions". If you or others here know him, you are welcome to ask him what he meant there. Note, though, that neither interpretation helps him. Namely, if he wanted a weaker induction strength he could have said "DNA suggests intelligent mind" or, even weaker, "DNA hints at intelligent mind". Both of these forms are less unscientific than the impersonal forms only because they say less, not because the scientifically vacuous clause "intelligent mind" (or "mental agency") has become less vacuous or more scientific. There is no more "mind" (or "consciousness") in natural science than there is a tooth fairy. For example, if you substitute 'mind' with 'tooth fairy' (since natural science can say nothing about either), it becomes obvious that using "DNA suggests intelligent tooth fairy" or "DNA hints at intelligent tooth fairy" doesn't help him at all -- it's less scientifically flawed merely because it says even less, not because it is more scientific. Hence, neither interpretation of what he meant, stronger or weaker induction, helps him bring ID closer to becoming a legitimate natural science, since what is hanging on that inductive link is equivalent to 'intelligent tooth fairy' as far as natural science can tell. The unfortunate part is that the link itself is perfectly good, but what is attached to that link is what allows Darwinian opponents to dismiss it all. If he and others were to say "implies intelligent process" or "implies intelligent algorithms", which are perfectly valid concepts in natural science (e.g. in the sense of AI or a computational process), the opponents wouldn't have that kind of cheap excuse. "Intelligent mind" or "mental agency" or "consciousness"... are simply not valid hypotheses (as causes or as anything at all) in natural science. That's why I cringe any time I hear it -- it is like watching someone gratuitously dropping a big fat fly into a delicious bowl of soup prepared by a master chef. Why? nightlight
NL: I interleave comments as indicated, re your 211 - 12: ____________ >> I think two of us have for whatever reason hit a semantic wall and there was no progress over several exchanges. Hence I chose to leave the last word with you, if only to avoid boring the rest of the guys here with what was increasingly turning into a semantic nitpick tiff.>> 1 --> "Semantics" is about meaning. Meaning is very important to understand. In this case, it is pivotal and ought not to be dismissed as in effect a mere difference of views leading to a "tiff." >>For example, on “redness” I was talking about qualia of red (“hard problem of consciousness”), while you were talking about physiology of color perception.>> 2 --> Nope. I spoke to the fact that there is a reasonable meaning to the claim that an object is objectively red. That is, there is no reason to impose an ugly gulch between the internal world of subjective phenomena, and the external one of things in themselves. Knowledge per warranted, credibly true belief, provides such a bridge in analysis, and in everyday experience as well as science. The redness of the cup next to me is as objective as the reddishness of the berries I just had from it in search of flavinoids etc. 3 --> To remind, yes, there is a physiological response that is traceable and that can be studied using protocols that exploit the fact that volunteer experimenters will often try to be precise and accurate, so their reporting of internal states and perceptions can be used in reasonable scientific investigations. But that does not mean that the results of such are simply subjective, the investigations provide reasonable warrant regarding evident and objective states of affairs in the external world, redness being one of them. 4 --> In particular, redness of objects is associated strongly with properties of such that reflect, preferentially transmit or emit light in the band, roughly 600+ - 700+ nm. So, there is reason to accept that redness is an objective reality, never mind the fuzzy borders that seem inevitable in ever so many things. 5 --> The qualia of being appeared to redly is important, but does not change that fact. You may wish to note my 101 level discussion of a model framework, here, which discusses brains and minds in a cybernetic context, using the suggestion of a two-tier controller: one in the loop, one in a supervisory role for the loop. >> Hence there was nothing to concede or stand corrected and change in what I was saying.>> 6 --> You have side-slipped the point I have again outlined, that redness is not reducible to a perception. >>Similarly, on self-evident schema>> 7 --> Something is self evident when it is not only seen to be true, but is seen to be necessarily true on pain of patent and prompt absurdity if one attempts to deny. 8 --> the matter you are again raising is NOT of this class. >> of any natural science S: (M) – Model space (formalism & algorithms) (E) – Empirical procedures & facts of the “real” world (O) – Operational rules mapping between (M) and (E) The (M) component is a generator of meaningful statements in S. The “statements” can be numbers, words, symbols, pictures,… The generator must follow the rules of logic (e.g. it shouldn’t produce mutually contradictory statements).>> 9 --> I pointed out above, e.g. at 112, why the scheme fails, fails as in effect a definition, which you erroneously perceive to be "self-evident." 
10 --> Not all of science is reducible to mathematical and/or algorithmic models [and an algorithm is a finite, step by step procedure that effects an outcome; you have used the term idiosyncratically]; there is a process of doing science that in many stages and aspects will not involve the sort of model you have highlighted. >>Output obtained within (M) by one practitioner of S should be reproducible within (M) by any other practitioner of S, i.e. the procedures of (M) are algorithmic (one could conceive a computer checking the output or generating it i.e. the operation in (M) should be in principle programmable and executable on a computer).>> 11 --> Trivially not so, e.g. in astronomy, volcanology and other observational sciences, we often deal with unique events that cannot be reproduced, so that we rely on the accuracy and reliability of record. Which brings the methods of history into science. Experiments, in general terms, are often reproducible, but observations and circumstances of real-world, going-concern events are not. 12 --> To underscore, we are not in a position to reproduce experimentally the formation of the cosmos, the galaxies, the solar system, the planets within it, the actual origin of life, the actual origin of body plans, the actual origin of humans, the actual origin of geological and geographic features, etc. All of these can be and are studied scientifically, and so your attempt at definition trivially fails. 13 --> It so happens that design theory is essentially about just such circumstances, and therefore uses the appropriate investigatory logic, abductive reasoning. 14 --> That is [and kindly cf. CR at 224],
a: we observe and desire to investigate causally traces of the remote, unobservable past of origins. b: We cannot directly observe that past or its events, which we cannot replicate. c: However, we can in the present investigate forces and factors that give rise to closely similar phenomena as we see in the traces from the past. d: For material instance, we may consider functionally specific, complex organisation and associated information, FSCO/I. e: We can then see patterns of cause that are inductively strong, and empirically reliable, e.g. that FSCO/I is routinely and only observed to be caused by intelligent design. f: We may analyse, e.g. on infinite monkeys and/or needle in haystack searches, and see why it is that blind forces of chance and necessity acting on the gamut of the solar system and/or the observed cosmos across reasonable timelines, cannot sufficiently sample the space of possibilities, to be plausible as alternative explanations (a rough numeric sketch follows at the end of this comment). g: We are then epistemically entitled to draw the scientific conclusion that FSCO/I is an empirically reliable sign of design as credible cause, of course subject to correction or clarification in light of further observations, as is standard for scientific work. h: So, even though it is controversial in a day where some have attempted to redefine science as applied evolutionary materialism, it is a well-warranted, empirically grounded conclusion that where we see FSCO/I the most credible, empirically reliable explanation of its cause is design.
. . . Of course, such is in principle subject to empirical refutation by counter-example, but it is plain that such is not in reasonable prospect, for reasons connected to analyses quite similar to those that give us high confidence in our estimation of the reliability of key conclusions of thermodynamics. 15 --> You are also confusing experimental investigations with computer simulations. Computers are effecting a model world indeed, but one that is not equal to reality. >>Component (E) is analogously, a procedural system and technology for extracting the data relevant to S from the “real” world. Component (O) are the procedures (algorithms) which map between statements produced by (M) and empirical facts produced by (E), allowing for falsification of statements by (M). In most cases the mappings by (O) are implicit, accomplished by simply using the same name for the corresponding elements of (M) and (E). There is nothing of substance that can be argued about this basic schema which consists mostly of definitions and labels (perhaps priority of various requirements may be reordered), since it is self-evident and the only issues one may have are a matter of taste.>> 16 --> Some of the more obvious gaps have been pointed out above, and have again been outlined just now. >> I happen to like it since is very useful analytical tool for troubleshooting and disentangling otherwise perplexing semantic tangles such as those often encountered in interpretations of Quantum Theory (that’s the literature where I picked this scheme from).>> 17 --> In very limited and highly mathematical contexts, it will have some utility. But this reminds me of an error in my reasoning on basic mechanics that I picked up some years ago. I had unconsciously substituted location for displacement, and found that it gave the same effective results for many things. But location is not at all the same, and for some things it will not be equivalent at all. >>You objected to “algorithmic” attribute of (M), a term which you define more narrowly than I do (my semantics classify as algorithmic any procedure which can be programmed into a computer or an android robot or any other computer controlled devices if it requires actions in the real world).>> 18 --> This error has already been corrected. I have given the standard meaning of algorithm, and have pointed out areas of investigation in which algorithms, and computers or robots etc cannot even in principle be programmed to do an investigation. Creative abductive inferences dependent on judgement and intuition or highly instructive analogies is a major example which has often played a pivotal role in sciences. Similarly, investigations such as the colour one, will rely on interactions between subjects, trust, trustworthiness, judgement etc. >>Hence, there wasn’t anything to correct or retract about any of that either. If you wish to have the last word, that’s fine with me, I will leave it at what was said above. >> 19 --> I invite you to reassess your thinking in light of the points of concern I have again pointed out. _____________ I trust the above will be helpful. KF kairosfocus
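As a rough numeric illustration of the needle-in-the-haystack point in (f) above, here is a minimal Python sketch. The specific figures (a 500-bit configuration space, ~10^57 atoms in the solar system, ~10^17 seconds, and a very generous 10^14 search events per atom per second) are assumptions adopted for illustration only, not values taken from this thread:

```python
from math import log10

config_space = 2 ** 500   # distinct 500-bit configurations, roughly 3.3 * 10^150

# Assumed (deliberately generous) upper bound on blind-search resources
atoms   = 1e57            # rough atom count for the solar system (assumed)
rate    = 1e14            # assumed search events per atom per second
seconds = 1e17            # order of magnitude of the age of the cosmos in seconds

samples  = atoms * rate * seconds    # ~10^88 total search events
fraction = samples / config_space    # share of the space that could be sampled

print(f"log10(config space) ~ {log10(config_space):.1f}")   # ~150.5
print(f"log10(samples)      ~ {log10(samples):.1f}")        # ~88.0
print(f"log10(fraction)     ~ {log10(fraction):.1f}")       # ~ -62.5
```

Whatever one makes of the inference itself, the sketch only shows how small a slice of a 500-bit configuration space such resources could sample under these assumed figures.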
BA77,
"Sorry CR, I didn’t know you were setting up for Behe’s (and BI’s) work."
Lol, no worries mate, your comments were both timely and appropriate, and I couldn't have done better. :D Chance Ratcliff
Just a quick comment on the reply to Chance at 182,
nightlight: So what you are saying is that his (E) space has "E-mind" element and his (M) space has some element M-mind...
There are two “E” empiricals: the physical conditions imposed by the translation of recorded information inside the genome, and those imposed by recorded information everywhere but the genome. And the “M” that ties them together is a coherent physical model of semiosis. And if the infinite examples of semiosis outside the genome are each a universal result of an organized agent, then the semiosis observed inside the genome implies a similar class of origin, particularly since the physical conditions are identical in both instances. The implication stems directly from the principle of uniformity in matter, as well as the parsimony of a single class of origin for a manifestly single material phenomenon. But the observation doesn't stop there. The information recorded within the genome is recorded by the use of iterative code structures, each being independent of the lowest potential energy state of the medium. So along with the already demonstrated implication of an organized agent, there is an even more narrow subset, where the only other examples of iterative ("energy degenerate") code structures are those established by choice contingency. In other words, there are two independent and universal indicators of agency involvement, demonstrated by physics. So your summary of ID arguments is flawed to the extent that ID is already firmly on the table without reference to “mind-stuff” or “consciousness”, and your assertion that ID cannot support its claims materially is just simply false. Not only are they materially supported, they remain unrefuted by those same means. Upright BiPed
nightlight
On page 93, he first gives example in which we “infer” “intelligent agents” when seeing human artifacts, then follows up saying in the next sentence: “Similarly, the specifically arranged nucleotide sequences,[...] imply the past action of an intelligent mind, even if such mental agency cannot be directly observed.”
Yes, he is making the perfect distinction between infer (science) and imply (philosophy) that I made earlier.
Both sentences are deliberately structured to exactly parallel each other in form and phrasing in order to amplify the equivalence of the two conclusions.
This is quite a stretch. As I pointed out, there are, indeed, parallels, so the structure makes sense, but those parallels are not perfectly similar, which is why he uses two different words (infer vs. imply). The two ideas cannot be totally separated nor can they be perfectly unified; they simply intersect. That's the point and Meyer makes it very well. Why you would choose to misread him is a mystery. He is certainly not a sloppy writer or a sloppy thinker. On the contrary, you seem to be going out of your way to invent problems that aren't there. ID does not, for example, conflate evolutionary explanations or intrude unscientific concepts in its methodology. One wonders how you manage to come up with these novel interpretations. On the latter point, you have made a generalized accusation to the effect that ID posits "mind" as a scientific explanation for design, indicating several times that this is a widespread problem among ID thinkers. Just to provide a few examples, I cite the following:
For example, scientific postulates can make no use of concepts such as ‘mind’ or ‘consciousness’ or ‘god’ or ‘feeling’ or ‘redness’ since no one knows how to formalize any of these...,
and again
The ID proponents unfortunately don’t seem to realize this “little” requirement. Hence, they need to get rid of “mind” and “consciousness” talk
and again
So my earlier point about ID is that if it is to become part of legitimate natural science it would be better served by algorithmically effective elements, such as computer-like intelligent agency like Planckian networks, rather than by algorithmically undefined concepts such as 'mind' or 'deity'.
and again
Namely, the point of that scheme was to explain how the ID proponents often violate the key necessary conditions for a natural science. Violating the necessary conditions, such as algorithmic effectiveness of postulates, suffices to disqualify a proposal from clams on becoming a science (see post #117 on why that is so). Since they have tripped already on the necessary conditions, there is no need to analyze further as to whether their proposal is sufficient.
and again
As in the case of analogous injection of ‘consciousness’ into (M), these are parasitic elements belonging to some other agenda foreign to the discipline,
and again
The ID proponents unfortunately don’t seem to realize this “little” requirement. Hence, they need to get rid of “mind” and “consciousness” talk,,,
This is a troubling pattern. StephenB
Sorry CR, I didn't know you were setting up for Behe's (and BI's) work. I should have smelled it coming. I like the Chesterton 'just right' reflection you brought up,,, That is exactly how it should be. Science done right should ruffle feathers and continually lead towards that 'narrow path' that many may find uncomfortable compared to where they would prefer to go. bornagain77
nightlight @219
"Dembski is surely playing it safe there."
I believe the reason for this is essentially to rule out chance in principle, by setting a threshold which exceeds all Planck-time events for all subatomic particles for the entire age of the universe. Chance Ratcliff
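For what it's worth, the arithmetic usually given for that threshold can be written out in a few lines. A minimal Python sketch follows; the three figures (10^80 particles, ~10^45 state transitions per second as an inverse Planck time, 10^25 seconds) are the ones commonly attributed to Dembski's derivation and are assumed here for illustration:

```python
from math import log10

particles = 1e80   # estimated elementary particles in the observable universe (assumed)
rate      = 1e45   # assumed maximum state transitions per particle per second
seconds   = 1e25   # deliberately generous allowance of time (assumed)

total_events = particles * rate * seconds   # upper bound on discrete physical events
print(f"upper bound on events:        10^{log10(total_events):.0f}")    # 10^150
print(f"universal probability bound:  10^{-log10(total_events):.0f}")   # 10^-150
```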
BA77 @220, thanks for bringing up Behe, although you stole some of my thunder. ;) I was intending to mention Behe in reference to The Edge of Evolution, and also The First Rule of Adaptive Evolution. The work he's done, as presented in EoE, as well as the work of the Biologic Institute, has sought to explore the causal limits of neo-Darwinism. This is necessary work, although I notice that ID is criticized by some for focusing too much on the insufficiency of Darwinian-compatible processes, and by nightlight for being too generous (see #191). It reminds me of something that G.K. Chesterton said about Christianity:
Christianity was accused, at one and the same time, of being too optimistic about the universe and of being too pessimistic about the world. The coincidence made me suddenly stand still. Chesterton, G. K. (Gilbert Keith) (1994-05-01). Orthodoxy (Kindle Locations 1024-1025). Public Domain Books. Kindle Edition.
He recounted his experience of noticing that in many ways, Christianity was accused of being contradictory things at the same time, giving him pause to realize that it was possible to consider Christianity as being, to paraphrase, just right. Perhaps ID, by focusing both too much and too little on the limits of neo-Darwinism, is doing it just right. :) Chance Ratcliff
Regarding Stephen Meyer's design inference formulation, Inference to the Best Explanation, or abductive reasoning, it involves accounting for a sufficient cause currently in operation, comparing it to competing causes, and choosing the one with the best capacity to explain the effect in question. Doing so does not open up an explanatory flaw just because the cause, in this case intelligence, cannot be explained by current scientific procedures or areas of study. It would be quite convenient if every cause we identified for a given effect was a simplification all the way up the causal chain, and that the mechanics of each cause could be discovered, but this is not necessarily the case. The burrow of a trapdoor spider is relatively simple, compared to the spider itself, which is fundamentally complex. But if we can discover by observation that the cause of the trapdoor burrow is indeed the trapdoor spider, have we introduced an unneeded complexity or gap in the causal chain for the burrow itself, just because we cannot explain the burrow via a simple cause, and the spider by an even simpler one? Certainly not. The best explanation for the trapdoor burrow is a trapdoor spider, even if we cannot explain the spider itself. To explain the burrow by identifying its designer is an epistemic success, even if it complicates our understanding of the causal chain. But what if we never observed a spider, but sought an explanation for a plethora of trapdoor burrows? I daresay it would be acceptable to infer an intelligent cause, if some sort of intelligent activity could produce the effect in question, and no natural cause could be identified. We wouldn't know the designer, and we wouldn't know the specifics of how the design was brought about (a silk hinge on a clump of soil covering a hole) but we could infer design, at least provisionally, barring the discovery of a mechanistic explanation. Doing so has the epistemic benefit of adding to knowledge, causes capable of producing features in the effect, and does not subtract from the set of other known causes, namely physical law. Meyer says of his own argument,
The central argument of my book is that intelligent design—the activity of a conscious and rational deliberative agent—best explains the origin of the information necessary to produce the first living cell. I argue this because of two things that we know from our uniform and repeated experience, which following Charles Darwin I take to be the basis of all scientific reasoning about the past. First, intelligent agents have demonstrated the capacity to produce large amounts of functionally specified information (especially in a digital form). Second, no undirected chemical process has demonstrated this power. Hence, intelligent design provides the best—most causally adequate—explanation for the origin of the information necessary to produce the first life from simpler non-living chemicals. In other words, intelligent design is the only explanation that cites a cause known to have the capacity to produce the key effect in question.
If we can reasonably infer an intelligent cause for some effect, by identifying a sufficient causal element, we have added to our understanding, even if we are unable at present to explain the nature of the cause. This takes me back to presuming that known physical laws are not solely capable of causing certain observed events, such as the construction of jet airplanes. If we take into account all events that occur on the planet, and in principle rule out intelligent agency acting purposefully toward a goal, we have left a gaping hole in our ability to account for observed effects, such as the aforementioned jet airplane. In actuality we have a root node of explanation: Was this object the product of design or physical law, with two edges proceeding from it: product of design; and product of physical law. Upon that node is a partition which is both mutually exclusive and jointly exhaustive. Each branch leads us to ask different questions about the nature of the event in question. If the object is identified as the product of design, we might then ask who, how, when, etc. If the object is explicable by physical laws, we might ask which process, over what period, etc. Since intentional acts by intelligent agents are a known causal force, they are needed to explain all observed effects, even if we can't formulate an explanation for the agent himself. Otherwise there is a gigantic hole in our epistemology. In other words, we are hopeless to explain jet airplanes by reference to physical law. /soapbox Chance Ratcliff
@BA77 #221 It's still there, second paragraph, "intelligent cause" as I typed it this morning and as his article [2] states. Indeed, the ID and evolution articles on the wiki take some effort to read through, with all that emotional intensity drowning whatever information the articles may have. nightlight
Eric Anderson #218: "So either 1 or 3 would be appropriate in this context. Seems like he is using the word in a reasonable way." He is using it in sense 3, as logical implication, especially in the context of the previous sentence, meant to parallel this one, where he uses the term infer. In any case, it doesn't imply "intelligent mind" or "mental agency" in either meaning. He is gratuitously embellishing the intelligent agency with mental faculties, which he can't know. In a strict sense, he can't even know that for anything or anyone beyond himself. Hence in that sentence, the phrasing is doubly sloppy, since he doesn't even know what that unseen entity is. Those kinds of needless sloppy leaps to obviously unwarranted conclusions diminish the credibility of the rest of his argument, since a reader will wonder what else he is embellishing. nightlight
NL: "I just corrected his (Dembski's) wiki article bio" And have you checked your correction since then? Maybe it will stick, but wiki is notorious for spreading false propaganda about ID, and then fighting tooth and nail to prevent corrections from being made:
Wikipedia's Tyranny of the Unemployed - David Klinghoffer - June 24, 2012 Excerpt: PLoS One has a highly technical study out of editing patterns on Wikipedia. This is of special interest to us because Wikipedia's articles on anything to do with intelligent design are replete with errors and lies, which the online encyclopedia's volunteer editors are vigilant about maintaining against all efforts to set the record straight. You simply can never outlast these folks. They have nothing better to do with their time and will always erase your attempted correction and reinstate the bogus claim, with lightning speed over and over again. ,,, on Wikipedia, "fact" is established by the party with the free time that's required to wear down everyone else and exhaust them into submission. The search for truth (on wikipedia) yields to a tyranny of the unemployed. http://www.evolutionnews.org/2012/06/wikipedias_tyra061281.html
bornagain77
NL and CR, to focus in on this:
“Just because one can observe beneficial mutations in the lab or in nature, that doesn’t mean the pick of DNA alteration was random among all possible alterations. It only means that DNA transformed in a beneficial manner in a given amount of time, but implies nothing about nature of guidance (intelligently guided or random).”
NL, it seems to me you are laboring under the illusion that there is far more evidence for 'beneficial' mutations than there actually is. Dr. Behe, a short while back, did a survey of the literature over the past four decades and found:
“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/
And Dr. Behe, in his book 'Edge Of Evolution', and in an 'updated' lecture which he recently gave here,,,
What are the Limits of Darwinism? A Presentation by Dr. Michael Behe at the University of Toronto - November 15th, 2012 - video http://www.youtube.com/watch?v=V_XN8s-zXx4
,,,points out that in a survey of all HIV and Malaria adaptations in the wild, which greatly outclass the opportunities for adaptations (mutational firepower) of all microorganisms seen in the lab, or the mutational firepower for all higher lifeforms on earth combined for millions of years,,,
A review of The Edge of Evolution: The Search for the Limits of Darwinism The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have "invented" little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155). http://creation.com/review-michael-behe-edge-of-evolution
Dr. Behe states in The Edge of Evolution on page 135:
"Generating a single new cellular protein-protein binding site (in other words, generating a truly beneficial mutational event that would actually explain the generation of the complex molecular machinery we see in life) is of the same order of difficulty or worse than the development of chloroquine resistance in the malarial parasite." "The immediate, most important implication is that complexes with more than two different binding sites-ones that require three or more proteins-are beyond the edge of evolution, past what is biologically reasonable to expect Darwinian evolution to have accomplished in all of life in all of the billion-year history of the world. The reasoning is straightforward. The odds of getting two independent things right are the multiple of the odds of getting each right by itself. So, other things being equal, the likelihood of developing two binding sites in a protein complex would be the square of the probability for getting one: a double CCC, 10^20 times 10^20, which is 10^40. There have likely been fewer than 10^40 cells in the world in the last 4 billion years, so the odds are against a single event of this variety in the history of life. It is biologically unreasonable." - Michael Behe - The Edge of Evolution - page 146
It is also important to note that no limit was placed on the type of mutations that were allowed to be considered:
Michael Behe, The Edge of Evolution, pg. 162 Swine Flu, Viruses, and the Edge of Evolution "Indeed, the work on malaria and AIDS demonstrates that after all possible unintelligent processes in the cell--both ones we've discovered so far and ones we haven't--at best extremely limited benefit, since no such process was able to do much of anything. It's critical to notice that no artificial limitations were placed on the kinds of mutations or processes the microorganisms could undergo in nature. Nothing--neither point mutation, deletion, insertion, gene duplication, transposition, genome duplication, self-organization nor any other process yet undiscovered--was of much use." http://www.evolutionnews.org/2009/05/swine_flu_viruses_and_the_edge.html
Moreover, it is found that combining supposedly beneficial mutations leads to what is called 'negative epistasis':
Epistasis between Beneficial Mutations - July 2011 Excerpt: We found that epistatic interactions between beneficial mutations were all antagonistic—the effects of the double mutations were less than the sums of the effects of their component single mutations. We found a number of cases of decompensatory interactions, an extreme form of antagonistic epistasis in which the second mutation is actually deleterious in the presence of the first. In the vast majority of cases, recombination uniting two beneficial mutations into the same genome would not be favored by selection, as the recombinant could not outcompete its constituent single mutations. https://uncommondesc.wpengine.com/epigenetics/darwins-beneficial-mutations-do-not-benefit-each-other/ Mutations : when benefits level off - June 2011 - (Lenski's e-coli after 50,000 generations; which is approx. equivalent to 1 million years of supposed human evolution) Excerpt: After having identified the first five beneficial mutations combined successively and spontaneously in the bacterial population, the scientists generated, from the ancestral bacterial strain, 32 mutant strains exhibiting all of the possible combinations of each of these five mutations. They then noted that the benefit linked to the simultaneous presence of five mutations was less than the sum of the individual benefits conferred by each mutation individually. http://www2.cnrs.fr/en/1867.htm?theme1=7
In fact, I have yet to see any unambiguous evidence that even a single novel functional protein has been created in life (whether or not the mutations are considered to be intelligently guided or randomly guided). It is from such consistent findings for all adaptations considered (many of which I have not discussed here) that the foundational overriding principle, in the life sciences, for explaining sub-speciation is Genetic Entropy. Genetic Entropy is a rule (much like Behe's 'First Rule') which draws its foundation in science from the twin pillars of the Second Law of Thermodynamics and the Law of Conservation of Information (Dembski, Marks, Abel), and the principle can be stated something like this:
"All beneficial adaptations away from a 'parent' species for a sub-species, which increase fitness to a particular environment, will always come at a loss of the optimal functional information that was originally present in the parent species genome."
Wolf-Ekkehard Lönnig, who has vastly greater knowledge of the life sciences than I do, states the principle, or 'first rule', for all biological adaptations much more succinctly than I can here:
A. L. Hughes's New Non-Darwinian Mechanism of Adaption Was Discovered and Published in Detail by an ID Geneticist 25 Years Ago - Wolf-Ekkehard Lönnig - December 2011 Excerpt: The original species had a greater genetic potential to adapt to all possible environments. In the course of time this broad capacity for adaptation has been steadily reduced in the respective habitats by the accumulation of slightly deleterious alleles (as well as total losses of genetic functions redundant for a habitat), with the exception, of course, of that part which was necessary for coping with a species' particular environment....By mutative reduction of the genetic potential, modifications became "heritable". -- As strange as it may at first sound, however, this has nothing to do with the inheritance of acquired characteristics. For the characteristics were not acquired evolutionarily, but existed from the very beginning due to the greater adaptability. In many species only the genetic functions necessary for coping with the corresponding environment have been preserved from this adaptability potential. The "remainder" has been lost by mutations (accumulation of slightly disadvantageous alleles) -- in the formation of secondary species. http://www.evolutionnews.org/2011/12/a_l_hughess_new053881.html
NL, I could go much deeper into this particular area, pointing out interesting stuff, but for now I just wanted to point out the main fact to you that neither you nor Darwinists have ANY evidence for any non-trivial functional complexity being generated in life, whether or not these processes for adaptation are presupposed to be random or intelligently guided!,,, It appears, at least to me, that God wants no confusion whatsoever as to where life originates from! Verse and music:
John 1:1-4 In the beginning was the Word, and the Word was with God, and the Word was God. The same was in the beginning with God. All things were made by him; and without him was not any thing made that was made. In him was life; and the life was the light of men. Hillsong - Mighty to Save - With Subtitles/Lyrics http://www.youtube.com/watch?v=-08YZF87OBQ
bornagain77
Chance Ratcliff #216: "You rightly suggest that given the probabilities involved, intelligent involvement is a better explanation than random chance, with a ~95% likelihood of a guided fix over a lucky set of throws. Dembski is much more cautious, suggesting a partition of 10^-150 for a design inference." I was wondering whether anyone would notice, since the example was deliberately picked to just cross the 95% threshold, which is considered good enough in epidemiology, psychology,... Dembski is surely playing it safe there. Not that it helped much. I just corrected his wiki article bio, in which someone was putting words in his mouth by attributing "intelligent mind" to him, when he actually wrote "intelligent cause" (which is fine, although "intelligent process" would be even better for advancing ID into a legitimate science). His choice of words is more guarded than Meyer's, though. nightlight
nightlight @217: Much ado about nothing perhaps? imply (ɪmˈplaɪ) — vb, -plies, -plying, -plied 1. to express or indicate by a hint; 2. to suggest or involve as a necessary consequence; 3. logic: to enable (a conclusion) to be inferred ----- So either 1 or 3 would be appropriate in this context. Seems like he is using the word in a reasonable way. Eric Anderson
StephenB #207: "Given the distinction I have just made, why would you continue to conflate the scientific demonstration with the philosophical implication of that demonstration by using the word "scientific demonstration" to characterize the implication?" As far as I can see, he is conflating the terms infer and imply, using them interchangeably. Here is a Google Books link to his article with the terms "intelligent mind" and "infer" highlighted (extending into the follow-up pages). On page 93, he first gives an example in which we "infer" "intelligent agents" when seeing human artifacts, then follows up saying in the next sentence: "Similarly, the specifically arranged nucleotide sequences,[...] imply the past action of an intelligent mind, even if such mental agency cannot be directly observed." Both sentences are deliberately structured to exactly parallel each other in form and phrasing in order to amplify the equivalence of the two conclusions. In any case, that seems a very weak attempt at a defense on finer semantic ambiguities, at best. For example, in physics and math papers (his major was physics, after all), "A implies B" is used to indicate that B follows logically from A (equations, theorems, etc). I can't think of an instance where I have seen "infer" used in such a context. Using "infer" wouldn't even make sense here since with "infer" the subject is the agency doing the inference, while "equation A implies eq. B" is an impersonal form of logical implication, which is common in scientific literature, meaning precisely that B logically follows from A. That's how I read the cited sentence and, apparently, so does he, where "implies" was used in the impersonal form of logical implication (you can't use "infer" in his sentence; it wouldn't make sense since he has no subject who could do the "inferring" in that sentence). nightlight
nightlight, At #191 you wrote,
"You calculate the size of event space N when rolling 3 dice, which is N=6^3=216 combinations 1=(1,1,1), 2=(1,1,2),… 216=(6,6,6). The odds of not getting (1,1,1) in 1 throw are 215/216. The odds of not getting (1,1,1) in 2 throws are (215/216)^2,… the odds of not getting (1,1,1) in 10 throws are (215/216)^10 = 95.47 %, hence the chance of achieving (1,1,1) in 10 of fewer tries is 100-95.47=4.53 %. So, a random process couldn’t be getting (1,1,1) in 10 or fewer throws 50% of the time, but would get it only 4.54 % of time. Hence, the process was intelligently guided."
I just wanted to point out that here you are making a design inference. ;) You rightly suggest that given the probabilities involved, intelligent involvement is a better explanation than random chance, with a ~95% likelihood of a guided fix over a lucky set of throws. Dembski is much more cautious, suggesting a partition of 10^-150 for a design inference. This is done to rule out chance events, and is very conservative with respect to inferring design. Chance Ratcliff
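The arithmetic in the quoted dice example checks out. Here is a minimal Python sketch (an illustration added for the reader, not from the thread) that computes the closed form and also runs a quick Monte Carlo estimate:

```python
import random

# Closed form: chance of seeing (1,1,1) at least once in 10 throws of three dice
p_single = 1 / 216                      # probability of (1,1,1) on a single throw
p_hit    = 1 - (1 - p_single) ** 10     # ~0.0453, i.e. about 4.53%
print(f"closed form: {p_hit:.4%}")

# Monte Carlo check of the same quantity
trials, hits = 100_000, 0
for _ in range(trials):
    throws = [tuple(random.randint(1, 6) for _ in range(3)) for _ in range(10)]
    if (1, 1, 1) in throws:
        hits += 1
print(f"simulated:   {hits / trials:.4%}")
```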
nightlight @191,
"Just because one can observe beneficial mutations in the lab or in nature, that doesn’t mean the pick of DNA alteration was random among all possible alterations. It only means that DNA transformed in a beneficial manner in a given amount of time, but implies nothing about nature of guidance (intelligently guided or random)."
I agree with that. A genetic change occurring, even a simple one, does not imply that the change was random with respect to the entire genome. This touches on doubts I've been having about replication errors in general, that the genome space in prokaryotes, specifically E. coli, is vast at 4.6 million base pairs. Unless my math is wrong, this indicates that any specific point mutation occurs in a space of 20^(4.6*10^6) possible substitutions at the expression level. I may well be missing something important, but a point mutation caused by a replication error, say a single one that confers some selectable advantage given a specific environmental factor, should not be expected to occur twice in the age of the universe, if even once. So yes, this seems to present a problem for neo-Darwinism where a single point mutation occurs independently more than once in different strains. At least to me, this is suggestive of targeted mutations as opposed to strictly random ones. This doesn't mean randomness isn't a factor; it just suggests that perhaps some areas of the genome can vary at higher rates than others, and yes, that is indicative of intelligence, because it hints at a targeted search. I don't think it is a problem for ID though. Just because ID doesn't take issue with the possibility of a random mutation introducing a net positive effect does not mean that it steps onto a slippery slope of accepting any genomic changes as purely random. Actually, as with "junk" DNA, the presumed "random" factor of biological change presents a gap that appears to be shrinking as new discoveries come to light. I'd appreciate it if you or anyone watching could check my assumptions above regarding the size of the genome space with respect to uniform random mutations. Chance Ratcliff
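One way to put rough numbers on the question raised above is a short Python sketch; the 4.6 million base-pair figure is the commenter's own, the 20-letter and 4-letter alphabets are simply two readings of that comment, and the script is offered only as an arithmetic sanity check, not as a settled answer:

import math

# Arithmetic sanity check of the figures in the comment above. The 4.6e6 bp genome
# size is the commenter's figure; which count is the relevant one is exactly the
# question being asked.
genome_bp = 4_600_000

# Full sequence space: 20^N as written above (amino-acid / "expression" level),
# and 4^N as the nucleotide-alphabet alternative. Both are astronomically large.
log10_space_20 = genome_bp * math.log10(20)   # ~6.0 million digits
log10_space_4  = genome_bp * math.log10(4)    # ~2.8 million digits

# By contrast, the number of distinct single-point substitutions available in one
# genome (each position can change to one of 3 other bases) is far smaller.
single_point_substitutions = 3 * genome_bp

print(f"log10(20^N) ~ {log10_space_20:,.0f}")
print(f"log10(4^N)  ~ {log10_space_4:,.0f}")
print(f"possible single-point substitutions: {single_point_substitutions:,}")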
Chance Ratcliff #208: nightlight: they needlessly concede that "random" mutation completely explains "micro-evolution". Are you able to support that assertion by providing a relevant quote from a major ID proponent? That was a conclusion based on the absence in any ID literature or talks of any challenges to the "randomness" attribute of spontaneous mutations, and on the silent going-along with the ND leap of logic by which any observed (in lab or nature) instance of microevolution is the result of random mutation (or a sequence of such combined with NS). The challenge would have to ask those claiming "randomness" to show their event space and the weights they assigned to all possible alterations of DNA in order to conclude it was random rather than due to some 'intelligent process' (reasoning analogous to that in the dice example). If you have seen a challenge on the above issues, you are welcome to bring it up. This is not the case. Point mutations caused by copying errors occur during DNA replication. This fact is made implicit by the presence of multiple error correction mechanisms for replication which each have a less-than-perfect effect in correcting errors. For changes that can be wrought by a point mutation, it's acceptable to presume a point mutation as the cause, when no other cause is implicated. The issue is when such a copying error is credited for the beneficial effects (in order to boost the mythical powers of the RM + NS feedback loop). A counter-example to the validity of that type of crediting, from technological evolution -- a programmer may get a copying error when transferring source code from one machine to another. Who should one credit for the subsequently observed improvement in the repaired code -- the copying error, or the intelligent activity of the programmer who detected the error and rewrote the affected code, improving it in the process? Copying/reproduction errors are also observed in the evolution of technologies. But one would really have to strain to find a case when the copying error is credited for the subsequent evolutionary improvements of the product. The improvement is always the result of the corrective actions by the intelligent agency (humans), rather than of the random damage. Any such error is at most a trigger (for the intelligent agency), which should only reinforce the conclusion that an intelligent agency was active in generating the resulting beneficial novelties. After all, if you snap a thread on a sweater, it is not going to get fixed, let alone improved, without some intelligent agency getting involved. Any subsequent improvement of a sweater after a randomly snapped thread only amplifies the evidence of activity of the intelligent agency. Yet, neo-Darwinians crow when they find beneficial mutations, as if it proves the absence of such intelligent agency. And ID proponents routinely concede in such cases (usually by silence, by absence of challenge), when in fact such beneficial mutations are in favor of ID and they should be the ones crowing about them. The root cause of this paradoxical situation is precisely the key concession about the randomness of any observed mutations. Interestingly, James Shapiro is often much more critical about such gratuitous attributions of randomness to beneficial mutations than the ID proponents are. nightlight
BA77 @209, thanks for those links. I had seen the video before, probably because you had generously provided it previously, but the Biota Curve article was new to me, and I enjoyed it. Phenotypic plasticity looks to provide a rich avenue of biological research, and is yet something else that makes the neo-Darwinian mechanism of variation -- random mutation -- look powerless to do much of anything significant. Chance Ratcliff
Copy paste of the corrected typo would have helped... nightlight
oops, sorry, mistyped the closing a-tag in #210. Tag edited, KF nightlight
kairosfocus #194 Pardon, but - with all due respect - ignoring cogent correction from several sources then repeating the same talking points ad nauseam will not work at UD. I think the two of us have for whatever reason hit a semantic wall and there was no progress over several exchanges. Hence I chose to leave the last word with you, if only to avoid boring the rest of the guys here with what was increasingly turning into a semantic nitpick tiff. For example, on "redness" I was talking about the qualia of red (the "hard problem of consciousness"), while you were talking about the physiology of color perception. Hence there was nothing to concede or stand corrected on, and no change in what I was saying. Similarly, on the self-evident schema of any natural science S:

(M) - Model space (formalism & algorithms)
(E) - Empirical procedures & facts of the "real" world
(O) - Operational rules mapping between (M) and (E)

The (M) component is a generator of meaningful statements in S. The "statements" can be numbers, words, symbols, pictures,... The generator must follow the rules of logic (e.g. it shouldn't produce mutually contradictory statements). Output obtained within (M) by one practitioner of S should be reproducible within (M) by any other practitioner of S, i.e. the procedures of (M) are algorithmic (one could conceive of a computer checking the output or generating it, i.e. the operation in (M) should in principle be programmable and executable on a computer). Component (E) is, analogously, a procedural system and technology for extracting the data relevant to S from the "real" world. Component (O) comprises the procedures (algorithms) which map between statements produced by (M) and empirical facts produced by (E), allowing for falsification of statements by (M). In most cases the mappings by (O) are implicit, accomplished by simply using the same name for the corresponding elements of (M) and (E). There is nothing of substance that can be argued about this basic schema, which consists mostly of definitions and labels (perhaps the priority of various requirements may be reordered), since it is self-evident and the only issues one may have are a matter of taste. I happen to like it since it is a very useful analytical tool for troubleshooting and disentangling otherwise perplexing semantic tangles such as those often encountered in interpretations of Quantum Theory (that's the literature where I picked this scheme up from). You objected to the "algorithmic" attribute of (M), a term which you define more narrowly than I do (my semantics classify as algorithmic any procedure which can be programmed into a computer or an android robot or any other computer-controlled device, if it requires actions in the real world). Hence, there wasn't anything to correct or retract about any of that either. If you wish to have the last word, that's fine with me, I will leave it at what was said above. Tag edited, KF nightlight
Chance Ratcliff you might be interested in this: Phenotypic Plasticity - Lizard cecal valve (cyclical variation)- video http://www.youtube.com/watch?v=zEtgOApmnTA Lizard Plasticity - March 2013 http://biota-curve.blogspot.com/2013/03/lizard-plasticity.html bornagain77
nightlight @191, In #178 I suggested that you provide a quote to support your assertion that ID concedes that random mutations can account for all of microevolution. Specifically you asserted,
That is actually another common misstep by ID proponents — they needlessly concede that “random” mutation completely explains “micro-evolution”.
Are you able to support that assertion by providing a relevant quote from a major ID proponent? You went on to say,
Hence, by conceding the “randomness” attribute of observed spontaneous mutations, ID proponents are setting themselves up to have to accept any genetic novelty that can be observed to happen spontaneously in the lab or in nature as being the result of “random” mutation (such as the rapid adaptations of those isolated lizards on an Adriatic island recently).
This is not the case. Point mutations caused by copying errors occur during DNA replication. This fact is made implicit by the presence of multiple error correction mechanisms for replication which each have a less-than-perfect effect in correcting errors. For changes that can be wrought by a point mutation, it's acceptable to presume a point mutation as the cause, when no other cause is implicated. For changes that are the likely result of a signalling process, such as a dietary shift in lizards, and which require alterations much more significant than a couple of point mutations can achieve, there's no good reason to credit point mutations. It does not follow that accepting random mutations as a potential cause of small changes that can occur as the result of replication errors means that any changes that do occur, no matter how rapid or how complex, are the result of random processes. Chance Ratcliff
nightlight
I cringe because mixing a perfectly valid empirical observation (the ID design detection) allows those who don’t like the philosophical or religious implications, to disqualify it as non-science since it claims to infer the “mind” or “consciousness”, which is not scientifically valid (within present natural science which lacks a model of ‘mind stuff’ and objective empirical way to detect it).
There is no reason why a philosopher of science, who is commenting on the relationship between science and philosophy, should remain silent about the connection simply because anti-ID partisans will distort it. William Dembski did the same thing when he compared design thinking to the "Logos theory of the Gospel." ID proponents cannot prevent self-serving critics from willfully misrepresenting what they say. In his final decision at the Dover trial, Judge John Jones deceptively mischaracterized Behe's observation that ID is "consistent with religion" and reframed it as ID "depends on religion" in order to justify his false claim that ID is a faith-based methodology. In fact, sometimes B is implied by (or consistent with) A even though B does not strictly follow from A in a scientific sense. We need more public education about the relationship between philosophy, science, and theology, not less. More importantly, we need more public education about the difference between a scientific inference and its religious/philosophical implications.
The only thing that ID design detection in biology implies is that intelligent process (or agent) produced those artifacts, not whether such process or agent had a mind or consciousness as Meyer claims.
As long as Meyer is using the word "imply" rather than the word "infer," he is making a perfectly valid observation. In each known case of design, a mind (not simply an agent) was, indeed, present; so it is reasonable, though not scientifically demonstrable, to suggest that a mind was responsible for biological design. There is nothing new about any of this. From the expanding universe, for example, one can only "infer" a Big Bang, but the "implication" of a First Cause Creator is obviously present. That is why non-Theists first attacked the theory. The "implications" were, and are, obvious.
Neither Stephen Meyer, nor anyone else, has any way of demonstrating scientifically even that his wife has “mind”, who is in front of him and telling him she has it, let alone claiming to infer that something no one has ever observed has it.
Given the distinction I have just made, why would you continue to conflate the scientific demonstration with the philosophical implication of that demonstration by using the word "scientific demonstration" to characterize the implication? StephenB
BA77, I don't think that's what he's (she's?) done. First, he's separated "planckian networks" from "space-time-matter-energy" by arguing that these behaviors of phenomena are orchestrated by intelligence and by pointing out that the "intelligent behavior" inherent in "planckian networks" is like the behavior associated with gravity: it's a given. IOW, the motivating force or property of "planckian networks" cannot be "caused by" space-time-matter-energy, because the behaviors and commodities we call space, time, matter and energy are generated and orchestrated by planckian networks toward a "goal" that is an extension of the "given" intelligent nature of planckian networks. Second, he's deliberately avoided (as far as I can tell) making any claims about "consciousness" other than arguing that the term is unnecessary and problematic for an ID argument or explanation. It is the "given" nature of planckian networks that philosophically require willful purpose (demiurge), but do not require it for a scientific description of what occurs. Why an intelligent process pursues X is one thing; the process by which it achieves X is another. NL is offering a scientific description of how intelligence moves matter down a path towards X, not why - ultimately - intelligent planckian networks with such a goal should exist in the first place. IMO, consciousness is the intersection of will and intelligence; will directs computational intelligence at all/various levels to organize matter & energy giving form to the void, which provides a perspective where consciousness can reside. I'm just organizing these thoughts on the fly here as I attempt to process what NL is saying. I could be completely mistaken about what he means, but frankly I'm enjoying tooling around with some of the concepts here. William J Murray
Nightlight: I played through grad school, got to USCF 2100 (expert rating).
I’m talking about a totally different level of chess. I was a professional tournament player for many years. I have known the top players of the world and many still know me. The point I was making is this. These days everyone uses computers in order to produce opening novelties while preparing for upcoming games. I have co-analyzed opening positions with the best. And they all know when the computer assessment is of importance and when it is not. They all ignore computer assessments in certain kinds of strategic positions. Many times computer moves are laughed at by strong players – ridiculed. Only amateurs think that computers master all kinds of positions. So again, when ‘overview’ – which springs from consciousness – is required, there is nobody home in the computer. The computer is good at calculating combinations and nothing else. Box
Of semi related note: Do Physical Laws Make Things Happen? - Stephen L. Talbott Excerpt: While there are many complex and diverse movements of mind as we speak, it is fair to say very generally that we first have an idea, inchoate though it may be, and then we seek to capture and clothe this idea in words. Each word gains its full meaning — becomes the word it now is — through the way it is conjoined with other words under the influence of the originating idea. The word simply didn't exist as this particular word before — as a word with these nuances of meaning. So an antecedent whole (an idea) becomes immanent in and thereby transforms and constitutes its parts (words), making them what they are. In terms of active agency, it is less that the parts constitute the whole than the other way around. http://www.natureinstitute.org/txt/st/mqual/ch03.htm#fn3.0 bornagain77
But alas William, what prevents his 'science' from collapsing into epistemological failure since he has, as far as I can tell, defined consciousness as co-existent with space-time matter-energy? Psalm 139:16 Your eyes saw my unformed substance; in your book were written, every one of them, the days that were formed for me, when as yet there was none of them. bornagain77
BA77, I find his bottom-up, scientific description of "how intelligence works and orders the universe" fully compatible with a top-down philosophical perspective if one views the former as the 3-dimensional sequenced view of a 4-dimensional eternal state. He's not saying that consciousness or free will doesn't exist; he's saying there is no need for it in a scientific, computational intelligence explanation of fine-tuning and the advent of life. That doesn't mean consciousness/free will isn't necessary philosophically; it's just not a necessary part of the scientific description of how intelligence acts in the world. The anti-IDists are always clamoring for an agent-to-matter mechanism for intelligent manipulation; NL has just served one up. Unless I'm misunderstanding him. He can certainly correct me if I'm off-base with my interpretation. William J Murray
WJM (198) One of us has invented a new theory :) If it turns out that I am the creative one, I will immediately retract 'my invention'. Box
William J Murray???
That’s not really what I’m getting from reading NL’s material.
HUH??? Is there some prequel to this Matrix movie that you know about that explains all the gaping holes in his plot? bornagain77
@ Correction: The title should have been 'Nightline’s panpsychism in layman terms'. Box
Box, That's not really what I'm getting from reading NL's material. Sure, a "small section" of the planckian network might be appropriately termed "primitive" as far as intelligence is concerned, but then so could a few brain cells and synapses. Examined from the perspective of the breadth of the cosmos, the full planckian network represents a kind of absolute intelligence, with as much computational power as there can exist as far as the universe is concerned. NL has already said that such planckian networks essentially exist "outside" (or below?) the physical space-time continuum (correct me if I'm misrepresenting, NL), so we can look at the planckian network (universal mind) from the perspective of omniscient intelligence that exists as a whole outside of space and time, with the universal computational power to generate and maintain what would be - to us - miraculous rearrangements of matter - such as the advent of a fine-tuned universe and life. Taken in this context, this universal mind is eternal, omniscient, omnipotent and omnipresent in their logically acceptable forms - what can be known, the planckian universal mind knows; what can be done, it can do. From its "eternal" or "outside space-time" perspective, the "evolution" of mind from bottom up is really no more than an eternally existent gradient through a 4th physically existent dimension we experience as "time". As an analogy, a person on a moving walkway in a long narrow corridor, who is unaware that he is moving, passes from a dark area through increasing light to a well-lit area; to him, it seems that the light got more and more bright as time went by. But both the light and dark areas always exist at opposite ends of the hallway. IOW, NL's bottom-up explanation is well suited for scientific purposes, but it does not contradict the philosophical perspective of reality as a top-down ordered system created by god. The "bottom-up" perspective is just what the progress of the universe looks like from the perspective of time-bound observers like us. William J Murray
Nightline’s pantheism in layman terms: It is a bottom-up explanation of reality. Already at the smallest scale (elemental entities) we find primitive intelligence and consciousness. Each level designs and self-organizes the next level. Each level is more conscious (composite consciousness) and more intelligent than the previous level. Everything proceeds unsupervised and by self-learning. At the ground level we find ‘Planckian elemental entities’, who are conscious and intelligent in a primitive way. They design and self-organize into the next level. Planckian networks, who are more intelligent (additive intelligence) and have a broader consciousness (composite consciousness). They design and self-organize into several distinct forms of the next level. The next level is photons, quarks etc. These are in fact super intelligent and very conscious entities only comparable to super computers. Of course they also design and organize themselves into various next levels. As we all know it can go to all sorts of directions from here. One possibility is that photons and quarks design and self-organize into biochemical networks. This is of course a much higher level of consciousness and intelligence. Biochemical networks are able to run internal models to invent body plans. The next step is self-organization into human beings. Any questions? Box
NL:
StephenB #184 "Why would you cringe? The process by which the scientific inference to design is made is not synonymous with the philosophical/religious implications that may follow from it. There are no indicators for “mind” or “consciousness” in ID methodology-only the inferred presence of an intelligent agent." NL: "I cringe because mixing a perfectly valid empirical observation (the ID design detection) allows those who don’t like the philosophical or religious implications, to disqualify it as non-science since it claims to infer the “mind” or “consciousness”, which is not scientifically valid (within present natural science which lacks a model of ‘mind stuff’ and objective empirical way to detect it).",,
Yet,,,
"It will remain remarkable, in whatever way our future concepts may develop, that the very study of the external world led to the scientific conclusion that the content of the consciousness is the ultimate universal reality" - Eugene Wigner - (Remarks on the Mind-Body Question, Eugene Wigner, in Wheeler and Zurek, p.169) 1961 - received Nobel Prize in 1963 for 'Quantum Symmetries' Eugene Wigner receives his Nobel Prize for Quantum Symmetries - video 1963 http://www.nobelprize.org/mediaplayer/index.php?id=1111
Here is Wigner commenting on the key experiment that led Wigner to his Nobel Prize winning work on quantum symmetries,,,
Eugene Wigner Excerpt: When I returned to Berlin, the excellent crystallographer Weissenberg asked me to study: why is it that in a crystal the atoms like to sit in a symmetry plane or symmetry axis. After a short time of thinking I understood:,,,, To express this basic experience in a more direct way: the world does not have a privileged center, there is no absolute rest, preferred direction, unique origin of calendar time, even left and right seem to be rather symmetric. The interference of electrons, photons, neutrons has indicated that the state of a particle can be described by a vector possessing a certain number of components. As the observer is replaced by another observer (working elsewhere, looking at a different direction, using another clock, perhaps being left-handed), the state of the very same particle is described by another vector, obtained from the previous vector by multiplying it with a matrix. This matrix transfers from one observer to another. http://www.reak.bme.hu/Wigner_Course/WignerBio/wb1.htm
Further notes:
“No, I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.” (Max Planck, as cited in de Purucker, Gottfried. 1940. The Esoteric Tradition. California: Theosophical University Press, ch. 13). “Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.” (Schroedinger, Erwin. 1984. “General Scientific and Popular Papers,” in Collected Papers, Vol. 4. Vienna: Austrian Academy of Sciences. Friedr. Vieweg & Sohn, Braunschweig/Wiesbaden. p. 334.)
bornagain77
NL, there are reasons why you can't exclude Theistic 'mind stuff' from science. One reason why you can't exclude Theistic 'mind stuff' from science is that the foundation of modern science is built upon the epistemology of Theistic 'mind stuff'. Particularly it is built upon the epistemology of the Christian version of Theistic 'Mind stuff':
Epistemology – Why Should The Human Mind Even Be Able To Comprehend Reality? – Stephen Meyer - video – (Notes in description) http://vimeo.com/32145998 Jerry Coyne on the Scientific Method and Religion - Michael Egnor - June 2011 Excerpt: The scientific method -- the empirical systematic theory-based study of nature -- has nothing to so with some religious inspirations -- Animism, Paganism, Buddhism, Hinduism, Shintoism, Islam, and, well, atheism. The scientific method has everything to do with Christian (and Jewish) inspiration. Judeo-Christian culture is the only culture that has given rise to organized theoretical science. Many cultures (e.g. China) have produced excellent technology and engineering, but only Christian culture has given rise to a conceptual understanding of nature. http://www.evolutionnews.org/2011/06/jerry_coyne_on_the_scientific_047431.html The Origin of Science Jaki writes: Herein lies the tremendous difference between Christian monotheism on the one hand and Jewish and Muslim monotheism on the other. This explains also the fact that it is almost natural for a Jewish or Muslim intellectual to become a patheist. About the former Spinoza and Einstein are well-known examples. As to the Muslims, it should be enough to think of the Averroists. With this in mind one can also hope to understand why the Muslims, who for five hundred years had studied Aristotle's works and produced many commentaries on them failed to make a breakthrough. The latter came in medieval Christian context and just about within a hundred years from the availability of Aristotle's works in Latin.. As we will see below, the break-through that began science was a Christian commentary on Aristotle's De Caelo (On the Heavens).,, Modern experimental science was rendered possible, Jaki has shown, as a result of the Christian philosophical atmosphere of the Middle Ages. Although a talent for science was certainly present in the ancient world (for example in the design and construction of the Egyptian pyramids), nevertheless the philosophical and psychological climate was hostile to a self-sustaining scientific process. Thus science suffered still-births in the cultures of ancient China, India, Egypt and Babylonia. It also failed to come to fruition among the Maya, Incas and Aztecs of the Americas. Even though ancient Greece came closer to achieving a continuous scientific enterprise than any other ancient culture, science was not born there either. Science did not come to birth among the medieval Muslim heirs to Aristotle. …. The psychological climate of such ancient cultures, with their belief that the universe was infinite and time an endless repetition of historical cycles, was often either hopelessness or complacency (hardly what is needed to spur and sustain scientific progress); and in either case there was a failure to arrive at a belief in the existence of God the Creator and of creation itself as therefore rational and intelligible. Thus their inability to produce a self-sustaining scientific enterprise. If science suffered only stillbirths in ancient cultures, how did it come to its unique viable birth? The beginning of science as a fully fledged enterprise took place in relation to two important definitions of the Magisterium of the Church. The first was the definition at the Fourth Lateran Council in the year 1215, that the universe was created out of nothing at the beginning of time. 
The second magisterial statement was at the local level, enunciated by Bishop Stephen Tempier of Paris who, on March 7, 1277, condemned 219 Aristotelian propositions, so outlawing the deterministic and necessitarian views of creation. These statements of the teaching authority of the Church expressed an atmosphere in which faith in God had penetrated the medieval culture and given rise to philosophical consequences. The cosmos was seen as contingent in its existence and thus dependent on a divine choice which called it into being; the universe is also contingent in its nature and so God was free to create this particular form of world among an infinity of other possibilities. Thus the cosmos cannot be a necessary form of existence; and so it has to be approached by a posteriori investigation. The universe is also rational and so a coherent discourse can be made about it. Indeed the contingency and rationality of the cosmos are like two pillars supporting the Christian vision of the cosmos. http://www.columbia.edu/cu/augustine/a/science_origin.html
Please note the 'contingency' pillar of modern science NL. For in your relabeled version of science (where Theistic 'mind stuff' is not considered scientific) you have what you have called a 'last Russian doll' which can't be opened at the beginning of the universe. Well NL, science, as it is properly practiced (not as it is practiced in your imagination), could care less that you don't want to look in that 'last Russian doll', and demands that a rational explanation be given for why the universe exists. And it is here, at the beginning of the universe, that the epistemological strength of Theism comes shining through and the abject epistemological failure of all other worldviews, which shun such Theistic 'mind stuff', is exposed:
The Absurdity of Inflation, String Theory and The Multiverse - Dr. Bruce Gordon - video http://vimeo.com/34468027 Dr. Gordon's astute observation in his last powerpoint is here: The End Of Materialism? * In the multiverse, anything can happen for no reason at all. * In other words, the materialist is forced to believe in random miracles as a explanatory principle. * In a Theistic universe, nothing happens without a reason. Miracles are therefore intelligently directed deviations from divinely maintained regularities, and are thus expressions of rational purpose. * Scientific materialism is (therefore) epistemically self defeating: it makes scientific rationality impossible.
Further notes:
BRUCE GORDON: Hawking’s irrational arguments – October 2010 Excerpt: Rather, the transcendent reality on which our universe depends must be something that can exhibit agency – a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. This is what “breathes fire into the equations and makes a universe for them to describe.” Anything else invokes random miracles as an explanatory principle and spells the end of scientific rationality.,,, Universes do not “spontaneously create” on the basis of abstract mathematical descriptions, nor does the fantasy of a limitless multiverse trump the explanatory power of transcendent intelligent design. What Mr. Hawking’s contrary assertions show is that mathematical savants can sometimes be metaphysical simpletons. Caveat emptor. http://www.washingtontimes.com/news/2010/oct/1/hawking-irrational-arguments/ Godel and Physics - John D. Barrow Excerpt (page 5-6): "Clearly then no scientific cosmology, which of necessity must be highly mathematical, can have its proof of consistency within itself as far as mathematics go. In absence of such consistency, all mathematical models, all theories of elementary particles, including the theory of quarks and gluons...fall inherently short of being that theory which shows in virtue of its a priori truth that the world can only be what it is and nothing else. This is true even if the theory happened to account for perfect accuracy for all phenomena of the physical world known at a particular time." Stanley Jaki - Cosmos and Creator - 1980, pg. 49 Kurt Gödel - Incompleteness Theorem - video http://www.metacafe.com/w/8462821 Taking God Out of the Equation – Biblical Worldview – by Ron Tagliapietra – January 1, 2012 Excerpt: Kurt Gödel (1906–1978) proved that no logical systems (if they include the counting numbers) can have all three of the following properties. 1. Validity . . . all conclusions are reached by valid reasoning. 2. Consistency . . . no conclusions contradict any other conclusions. 3. Completeness . . . all statements made in the system are either true or false. The details filled a book, but the basic concept was simple and elegant. He summed it up this way: “Anything you can draw a circle around cannot explain itself without referring to something outside the circle—something you have to assume but cannot prove.” For this reason, his proof is also called the Incompleteness Theorem. Kurt Gödel had dropped a bomb on the foundations of mathematics. Math could not play the role of God as infinite and autonomous. It was shocking, though, that logic could prove that mathematics could not be its own ultimate foundation. Christians should not have been surprised. The first two conditions are true about math: it is valid and consistent. But only God fulfills the third condition. Only He is complete and therefore self-dependent (autonomous). God alone is “all in all” (1 Corinthians 15:28), “the beginning and the end” (Revelation 22:13). God is the ultimate authority (Hebrews 6:13), and in Christ are hidden all the treasures of wisdom and knowledge (Colossians 2:3). http://www.answersingenesis.org/articles/am/v7/n1/equation# The God of the Mathematicians – Goldman Excerpt: As Gödel told Hao Wang, “Einstein’s religion [was] more abstract, like Spinoza and Indian philosophy. 
Spinoza’s god is less than a person; mine is more than a person; because God can play the role of a person.” – Kurt Gödel – (Gödel is considered one of the greatest logicians who ever existed) http://www.firstthings.com/article/2010/07/the-god-of-the-mathematicians The Center Of The Universe Is Life - General Relativity, Quantum Mechanics, Entropy and The Shroud Of Turin - video http://vimeo.com/34084462
bornagain77
NL: Pardon, but -- with all due respect -- ignoring cogent correction from several sources then repeating the same talking points ad nauseam will not work at UD. Except to classify you in the category of the talking point pushers. Please, think again. KF kairosfocus
NL you state:
So, that’s a counter-example invalidating your and Dr Abel’s claims of impossibility of such natural processes, not an “experimental proof” of anything as you keep relabeling it.
So you have an 'example' of the null being falsified but you have no actual 'experimental proof' of the null being falsified? Such as say a single functional protein or a molecular machine arising by your 'neural network' method? How convenient! Seems to me you are the one doing some major relabeling as to what constitutes falsification in science. Shoot you have even relabeled all of science just so that it conveniently can't include any Theistic 'mind stuff' (or any 'random' Darwinian stuff) but just so happens to conveniently include your false idol MATRIX version of 'mind stuff'.,,, Sure must be nice to practice science in such a way that you can guarantee only your theory will be considered 'scientific' beforehand. bornagain77
PeterJ #190 nightlight: "Namely, if the natural process has some very simple intelligence front loaded, such as working like a neural network, then such intelligence is additive, hence it can accumulate any level of intelligence needed to explain the intelligence implied by the biological artifacts." The `simple intelligence' you describe as being front loaded, why should it be `simple'? By Ockham's razor, the simplest model that suffices to explain the phenomenon would be preferable. Note that the dumber the initial network, the more layers you need to reach a given target level of intelligence (the target as deduced from biological artifacts via the ID detection method). Hence, the minimum needed front loaded level of intelligence may be determined by the number of layers of networks we can fit between the lowest (Planckian scale) and the highest level (cellular biochemical networks). Also, could this `simple intelligence' you talk of be that of a `mind'? Mind stuff (consciousness, qualia) is a fact of personal experience. Hence it needs explaining. My preference is panpsychism, where the elemental building blocks, such as nodes of Planckian networks, already have built-in 'mind stuff' as the fundamental driver of their actions/decisions. Earlier posts #58 and #109 describe a possible model for amplification and composition of this elemental mind stuff into the mind stuff as we experience it. nightlight
Chance Ratcliff #178: ID accepts that, in principle, random mutations are perfectly capable of explaining certain microevolutionary changes, such as bacterial drug resistance. Why would they concede even that, when no one knows how to evaluate the odds that, out of all possible alterations of DNA consistent with the laws of physics & chemistry, random picks of any such alterations would suffice to produce the observed rate of beneficial mutations? Just because one can observe beneficial mutations in the lab or in nature, that doesn't mean the pick of DNA alteration was random among all possible alterations. It only means that DNA transformed in a beneficial manner in a given amount of time, but implies nothing about nature of guidance (intelligently guided or random). Suppose someone claims, and shows, they can get triple 1 by rolling 3 dice in 10 or fewer throws, at least half of the time. How would you know whether it was random or cheating (intelligently guided)? You calculate the size of event space N when rolling 3 dice, which is N=6^3=216 combinations 1=(1,1,1), 2=(1,1,2),... 216=(6,6,6). The odds of not getting (1,1,1) in 1 throw are 215/216. The odds of not getting (1,1,1) in 2 throws are (215/216)^2,... the odds of not getting (1,1,1) in 10 throws are (215/216)^10 = 95.47 %, hence the chance of achieving (1,1,1) in 10 or fewer tries is 100-95.47=4.53 %. So, a random process couldn't be getting (1,1,1) in 10 or fewer throws 50% of the time, but would get it only 4.53 % of the time. Hence, the process was intelligently guided. No one has a clue how to calculate such an event space (and the weights of different configurations), to show that a random pick among all such accessible configurations would suffice, given the population size and the number of alterations tried in a given time. All they can do is measure spontaneous mutation rates, but those dice throws were spontaneous, too (on video they looked just like real random throws). Spontaneous is not a synonym for random. Spontaneous means not induced by external deliberate interference by the experimenters. But how it was guided beyond that, you can't tell without having a probabilistic model of the event space, such as the above dice model, where you enumerate all accessible alternate configurations, then assign probabilities to events in that space that follow from the laws of physics & chemistry. I have yet to see any calculation like that (full quantum theoretic computations for a DNA-sized molecule to calculate exact odds of different adjacent states are out of the question by a long, long shot). For example, cellular biochemical networks, being networks with adaptable links, are intelligent anticipatory systems (that's valid whether they are a computing technology of the Planckian networks or not), and they could have computed the DNA alterations which improve the odds of the beneficial mutations above the (unknown) odds of a random pick among all accessible configurations. Without knowing what the latter odds are, they have no way of telling them apart from the observed spontaneous mutation rate. Consider the analogous phenomenon in the evolution of technologies -- there is some observed rate of innovations. The relabeling of spontaneous mutations as "random" mutations in biology would be analogous to claiming that any innovation that didn't come from government sponsored labs, with an official seal, is random, i.e. that some manufacturing error or copying error of software gave rise to a new version of Windows or a new model of a car. 
It would be absurd to concede "random" in such a situation. Hence, by conceding the "randomness" attribute of observed spontaneous mutations, ID proponents are setting themselves up to have to accept any genetic novelty that can be observed to happen spontaneously in the lab or in nature as being the result of "random" mutation (such as the rapid adaptations of those isolated lizards on an Adriatic island recently). That's a very bad place to be at, since you never know how rapid intelligently guided (e.g. by biochemical networks) evolution can be and how much novelty can be observed. Paradoxically, with the above concession, more rapid evolution, which was supposed to be an ally of ID, becomes stronger "proof" of the neo-Darwinian claim that "random" mutation can produce such rapid evolution. Yet, nothing of the sort follows from the mere observation of the "spontaneous" mutation rate, however fast or slow it may be, just as it doesn't follow for the other observed instances of "microevolution" already conceded. Even the randomly induced mutations, e.g. via radiation, which turn out beneficial, are not a proof that the resulting beneficial DNA change isn't a result of intelligent repair of the radiation damage by an intelligent process such as the biochemical networks, rather than being just a randomly altered structure struck by a gamma photon. For example, we can look at analogous "induced mutation" via damage in examples of the evolution of human technologies (which are obviously intelligently guided). Say a hacker vandalizes Microsoft's Windows source code by deleting some function, plus all of its backups. When programmers try compiling the source, they get a compiler error because of the missing function. Then they discover the function is missing in the backups. With no other way out, they just rewrite the function from scratch. It may easily happen that the new version is better than the old, hence, looking from outside (that's all we can do with molecules), it appears as if the "induced random mutation" of the source code has improved the source code. In fact, it was the same intelligence which created that source (human programmers) which produced the improvement. The "randomly induced mutation" is not a synonym for "induced random mutation." Only the first one is the fact, the second one is an unproven conjecture. Therefore, even an "induced mutation" via random damage which results in a beneficial innovation is not a proof that the beneficial innovation itself was produced by the random damage, even though the random damage was a trigger for the improvement (analogous to the deletion triggering the fresh rewrite and improvement of the deleted function). The only way one could prove that random damage produced the beneficial innovation (instead of merely being a trigger for an intelligent process) is by computing the event space and the correct odds of such improvement via random damage, exactly as in the spontaneous mutation case or in the dice example. nightlight
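As a cross-check on the dice arithmetic used in this comment, the same 4.53% figure can be recovered by brute-force simulation; this sketch is only an illustration, assuming fair dice and the ten-throw window described above:

import random

# Monte Carlo check of the dice example: how often does a fair, unguided process
# roll (1,1,1) at least once within ten throws of three dice?
def hit_rate(throws=10, trials=200_000):
    successes = 0
    for _ in range(trials):
        for _ in range(throws):
            if random.randint(1, 6) == 1 and random.randint(1, 6) == 1 and random.randint(1, 6) == 1:
                successes += 1
                break
    return successes / trials

print(f"estimated P(triple ones within 10 throws) ~ {hit_rate():.2%}")
# This converges on the analytic ~4.53%; a process that succeeds about 50% of the
# time is not behaving like this chance model.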
Hi Nightlight, I've been following this discussion as best as I can, and have found it very interesting. However, I am no scientist, and can't claim to fully understand your argument in its entirety. Therefore I was wondering if you could help me out a little by giving me a quick overview of your main point (pretty much in layman's terms too, please)? Having explained my position here, and my need of your assistance to understand your point better, could you explain the following statement and answer some questions for me: "Namely, if the natural process has some very simple intelligence front loaded, such as working like a neural network, then such intelligence is additive, hence it can accumulate any level of intelligence needed to explain the intelligence implied by the biological artifacts." The 'simple intelligence' you describe as being front loaded, why should it be 'simple'? Also, could this 'simple intelligence' you talk of be that of a 'mind'? And if it is categorically not of a 'mind', how do you know that? PeterJ
Phinehas #187: Storage and retrieval may not be the same thing as thinking. Biological complexity only implies the ability to compute anticipatory (intelligent) algorithms, not the ability to think (which is much too vague and anthropomorphic a term anyway). Neural networks with unsupervised learning can do that via simple physics-like interactions (see post #116, https://uncommondesc.wpengine.com/intelligent-design/optimus-replying-to-kn-on-id-as-ideology-summarises-the-case-for-design-in-the-natural-world/#comment-451321), without anyone programming such anticipatory algorithms into them. nightlight
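A minimal sketch of the kind of unsupervised, 'physics-like' link adaptation being appealed to here, using a plain Hebbian update in which links strengthen whenever the nodes they join are active together; the network size, learning rate, and data are arbitrary illustration values, not anything taken from post #116:

import numpy as np

# A toy Hebbian (unsupervised) update: links between nodes that are active together
# get stronger, with no external programmer or teacher setting the weights.
rng = np.random.default_rng(0)
n_nodes = 16
weights = np.zeros((n_nodes, n_nodes))
rate = 0.01

for _ in range(1000):
    activity = (rng.random(n_nodes) < 0.2).astype(float)   # which nodes fire this step
    weights += rate * np.outer(activity, activity)          # "fire together, wire together"
    np.fill_diagonal(weights, 0.0)                          # no self-links

print("strongest learned link:", weights.max())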
BA77 #185: NL you claim that you have empirical proof and then state nightlight: Read that post #179 again if you missed it (there is a link to a paper). No matter how much complex specified information is encoded in the DNA & proteins, the Imax value, the # of information bits storable and retrievable from the neural network, can exceed it (Imax is proportional to k*N^2, where N is number of nodes, k is number of bits per link which can be as large as you want). BA77: So you think presenting empirical proof for your claim is just programing in whatever number(s) you want into some `neural network' get some numbers out and that you don't actually have to produce any real world evidence for novel functional proteins or DNA sequences? Perhaps we need to define empirical proof a little more clearly? Why are you putting words in my mouth? I restated what I said previously. You were the only one talking about "empirical evidence", not me. What I said throughout, there as well, is that natural processes, such as unsupervised neural networks, can generate any required amount of complex specified information (CSI). I gave you the evidence that what I said is true. The implication of what I said is that whatever CSI is observed in a biological artifact is explicable by a natural process, provided nature uses a neural-network-based pregeometry (Planck scale physics; network models of that scale already exist). You and Dr Abel (is Cain on it, too?) claim that natural processes cannot produce CSI, which is incorrect. Namely, if the natural process has some very simple intelligence front loaded, such as working like a neural network, then such intelligence is additive, hence it can accumulate any level of intelligence needed to explain the intelligence implied by the biological artifacts. So, that's a counter-example invalidating your and Dr Abel's claims of impossibility of such natural processes, not an "experimental proof" of anything as you keep relabeling it. nightlight
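To make the k*N^2 scaling claim above concrete, here is a small sketch comparing that capacity figure with a genome-sized information content; the node counts, the 8 bits-per-link value, and the rough 2-bits-per-base genome figure are illustration values only, not numbers from the comment:

# Illustration of the capacity scaling quoted above: Imax ~ k * N^2 bits for a
# network of N nodes with k bits stored per link, compared to a genome-sized payload.
def imax_bits(n_nodes, bits_per_link):
    return bits_per_link * n_nodes ** 2

genome_bits = 2 * 4_600_000   # ~9.2e6 bits for a 4.6 Mbp genome at 2 bits per base

for n in (1_000, 10_000, 100_000):
    cap = imax_bits(n, bits_per_link=8)
    print(f"N={n:>7,}  Imax={cap:,} bits  exceeds genome: {cap > genome_bits}")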
nightlight:
No matter how much complex specified information is encoded in the DNA & proteins, the Imax value, the # of information bits storable and retrievable from the neural network, can exceed it.
Storage and retrieval may not be the same thing as thinking. Someone posted recently about the qualitative difference between Shannon information and what we typically understand as information derived from intelligence. I don't think the capacity to store information or even to 'learn' information is equivalent to the kind of creative things the mind can do. Phinehas
StephenB #184 Why would you cringe? The process by which the scientific inference to design is made is not synonymous with the philosophical/religious implications that may follow from it. There are no indicators for "mind" or "consciousness" in ID methodology--only the inferred presence of an intelligent agent. I cringe because mixing a perfectly valid empirical observation (the ID design detection) allows those who don't like the philosophical or religious implications, to disqualify it as non-science since it claims to infer the "mind" or "consciousness", which is not scientifically valid (within present natural science which lacks a model of 'mind stuff' and objective empirical way to detect it). The only thing that ID design detection in biology implies is that intelligent process (or agent) produced those artifacts, not whether such process or agent had a mind or consciousness as Meyer claims. He can't know that, much less demonstrate it scientifically. Neither Stephen Meyer, nor anyone else, has any way of demonstrating scientifically even that his wife has "mind", who is in front of him and telling him she has it, let alone claiming to infer that something no one has ever observed has it. That's a pure gratuitous self-sabotage, a complete waste of a valid ID inference, by a needless leap too far. So, it is the same cringe I had as I watched Magnus Carlsen needlessly self-destruct against Svidler in the critical last-round chess game of the London tournament (winner gets to challenge world champion Anand). Luckily, the only other contender for the 1st place, Vlad Kramnik, self-destructed as well a bit later. nightlight
NL you claim that you have empirical proof and then state:
Read that post #179 again if you missed it (there is a link to a paper). No matter how much complex specified information is encoded in the DNA & proteins, the Imax value, the # of information bits storable and retrievable from the neural network, can exceed it (Imax is proportional to k*N^2, where N is number of nodes, k is number of bits per link which can be as large as you want).
So you think presenting empirical proof for your claim is just programing in whatever number(s) you want into some 'neural network' get some numbers out and that you don't actually have to produce any real world evidence for novel functional proteins or DNA sequences? Perhaps we need to define empirical proof a little more clearly?
Empirical proof is "dependent on evidence or consequences that are observable by the senses. Empirical data is data that is produced by experiment or observation."
I know it is probably a bit beneath a man of your caliber, but could you actually go to the trouble of showing us exactly which novel functional proteins have been generated by your 'neural network' in real life. I don't know of any examples from bacteria that you can refer to, but who knows, perhaps you've designed hundreds of proteins on 'neural network' computers and we just don't know about them yet: bornagain77
nightlight
I have heard it (and cringed) many times from him, e.g. a google search returns 36,400 hits, with him declaring that the implication of the “signature in the cell” is the product of “intelligent mind.”
Why would you cringe? The process by which the scientific inference to design is made is not synonymous with the philosophical/religious implications that may follow from it. There are no indicators for "mind" or "consciousness" in ID methodology--only the inferred presence of an intelligent agent. StephenB
BA77 #180: NL, since you cannot produce any empirical proof for this claim,,,

Read that post #179 again if you missed it (there is a link to a paper). No matter how much complex specified information is encoded in the DNA & proteins, the Imax value, the # of information bits storable and retrievable from the neural network, can exceed it (Imax is proportional to k*N^2, where N is number of nodes, k is number of bits per link which can be as large as you want).

And again Abel directly challenges ANY scenarios such as yours to falsify the null,,, The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated,,

That is all about the guessing setup he uses, described in #179. But Abel's FSC pattern guessing setup is only one among possible schemes (corresponding to GA or RM+NS methods) one can try for generating the CSI observed in the DNA and proteins of live cells. That guessing method doesn't work, which is merely a rephrased ancient result about the incompressibility of random data. Abel's restriction doesn't apply to CSI generated by neural networks or by a computer program or by a human brain, for that matter. NNs, computer programs or the human brain, for example, can generate any amount of CSI. The CSI only means that you have two large matching patterns A and B, e.g. with A corresponding to some subset of DNA code and B corresponding to some well adapted phenotypic traits or requirements. One can say that B specifies A in the sense that well fitting phenotypic traits specify requirements on what encoding the DNA needs to have. For example, if an animal lives in a cold climate, DNA codes for longer fur or thicker layers of fat are specified by these phenotypic requirements. The C in CSI only means that there is a large enough number of such matching elements between A and B (hence large bits of information). CSI is not a synonym for Abel's search method (or Dembski's assisted search), but an independent concept which long predates Abel, Dembski and ID. Note also that unlike a regular computer program, which can also produce any amount of CSI but needs a programmer, neural networks don't need a programmer, since unsupervised learners are self-programming (they only need simple interactions). nightlight
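[For readers who want to see what the capacity formula quoted above actually does numerically, here is a minimal sketch. The formula Imax = k*N^2/2*(1 - 1/H) is the one nightlight cites from the BCPNN paper; the particular values of N, k and H below are illustrative assumptions, not taken from that paper or from the thread.]

```python
# Minimal numerical sketch (illustrative values only): the capacity formula
# quoted above, Imax = k * N^2 / 2 * (1 - 1/H), grows without bound as N or k grow.

def imax_bits(n_nodes, k_bits_per_link, h_columns):
    """Maximum storable/retrievable information, per the quoted eq. (9)."""
    return k_bits_per_link * n_nodes * n_nodes / 2 * (1 - 1 / h_columns)

# Assumed example values (not from the paper): even a modest network already
# exceeds the 500-bit threshold discussed later in the thread.
for n in (10, 100, 1000):
    print(n, imax_bits(n, k_bits_per_link=8, h_columns=4))
# 10 -> 300.0 bits, 100 -> 30000.0 bits, 1000 -> 3000000.0 bits
```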
Chance Ratcliff #178:

What specifically is Meyer stating that you take issue with, in regards to consciousness? Instead of "consciousness talk" generalities, if you could quote something relevant that Meyer actually said, it would make it possible to talk about his comments, rather than your interpretations of them.

I have heard it (and cringed) many times from him, e.g. a google search returns 36,400 hits, with him declaring that the implication of the "signature in the cell" is the product of "intelligent mind." The mind and consciousness are synonymous in this context, in that neither has any algorithmically effective definition, i.e. nothing logically follows from them. Hence they are parasitic elements of the "algorithmic component" (M) of his ID theory. Let me recall the general structure of natural science (post #49):

(M) - Model space (formalism & algorithms)
(E) - Empirical procedures & facts of the "real" world
(O) - Operational rules mapping between (M) and (E)

The Model space (M) is the generator (algorithms, formulas) of meaningful statements (numbers, words, pictures,..) of that science. Component (E) contains empirical facts and procedures (algorithms, techniques...) for extracting them from the real world. The component (O) prescribes the mapping between (M) and (E), e.g. given a statement S of some prediction by (M), then (O) prescribes how to pick an element of (E) to compare with S (e.g. which kind of measurement should yield a number to compare with S). All postulates/hypotheses of the theory are elements of model space (M), along with all logical deductions from them.

If you take issue with the fact that designed things have features in common that are only known to come about as the result of a mind, note that this is empirical, and positing mind as a source of such features is a necessity of that observation

So, what you are saying is that his (E) space has an "E-mind" element and his (M) space has some element M-mind, as a primitive (a postulate, since it doesn't follow from any other element of (M) in ID or in other natural science). Hence the mapping (O) maps trivially between the two orphaned primitives E-mind and M-mind, i.e. it is a contentless tautology. Nothing else follows in the model space (M) from M-mind in an algorithmic or deductive manner. Hence M-mind is an orphan (or parasitic) element with no logically deducible consequence within (M). If you take it out of (M), nothing else in model space (M) changes. On the other hand, E-mind is also orphaned within (E) since there is no way to detect it objectively. Nothing measures it, and it does nothing as far as present natural science (that's not a synonym for "personal experience") can tell. You can't take an objectively observed statement "I think" to mean that E-mind was detected (a tape recorder or computer program or parrot can produce that sound as well). Hence, he has two orphaned elements E-mind and M-mind in two spaces, whose sole role in the theory is to point to each other and do nothing else. Drop both, and nothing changes; the ID detection methods still point to an intelligent process or intelligence as the designer of the artifacts of life. The unfortunate part is that "intelligence" or "intelligent process" (with the term "mind" dropped) can be defined in an algorithmically effective way (e.g. via internal modeling and anticipatory computations, like in AI) for the model space (M), and it can be objectively detected within the (E) space (via ID detection methods).
All that is implied by the ID argument is "intelligence" (or an intelligent process), which is a scientifically uncontroversial concept. If he and others (since that's a pretty common reflex among ID researchers) were to simply state that ID detection methods point to an "intelligent process" (which is an algorithmically effective scientific concept), then ID would have as easy an entry into natural science as the Big Bang theory did. Even though atheists didn't like the Big Bang theory either (because of informal implications of the universe having a beginning), it wasn't labeled as non-science but merely remained a minority view, until it was confirmed experimentally. Someone may object: well, even though we can't measure "mind", we know we have mind or consciousness; it's a problem of present natural science, which has a gap in that place. That may well be so, but then what about the wisdom of sticking one leg of the ID chair into that gap, when there is plenty of floor room nearby without gaps. nightlight
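[nightlight's "orphan element" argument can be made concrete with a toy sketch. Everything in the snippet below is a hypothetical example of my own construction, not code or terminology from the thread: a model space is represented as postulates plus derivation rules, and a postulate no rule ever uses can be dropped without changing any derivable statement.]

```python
# Toy sketch (hypothetical names, not from the thread): a "model space" as
# postulates plus derivation rules. An orphan postulate is one that no rule
# ever uses, so dropping it changes no derivable statement.

postulates = {"intelligent_process", "M_mind"}             # M_mind plays the orphan role
rules = [({"intelligent_process"}, "design_detectable")]   # premises -> conclusion

def derive(postulates, rules):
    known = set(postulates)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

with_mind = derive(postulates, rules)
without_mind = derive(postulates - {"M_mind"}, rules)
# The derivable statements differ only by the orphan postulate itself:
print(with_mind - {"M_mind"} == without_mind)   # True
```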
NL, if you don't mind a personal question are you Jewish or perhaps Muslim?
The Origin of Science Jaki writes: Herein lies the tremendous difference between Christian monotheism on the one hand and Jewish and Muslim monotheism on the other. This explains also the fact that it is almost natural for a Jewish or Muslim intellectual to become a pa(n)theist. About the former Spinoza and Einstein are well-known examples. As to the Muslims, it should be enough to think of the Averroists. With this in mind one can also hope to understand why the Muslims, who for five hundred years had studied Aristotle's works and produced many commentaries on them failed to make a breakthrough.,,, http://www.columbia.edu/cu/augustine/a/science_origin.html panpsychism is the view that all matter has a mental aspect, Pantheism is the belief,,, that the universe (or nature) is identical with divinity.
bornagain77
NL, since you cannot produce any empirical proof for this claim,,,
there is no limit on how much specified complex information they can learn or generate as memories of learned patterns
,,,Perhaps it would be well for you to quit claiming it. Particularly the 'generate' portion of the claim you made!!! Playing games and trying to make exceptions for what type of information it generates does not really matter to me, for your broad claim is that it can generate as such. Since you cannot produce even one example to refute the null, then that should give a smart guy like you a major clue that you are barking up the wrong tree with your pseudo-theory! It is not that complicated NL,, Do you just want it to be true so bad that you can't see this glaring deficit between your broad claim and the real world evidence???,,, And again Abel directly challenges ANY scenarios such as yours to falsify the null,,, The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated,, bornagain77
bornagain77 #176:

nightlight: there is no limit on how much specified complex information they can learn or generate as memories of learned patterns

bornagain77: I'm the one calling your bluff. If it is truly unlimited in the CSI it can generate then by all means produce one example of functional information (CSI) being generated above the 500 bit limit set by Dembski.

You don't get it -- the two problems are different. The FSC by Abel is a problem where you are given some function of n values, whose values are 1 or 0: e.g. F(i)=1,0,1,0,1,1... for i=1,2,3,...n. The task is to devise a guessing algorithm G (random or deterministic, or any combination), such that first, G 'predicts' F(1) as one of 0 or 1, and receives a Yes/No answer, thus in effect it receives F(1). Then, knowing the answer for F(1), G predicts F(2) and gets another Yes/No answer, thus receives the value of F(2). Then, knowing F(1) and F(2), G predicts F(3), etc. Any other variation of guessing schedule is allowed to G, e.g. G can request to guess in blocks of say 4 successive values of F, e.g. G guesses the next 4 digits 1011 and receives the response that F has values 1001. Any other guessing schedule is allowed, as long as G doesn't have the values of F it is trying to guess before it tries to guess them (duh). The claim is that no algorithm G exists which can beat chance (50% hits, or n/2 correct guesses on average) if tested against all possible functions F, i.e. on all possible 2^n patterns of n bits (or equivalently on a random F). That is correct, the best G can do is to get n/2 correct guesses on average. It is essentially a restatement of the incompressibility of a random sequence. If some G were able to beat chance by guessing n/2 + x bits on average over all possible F's, where x > 0, that would in this scheme be stated as 'G has generated x bits of FSC'. It is well known (and trivial) that no such G can exist.

The neural networks tackle a different problem: there is some set of C 'canonical' bit patterns, P1, P2,... Pc, each containing n bits (e.g. these could be C=26 bitmaps of scanned alphabet letters A-Z, where each bitmap has n bits). After the learning phase of C canonical patterns, the network is given some other n-bit patterns Q1, Q2,... which are 'damaged' altered bitmaps (e.g. via random noise) of the same 26 letters. The network then decides for each incoming Q which letter P1,..Pc it should retrieve. Depending on network algorithms and size, there is no upper limit on how many letters it can store or how many Q bitmaps it can process (i.e. how many bits per pattern n there are), provided you add enough nodes and links. For example, here is a paper from the top few on a google search, which for their particular type of network (BCPNN), with N nodes where each link is encoded in k bits of precision, and H columns (H>1), gives the maximum information capacity of such a network as: Imax = k*N^2/2 * (1 - 1/H) bits [eq. (9), p. 6]. This number Imax can be as large a number of bits as you want by making N and/or k large enough (the factor (1-1/H) is a fixed number between 1/2 and 1). But that number Imax has no relation to x from the FSC problem, since the two problems have nothing to do with each other. Abel's FSC setup (or Dembski's CSI) is meant to refute the effectiveness claims of the neo-Darwinian RM+NS algorithm (or more generally any GA) for generating complex specified information, which is fine; you can't get x>0 on average. The Planckian or higher level networks are not searching for a needle in the exponential haystack with 2^n choices.
They are matching and evaluating closeness of input patterns Q to the set of C patterns used for learning (or to C attractor regions). While the total number of distinct patterns Q is also 2^n as in the FSC problem, the number of attractor regions C is a much smaller number than 2^n. The origin of such reduction of complexity is that Planckian networks (or the higher level networks they produce, such as biochemical networks, or our social networks) are searching in a space populated by other agents of the same general kind (other networks working via the same anticipatory or pattern matching algorithms), not in some general space of random, lawless entities Q, which can be any of the 2^n patterns (for n-bit entities). They are operating in a lawful, knowable world, in which pattern regularities extend from physical to human levels. In other words, Planckian & higher level networks are 'stereotyping' in all encounters with new patterns Q by classifying any new Q based on partial matches into one of the C known stereotypes (or C canonical patterns). The bottom up harmonization process (which maximizes mutual predictability), from physical laws and up, assures that stereotyping works by driving patterns toward the stereotypical forms. The harmonization thus helps make patterns or laws regular and knowable (e.g. check Wigner's paper "The Unreasonable Effectiveness of Mathematics"). nightlight
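[A small simulation may help separate the two problems nightlight describes above. The sketch below is my own construction, not Abel's or Dembski's code: it plays the sequential bit-guessing game against random target functions F, and whatever strategy G uses, the average hit count stays near n/2, in line with the incompressibility point made in the comment.]

```python
import random

# Sketch of the guessing setup described above (my construction, not Abel's code):
# G guesses F(i) one bit at a time, seeing all earlier answers. Against a random
# target F, any strategy averages ~n/2 correct guesses -- it cannot beat chance.

def play(guesser, n, trials=2000):
    total_hits = 0
    for _ in range(trials):
        f = [random.randint(0, 1) for _ in range(n)]   # random target function F
        history, hits = [], 0
        for i in range(n):
            guess = guesser(history)
            if guess == f[i]:
                hits += 1
            history.append(f[i])    # G learns the true value after each guess
        total_hits += hits
    return total_hits / trials

n = 64
strategies = {
    "always 0":        lambda h: 0,
    "repeat last bit": lambda h: h[-1] if h else 1,
    "majority so far": lambda h: int(sum(h) * 2 > len(h)),
    "random":          lambda h: random.randint(0, 1),
}
for name, g in strategies.items():
    print(name, round(play(g, n), 1))   # all hover around n/2 = 32
```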
nightlight, I hope your weekend was good. At #135 you wrote,
"Neo-Darwinian Evolution theory (ND-E = RM + NS is the primary mechanism of evolution), carries a key parasitic element of this type, the attribute “random” in “random mutation” (RM) — that element is algorithmically ineffective since it doesn’t produce any falsifiable statement that can’t be produced by replacing “random” with “intelligently guided” (i.e. computed by an anticipatory/goal driven algorithm). In this case, the parasitic agenda carried by the gratuitous “randomness” attribute is atheism. That is actually another common misstep by ID proponents — they needlessly concede that “random” mutation completely explains “micro-evolution”."
ID accepts that, in principle, random mutations are perfectly capable of explaining certain microevolutionary changes, such as bacterial drug resistance. That is not the same as conceding that random mutations can explain all microevolutionary change. In other words, random mutations are sufficient for changes which can be achieved by a small number of "coordinated" heritable genomic changes; that doesn't imply sufficiency to account for all observed changes. If you could produce a relevant quote from a major ID proponent conceding that random mutations account for all of microevolution, it would support your assertion. In comment #117 you state,
But if you do insist on injecting such algorithmically ineffective cogs, as Stephen Meyer keeps doing with ‘consciousness’, than whatever it is you’re offering is going to trigger a strong immune response from the existent natural sciences which do follow the rule of ‘no algorithmically ineffective cogs’.
and from comment #128,
My point is that it certainly can be, provided its proponents (such as S. Meyer) get rid of the algorithmically ineffective baggage and drop the ‘consciousness’ talk, since it only harms the cause of getting the ID to be accepted as a science.
What specifically is Meyer stating that you take issue with, in regards to consciousness? Instead of "consciousness talk" generalities, if you could quote something relevant that Meyer actually said, it would make it possible to talk about his comments, rather than your interpretations of them. If you take issue with the fact that designed things have features in common that are only known to come about as the result of a mind, note that this is empirical, and positing mind as a source of such features is a necessity of that observation -- it shouldn't really be controversial with regard to non-biological objects such as machinery, Blu-ray players, and big-screen TVs. Chance Ratcliff
Box (168): BTW what are the planckian networks up to when they self-organize into stars? How promising is the self-organized star formation trajectory in relation to expressing intelligence for the average self-respecting Planckian network? Undoubtedly a bright future lies ahead, but some may wonder what the prospects for happiness are at those network demolishing temperatures.
Nightlight (174): The Planckian networks are no more harmed by supernova temperatures than your PC is harmed by some wild pattern of cells in Conway’s Game of Life that is running on that computer.
In order to be happy, Planckian networks want to design and self-organize into elemental particles (aka super-computers), and from there design and self-organize into biochemical networks (aka super-super-computers) to run internal models to invent body plans, right? Well, that trajectory is off the table when you design and self-organize into stars, right? I'm just asking. Box
NL, you are the one making a specific claim that,,,
there is no limit on how much specified complex information they can learn or generate as memories of learned patterns
I'm the one calling your bluff. If it is truly unlimited in the CSI it can generate then by all means produce one example of functional information (CSI) being generated above the 500 bit limit set by Dembski. A single protein or better yet, a molecular machine should do the trick. As to your claim that
That’s all about limitations of genetic algorithms (GA) for search problems, which has nothing to do with neural networks and their pattern recognition algorithms.
Abel's null hypothesis covers 'everything' including the convoluted scenario you a priori prefer for a worldview!
The Capabilities of Chaos and Complexity: David L. Abel – Null Hypothesis For Information Generation – 2009 Excerpt: The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated. Scientific mechanism must be provided for how purely physicodynamic phenomena can program decision nodes, optimize algorithms, set configurable switches so as to achieve integrated circuits, achieve computational halting, and organize otherwise unrelated chemical reactions into a protometabolism. To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it:,,
You cannot claim that on the one hand,,,
there is no limit on how much specified complex information they can learn or generate
then on the other hand claim:
problem substance tackled by neural networks is entirely different (than the functional information addressed by Abel's null hypothesis)
Either your method can generate unlimited CSI as you claim and falsify the null hypothesis, and thus prove your outrageous claim that your program is 'intelligent', or it cannot generate functional information. There is no weasel room for you in this null hypothesis as it is set up! bornagain77
@bornagain77 #173

That's all about limitations of genetic algorithms (GA) for search problems, which has nothing to do with neural networks and their pattern recognition algorithms. The latter are unsupervised clustering algorithms for sets of patterns, and there is no limit on how much specified complex information they can learn or generate as memories of learned patterns (such as learning to recognize noisy patterns of Chinese or Japanese characters, or natural language, etc). The translation from pattern recognition language to anticipatory behavior language was explained in post #116. The GA critique (by Dembski & others) merely shows that the neo-Darwinian algorithm, RM+NS, is incapable of solving large search problems. That's beating the same old dead horse. The problem setup and problem substance tackled by neural networks is entirely different (unsupervised pattern recognition or clustering) and none of Dembski's or other GA search limitation results apply to neural networks or pattern recognition.

but the preceding is just a bunch of word salad.

Oops, sorry, didn't mean to overload your circuits. nightlight
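[For readers unfamiliar with "unsupervised clustering", here is the simplest possible illustration. The snippet uses k-means, which stands in for the general idea of learning pattern categories from unlabeled data; it is not the neural-network algorithm nightlight describes, and the data are arbitrary, assumed values.]

```python
import random

# Minimal unsupervised-clustering sketch (k-means, standing in for the general
# idea of learning categories from unlabeled patterns; not the neural network
# algorithm described above, just the simplest illustration).

def kmeans(points, k, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centers[i][0])**2 + (p[1] - centers[i][1])**2)
            clusters[j].append(p)
        for i, c in enumerate(clusters):
            if c:   # recenter each cluster on the mean of its members
                centers[i] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers

# Two noisy blobs of unlabeled points; nothing tells the algorithm where they are.
pts = [(random.gauss(0, .3), random.gauss(0, .3)) for _ in range(100)] + \
      [(random.gauss(5, .3), random.gauss(5, .3)) for _ in range(100)]
print(kmeans(pts, 2))   # typically two centers, near (0,0) and (5,5)
```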
Box #169:

I'm a strong chess player for many years and I can assure each and everyone that computer chess is very bad at strategy. ... I can still draw the top chess programs - about every other game. But I have to admit I cannot win.

I played through grad school, got to USCF 2100 (expert rating). My younger brother (in ex-Yugoslavia) is a national master. With computers (my favorite is Hiarcs), if I play for exciting, fun games, I will lose every time. If I play for revenge, a dull blocked position and slow maneuvering, I can draw half the time, even win every now and then (especially if I drill into the same dull variation and keep refining it).

#168: What I meant to say was that your theory predicts a vivid super-intelligent universe - in any shape or form - rather than the comatose inert universe at hand.

You must have succumbed to the materialist brainwashing. I see everything as animated, sparkling with life, in pursuit of its own happiness.

Undoubtedly a bright future lies ahead, but some may wonder what the prospects for happiness are at those network demolishing temperatures.

The Planckian networks are no more harmed by supernova temperatures than your PC is harmed by some wild pattern of cells in Conway's Game of Life that is running on that computer. Temperatures, energies, forces and the rest of physics with its space-time parameterization are just a few coarse grained properties/regularities of the activation patterns unfolding on the Planckian networks. These networks are merely computing their patterns in pursuit of their happiness (maximizing +1 scores; posts #59 and #109 address the mind stuff semantics of the +1,-1 labels). Their "space" is made of distances which are counts of hops between nodes; their "time" is the node state sequence number (each node has its own state sequence numbers 1,2,3,...; these numbers tell it which state sequence numbers of other nodes it needs to refer to when messaging with them). As far as postulates/assumptions go, one can just as well imagine all nodes as being compressed into a single point, just like you can stack a set of Ethernet switches on top of each other, without changing network connections (topology) or affecting its operation (everything will run as when the switches are spread out in some 2D pattern). The "links" of Planckian networks are abstract "things in themselves" which for a given node X merely refer to which other nodes Y1, Y2, Y3,... it takes/sends messages from/to. Links thus specify the labels of some of the other nodes, which can all be compressed into the common point, i.e. nothing needs to carry messages anywhere outside the single point. One can, thus, imagine the whole system as one point talking to different aspects of itself, as it were, as if trying to work out 'what am I' and 'why am I here'. nightlight
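[The claim above that the networks' "space" is nothing but hop counts along abstract links can be made concrete in a few lines of graph code. The graph below is a toy example chosen arbitrarily; the point is only that distance is defined purely by the link table, with no spatial coordinates anywhere.]

```python
from collections import deque

# Toy illustration (arbitrary example graph): "distance" between nodes defined
# purely as the number of hops along links, with no spatial coordinates at all.

links = {                       # abstract link table: node -> neighbours
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def hop_distance(start, goal):
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for nxt in links[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

print(hop_distance("A", "E"))   # 3 hops; the same whether the nodes are spread
                                # out in space or imagined stacked at one point
```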
NL, excuse me but the preceding is just a bunch of word salad. In order to provide solid empirical proof for your position that computers and calculators are 'intelligent', and to differentiate your preferred worldview from pseudo-science, you SIMPLY must produce an observed example(s) of functional information being generated above the 500 bit threshold proposed by Dembski. There is/are a null hypothesis(es) in place that says it will never be done:
The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 Excerpt: The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated. Scientific mechanism must be provided for how purely physicodynamic phenomena can program decision nodes, optimize algorithms, set configurable switches so as to achieve integrated circuits, achieve computational halting, and organize otherwise unrelated chemical reactions into a protometabolism. To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: “Physicodynamics cannot spontaneously traverse The Cybernetic Cut [9]: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.” A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2662469/ Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8 ) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work ,,, "Artificial intelligence does not organize itself either. It is invariably programmed by agents to respond in certain ways to various environmental challenges in the artificial life data base." ,,, ,,,"Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products." - Abel,,, Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC).,,, Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,, Testable hypotheses about FSC What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. 
Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. http://www.tbiomed.com/content/2/1/29 The Law of Physicodynamic Incompleteness - David L. Abel - August 2011 Summary: “The Law of Physicodynamic Incompleteness” states that inanimate physicodynamics is completely inadequate to generate, or even explain, the mathematical nature of physical interactions (the purely formal laws of physics and chemistry). The Law further states that physicodynamic factors cannot cause formal processes and procedures leading to sophisticated function. Chance and necessity alone cannot steer, program or optimize algorithmic/computational success to provide desired non-trivial utility. http://www.scitopics.com/The_Law_of_Physicodynamic_Incompleteness.html The GS Principle (The Genetic Selection Principle) - Abel - 2009 Excerpt: Biological control requires selection of particular configurable switch-settings to achieve potential function. This occurs largely at the level of nucleotide selection, prior to the realization of any integrated biofunction. Each selection of a nucleotide corresponds to the setting of two formal binary logic gates. The setting of these switches only later determines folding and binding function through minimum-free-energy sinks. These sinks are determined by the primary structure of both the protein itself and the independently prescribed sequencing of chaperones. The GS Principle distinguishes selection of existing function (natural selection) from selection for potential function (formal selection at decision nodes, logic gates and configurable switch-settings). http://www.bioscience.org/2009/v14/af/3426/fulltext.htm Book Review - Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009. Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. 
If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome. So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail. http://www.fourmilab.ch/documents/reading_list/indices/book_726.html To clarify as to how the 500 bit universal limit is found for 'structured, functional information': Dembski's original value for the universal probability bound is 1 in 10^150, based on: 10^80, the number of elementary particles in the observable universe; 10^45, the maximum rate per second at which transitions in physical states can occur; 10^25, a billion times longer than the typical estimated age of the universe in seconds. Thus, 10^150 = 10^80 × 10^45 × 10^25. Hence, this value corresponds to an upper limit on the number of physical events that could possibly have occurred since the big bang. How many bits would that be: Pu = 10^-150, so -log2(Pu) = 498.29 bits. Call it 500 bits (a short numerical check of this arithmetic follows this comment). (The 500 bits is further specified as a specific type of information. It is specified as Complex Specified Information by Dembski or as Functional Information by Abel to separate it from merely Ordered Sequence Complexity or Random Sequence Complexity; See Three subsets of sequence complexity: Abel) This short sentence, "The quick brown fox jumped over the lazy dog" is calculated by Winston Ewert, in this following video at the 10 minute mark, to contain 1000 bits of algorithmic specified complexity, and thus to exceed the Universal Probability Bound (UPB) of 500 bits set by Dr. Dembski Proposed Information Metric: Conditional Kolmogorov Complexity - Winston Ewert - video http://www.youtube.com/watch?v=fm3mm3ofAYU Here are the slides of the preceding video with the calculation of the information content of the preceding sentence on page 14 http://www.blythinstitute.org/images/data/attachments/0000/0037/present_info.pdf Lack of Signal Is Not a Lack of Information - July 18, 2012 Excerpt: Putting it all together: The NFL (No Free Lunch) Theorems show that evolution is stuck with a blind search. Information lights the path out of blind search; the more information, the brighter the light. Complex specified information (CSI) exceeds the UPB, so in the evolutionary context a blind search is not an option. Our uniform experience with CSI is that it always has an intelligent cause. Evolution is disconfirmed by negative arguments (NFL theorems and the UPB). Intelligent design is confirmed by positive arguments (uniform experience and inference to the best explanation).
http://www.evolutionnews.org/2012/07/lack_of_signal062231.html
Music:
Moriah Peters - Well Done Official Music Video - Music Videos http://www.godtube.com/watch/?v=WDL7GLNX
bornagain77
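[As flagged in the comment above, here is the short numerical check of the 500-bit arithmetic. It only re-derives the figures already quoted (Dembski's 10^80 × 10^45 × 10^25 = 10^150 and the 498.29-bit conversion); nothing new is assumed.]

```python
import math

# Re-checking the arithmetic behind the quoted universal probability bound:
# 10^80 particles * 10^45 transitions/sec * 10^25 seconds = 10^150 events,
# and -log2(10^-150) = 150 * log2(10) ~ 498.3 bits (rounded up to "500 bits").
events_exponent = 80 + 45 + 25           # exponents add: 10^150 total events
bits = events_exponent * math.log2(10)
print(events_exponent, round(bits, 2))   # 150, 498.29
```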
Bornagain77 & Nightlight This article about 'deep learning' at newyorker.com by Gary Marcus may be of interest. Box
bornagain77 #:

please give us a little perspective and cite your exact empirical evidence that computers can generate functional information above and beyond what they were originally programmed by a 'mind' to generate

A sketch of how that works, including the emergence of goal oriented anticipatory behaviors, internal modeling, etc. without explicitly being front loaded with these behaviors, is given in post #116 via neural networks with unsupervised learning. That post doesn't give any links since it's based on common knowledge about such (artificial) neural networks, which anyone can google and introduce himself into the subject. I have learned that material mostly from books (before google) and from my own computer experimentation with neural networks (from the mid 1990s and on), but it is all easily accessible common knowledge (especially to as prolific a searcher as you seem to be). I just don't need to call upon authority on matters which are obvious to me and which are easily verifiable. Your request is a bit like asking a master chef to point you to the prepackaged, officially FDA approved frozen meal so you can compare the ingredients for a dish he is making, a dish he has honed over years based on recipes from cookbooks, from older master chefs and from his own experimentation. He is well beyond the need to look it up or assure himself with what the FDA or other "authority" says about it, since he knows and understands that recipe as well as anyone.

The key ingredient of the 'unsupervised learning' capability is to have a system which has a flexible feedback driven mechanism for reshaping its 'attractor surface' (a.k.a. fitness landscape). The attractor surface is easily understood by imagining a tub of clay (or play-doh), with an initially flat surface, then sculpting the valleys in it by pressing your finger into it at different points. The evolution of the system state in time is then like a marble dropped at any place on the surface, rolling and settling at the bottom of the nearest valley. This set of valley bottoms is a discrete set of the system's memories, which in this example memorize the points where you earlier poked the play-doh. The key attribute of such an attractor surface is that the valleys attract to the same bottom point all marbles from anywhere on their slopes, which corresponds to recalling a canonical memory from partial/approximate matches. You imagine the X,Y coordinates of the tub surface as representing the input pattern or state which needs to be recognized, such as a bitmap in an optical character recognition system. The n valley bottoms, given via 2D coordinates (X1,Y1), (X2,Y2)... (Xn,Yn), represent the n canonical patterns (or memories) that need to be recalled or recognized (such as canonical patterns of n=26 letters). The coordinates here are just some binary digits, say X1 = 1001,0000 for the X coordinate of the first valley bottom. Now, when you put a marble at some point with coordinate X = 1001,1101, whose digit pattern doesn't correspond to the digit pattern of X for any explicit memory or canonical pattern, this marble will roll, say, to the 1001,0000 valley bottom, which is the nearest of the n valley bottoms. In other words, the approximate (noisy, damaged) pattern X is recognized as one of the n remembered/canonical patterns (e.g. one of the 26 letters). Hence, if such an attractor surface is shaped the right way for the task, it can in principle recognize any set of canonical patterns from noisy, damaged, partial...
input patterns, such as retrieving the memorized pattern X1 = 1001,0000 from the noisy input pattern X = 1001,1101. For a system of this type to be interesting or non-trivial, you need another key ingredient -- the feedback driven mechanism which can reshape its attractor surface. Neural networks are one such system, where successive adjustments of link strengths (based on some adjustment rules) between nodes can deform its attractor surface into any shape. The link adaptation mechanism works whether the attractor surface is static or dynamic (changeable in time). Unlike the play-doh 2D surface coordinates (X,Y), the system states here are specified in some d-dimensional space via numeric arrays such as (S1, S2,..., Sd) for a network with d nodes, where the Si are numbers, e.g. 0 or 1 for nodes with binary states (boolean networks); generally a node state can have any number of distinct values, including a continuous range of values (usually the interval -1 to +1, or 0 to 1). The feedback mechanism which modifies the links is specified via some simple, local interaction rules, similar to physics, or such as those for the trading network sketched in post #116. The links here need not be wires, or dendrites & axons, or any material stringlike objects. They can be general or abstract "connections", such as those to family members or to brands of products you buy, or customers you sell to, etc. Their essential attribute is feedback driven adaptability of link strengths, e.g. you might change the quantities of goods or services you buy of different brands based on their pricing, availability, perceived value... Such criteria which drive the modifications of the links are abstracted under the generic labels "punishments" and "rewards".

Getting from pattern recognition to anticipatory, goal directed behavior and internal modeling is fairly trivial, as explained in post #116 on the example of such a network learning to control a robotic soccer player. The canonical example (or a whole cottage industry in this field), implemented in thousands of ways via all kinds of neural networks, in simulated and robotic forms doing it in the real world, is the pole balancing task, where a network learns how to balance a pole (or broomstick) on a cart by moving the cart back and forth. Reverse engineering such a network, once it learns the task, allows one to identify the "neural correlates" (such as specific activation patterns) of its internal model of the problem and its operation. Such an internal model, which the network uses via an internal what-if game to anticipate the consequences of its actions, doesn't look (in its neural correlate form) anything like the cart and the pole look to us, just as your DNA doesn't look anything like you look in a photo. In both cases, though, that imprint is the 'code' encoding highly specified complex information.

Stepping back for a bird's eye view -- we have a simple system (a network with adaptable links) with purely local physics-like rules for modifying links based on some generalized "punishments" and "rewards" (provided by interaction with the environment). Running this network, letting it adapt its links (by the given simple rules) while interacting with the cart and the pole, without any additional input, gives rise to a fairly complex skill controlled via the network's internal model and its encoding (expressed via link strengths, which shape the activation patterns of its internal model). There was no external input that had injected this skill or the encoding of that skill into the network.
The operation via its simple rules plus interaction with the cart & pole accomplished all that by itself. Of course, the whole program is written by an intelligent agency (the programmer) and is running on an intelligently designed and built computer. One can look at these ingredients as front loading -- the network's rules of operation and rules of interaction with the cart and the pole are given upfront. But there was nothing that gave it upfront either the skill to control the cart or the encoding of that skill so it can apply it any time later. All these intelligent extras came out as the result of operating under the simple rules of link modification (which are like toy physics laws or like toy trading network rules) and interaction with the 'environment' (cart & pole). Neither the anticipatory, goal directed behavior nor the internal modeling of the environment nor the internal encoding for that model had to be front loaded -- they came out entirely from the much simpler direct rules. All of the above are the well known, uncontroversial facts about neural networks.

The interesting stuff happens when you follow up the implication of augmenting the seemingly unrelated network models of Planck scale physics (pregeometry models) with the adaptable links of neural networks -- you end up with super-intelligent Planckian networks (my term), capable of generating physics, as well as explaining the fine tuning of physics for life, serving as the intelligent agency behind the origin of life and its evolution. As with the pole balancing network, you don't need to input any of this intelligence via front loading. You do of course need to front load the rules of the neural network (link adaptation rules) into the initial system, as a set of givens that don't explain themselves (i.e. Planckian networks don't explain their origin). But these givens are simple, local rules of operation of 'dumb' links and nodes which are not any more complicated or assumption laden than conventional laws of physics about 'dumb' particles and fields. In other words, you don't need to front load anything remotely resembling the kind of intelligence that we see manifesting in live organisms -- that comes as an automatic consequence of the initial, much simpler assumptions. The relation of the 'mind stuff' with these computations is explained in posts #59 and #109. nightlight
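[The "valleys in play-doh" picture in the preceding comments maps directly onto a classic Hopfield-style associative memory. The sketch below is a standard textbook construction, offered only to illustrate the attractor idea, not nightlight's own code: a few canonical +/-1 patterns are stored in the link weights (Hebbian rule), and a noisy input then rolls "downhill" to the nearest stored pattern. The pattern sizes and noise level are assumed for illustration.]

```python
import random

# Standard Hopfield-style associative memory (textbook construction, for
# illustration of the "attractor valley" picture above). Canonical +-1 patterns
# are stored in the link weights; a noisy input then settles into the nearest
# stored pattern's valley.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n   # Hebbian link adjustment
    return w

def recall(w, state, sweeps=5):
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):                  # asynchronous node updates
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

random.seed(0)
canonical = [[random.choice([-1, 1]) for _ in range(40)] for _ in range(3)]
w = train(canonical)

noisy = list(canonical[0])
for i in random.sample(range(40), 6):            # flip 6 of 40 bits ("damage")
    noisy[i] = -noisy[i]

recalled = recall(w, noisy)
matches = sum(a == b for a, b in zip(recalled, canonical[0]))
print(matches, "of 40 bits match the stored pattern")   # typically 40 of 40
```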
Box: I have to admit that chess programs are stronger than humans. The way I see it is that chess involves much more calculating than I used to think.
Right. Computers (machines intelligently designed by humans) are faster than humans at calculating. My PC can calculate millions of floating point operations per second. Humans have the insight and foresight (both properties of intelligence) to harness nature this way. But computers didn't build or program themselves. Some smart people did it because of their insight and foresight. Humans are pretty cool. And so are many of their designs. CentralScrutinizer
About chess programs. I have to admit that chess programs are stronger than humans. The way I see it is that chess involves much more calculating than I used to think. I used to think that chess was 50% calculation and 50% 'overview' (strategic thinking). Now I think it is about 95% calculation - if one can make such a general claim, because it depends on the position. I have been a strong chess player for many years and I can assure each and everyone that computer chess is very bad at strategy. Computers don't have 'overview' - nada, zilch. That's why they can't play Go - which is probably a much more strategic game than chess. It's all calculations and some programmed general 101 guidelines for strategy. But you cannot teach matter to be conscious. And there is no overview without consciousness. I can still draw the top chess programs - about every other game. But I have to admit I cannot win. And I know how to draw those games, because I'm well aware of their weaknesses. And I can always show where the computer goes strategically wrong. Box
Nightlight (164): So, computing physics for your functionality over few seconds is a massive and complex computational task that we can’t dream to ever approaching with all of our intelligence and technology put together.
Good point. My ‘universe filled with Max Plancks’ was intended to be metaphorical rather than anthropocentric. What I meant to say was that your theory predicts a vivid super-intelligent universe – in any shape or form - rather than the comatose inert universe at hand. BTW what are the planckian networks up to when they self-organize into stars? How promising is the self-organized star formation trajectory in relation to expressing intelligence for the average self-respecting Planckian network? Undoubtedly a bright future lies ahead, but some may wonder what the prospects for happiness are at those network demolishing temperatures.
Nightlight (164): Similarly, during morphogenesis, their internal model has a ‘picture’ of what they are constructing. That ‘picture’ would certainly not look like anything you see with your senses and your mind looking at the same form. But it looks like what they will perceive or sense when it is complete.
So in the case of the monarch butterfly, the internal models that the biochemical networks run first picture the larva body plan in their mind, and after its completion they picture the butterfly body plan in their mind? How do you explain that distinct body plans originate from the same source? Box
NL you state:
we need to get the right perspective first
Okie dokie NL, please give us a little perspective and cite your exact empirical evidence that computers can generate functional information above and beyond what they were originally programmed by a 'mind' to generate, or find, in the first place. Your references to computer programs and calculators being 'intelligent' are ludicrous and simply will not cut it as to empirical evidence for what you are radically claiming for true 'consciousness and intelligence' being inherent within the computer programs and calculators. Brute force computational ability does not intelligence nor consciousness make. Nor does redefining science so that it serendipitously includes your desired conclusion, and excludes Theistic conclusions, make you scientific. The assumed a-prioris you take for granted in your bizarre conjectures are gargantuan, and this is without, as far as I can tell, even an inkling of validation, or grounding, from hard empirical science. As far as I can tell, without such firm grounding in observational evidence, you have drifted, in the apparent full delusion of pride in the incoherent 'word salad' descriptions you have given to us, into a full fledged pseudo-science, no better than tea-leaf reading or such, without any real or true confirmation for others to see as to you being firmly grounded in reality. This is simply unacceptable scientifically, and for you to insist that programs and calculators 'prove' your point, without such a demonstration of information generation, is to severely beg the very question being asked as to computers and consciousness!
Epicycling Through The Materialist Meta-Paradigm Of Consciousness GilDodgen: One of my AI (artificial intelligence) specialties is games of perfect knowledge. See here: worldchampionshipcheckers.com In both checkers and chess humans are no longer competitive against computer programs, because tree-searching techniques have been developed to the point where a human cannot overlook even a single tactical mistake when playing against a state-of-the-art computer program in these games. On the other hand, in the game of Go, played on a 19×19 board with a nominal search space of 19×19 factorial (1.4e+768), the best computer programs are utterly incompetent when playing against even an amateur Go player.,,, https://uncommondesc.wpengine.com/intelligent-design/epicycling-through-the-materialist-meta-paradigm-of-consciousness/#comment-353454 Signature In The Cell - Review Excerpt: There is absolutely nothing surprising about the results of these (evolutionary) algorithms. The computer is programmed from the outset to converge on the solution. The programmer designed to do that. What would be surprising is if the program didn't converge on the solution. That would reflect badly on the skill of the programmer. Everything interesting in the output of the program came as a result of the programmer's skill-the information input. There are no mysterious outputs. Software Engineer - quoted to Stephen Meyer http://www.scribd.com/full/29346507?access_key=key-1ysrgwzxhb18zn6dtju0 Can a Computer Think? - Michael Egnor - March 31, 2011 Excerpt: The Turing test isn't a test of a computer. Computers can't take tests, because computers can't think. The Turing test is a test of us. If a computer "passes" it, we fail it. We fail because of our hubris, a delusion that seems to be something original in us. The Turing test is a test of whether human beings have succumbed to the astonishingly naive hubris that we can create souls.,,, It's such irony that the first personal computer was an Apple. http://www.evolutionnews.org/2011/03/failing_the_turing_test045141.html Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information.,,, The basic problem concerning the relation between AIT (Algorithmic Information Theory) and free will can be stated succinctly: Since the theorems of mathematics cannot contain more information than is contained in the axioms used to derive those theorems, it follows that no formal operation in mathematics (and equivalently, no operation performed by a computer) can create new information. http://cires.colorado.edu/~doug/philosophy/info7.pdf Evolutionary Computation: A Perpetual Motion Machine for Design Information? By Robert J. Marks II Final Thoughts: Search spaces require structuring for search algorithms to be viable. This includes evolutionary search for a targeted design goal. 
The added structure information needs to be implicitly infused into the search space and is used to guide the process to a desired result. The target can be specific, as is the case with a precisely identified phrase; or it can be general, such as meaningful phrases that will pass, say, a spelling and grammar check. In any case, there is yet no perpetual motion machine for the design of information arising from evolutionary computation.,,, "The mechanical brain does not secrete thought "as the liver does bile," as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy. No materialism which does not admit this can survive at the present day." Norbert Wiener created the modern field of control and communication systems, utilizing concepts like negative feedback. His seminal 1948 book Cybernetics both defined and named the new field. "A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ,,,there is no known law of nature and no known sequence of events which can cause information to originate by itself in matter. Werner Gitt 1997 In The Beginning Was Information pp. 64-67, 79, 107." (The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.)
bornagain77
No volition means no personhood, ergo, no intelligence; merely data. Is there not a radically qualitative distinction between the dynamism of energy vivifying matter and the energy vivifying a living creature? And a further radically qualitative distinction between the volition of the creature without free will, i.e. a limited kind of personhood, and that of a human being with free will - and a moral dimension? (In which latter case, however, the psychopath would seem to present a puzzle? Or are psychopaths, too - at least, those not so afflicted as a result of a brain injury - born with, at least, an inchoate potential for a moral sense?) Axel
Phinehas #163:

Thus, the networks we create will never be more intelligent than we are and we will never be more intelligent than the network that created us. Or something like that

It's a bit more subtle than that. For example, chess programmers routinely lose chess games against their creations. A pocket calculator calculates faster than the engineers who designed it or the technicians who built it. It just happens that the post #164 right above explains this same topic in more detail. nightlight
Box #162:

I'm not going to let you off the hook so easily,

Thanks. I appreciate challenging questions, such as those you bring up, since they make me follow paths I probably wouldn't have thought of visiting on my own.

Everything is so unfathomably intelligent starting from the bottom and skyrocketing in ever more increasing intelligence that your theory rendered itself incapable of explaining non-intelligence. It is not able to explain why there is no overcrowded universe filled with Max Plancks.

That reveals a highly anthropocentric perspective which enormously underestimates the difficulties and amounts of computation needed for different problems in the whole picture. So, we need to get the right perspective first. The networks operating at smaller scales are computationally more powerful, with the ratio of computing powers scaling as L^4, where 1/L is the scale (of the cogs or of the elemental components intelligent agents are working with). Namely, a factor of L^3 is due to the ability to fit L^3 times more cogs of length 1/L than of length 1 (unit) into the same space (or into the same amount of matter-energy). The additional factor of L is due to shorter distances, allowing for L times quicker signaling (faster CPU clocks) between components of size 1/L than for components of size 1. But the task these more powerful, denser networks are solving is computationally far more demanding than the tasks at larger scales.

Imagine someone trying to solve all the equations of physics involved in you typing a sentence it took you a few seconds to compose and type. If we took all the computers in the world and dedicated them just to that task, of computing the physics needed for you to type one sentence, in those few seconds they might solve the actions of a few smaller molecules, and even that little only very approximately, i.e. if you were to let such solutions go for a millisecond, you would unravel into components, that's how badly they would diverge from the correct behaviors. The computing gear doing that job would occupy a state-sized facility and require proportionately huge power for all the gear. Yet, the Planckian networks working in a fraction of that space (just your body) compute all that physics exactly, to perfection, in real time, for every particle (every photon, electron, quark,...) in your body. So, computing the physics for your functionality over a few seconds is a massive and complex computational task that we can't dream of ever approaching with all of our intelligence and technology put together. The next layer, the biological functions of your body as you think of and type the sentence, is a minor refinement, a droplet in the sea of the computations needed for computing its physics. In turn, the computation that you did to think up your sentence and type it is a microscopic droplet in the sea of the biological computations that kept your body going for those few seconds. Glancing over those YouTube videos on the operation of cellular nano-technology of just one molecular machine, such as ATP synthase, inside one organelle in one cell (among trillions of cells in your body), churning out ATP at a furious 10,000 RPM pace... it's obvious that just the work & computation of one cell would easily exceed any large industrial city in the amount of logistics and problem solving computations done at the human level, let alone your work in composing and typing one sentence. Hence, what for us seems like a human genius at work, whether it is Planck, Einstein,...
or whoever, is an infinitesimally tiny speck of intelligent computation going on in a sea of intelligent computations by the underlying networks in that same space and time. So, producing Plancks or Einsteins is a very, very small fish to fry in the more complete perspective. Similarly, our computational contribution to the harmonization of this small corner of the universe is equally infinitesimal compared to that which was computed by the underlying layers of networks. However small, though, our contributions are still irreplaceable and invaluable since nothing else can provide them at our human scales. We were designed and constructed to figure out and do the jobs at our scales that have to be done and that nothing else can presently do as well. Consider, for example, the task of fixing a broken bone. The two sides of the fracture have broken through the skin and are an inch apart. However smart and powerful at molecular engineering the biochemical networks are, they can't bring those two pieces together and align them for the job of fusing the fragments to begin. For that, they need that gigantic 'dumb' brute with his little speck of computational intelligence, the surgeon, to pull the fractured pieces together, align them just right, then fix them in that position with a plate. Only then can the cellular biochemical networks get down and do their lion's share of the work, fusing the two fragments at the cellular and molecular levels so they become one live bone again. While we can't dream of ever achieving anything like the latter feat, the biochemical networks without the 'dumb' brutes, such as humans, could not dream of doing on their own the first step that the 'dumb' brute did, aided by his tiny speck of computation. Hence, intelligence at each level is highly specialized and optimized for the specific kinds of tasks and problems of that scale. While the magnitudes of computation and resulting intelligence vastly differ at different scales, increasing as L^4 at lower scales 1/L, the specialization makes each one irreplaceable and necessary. At present, the Planckian networks have figured out no better way to do the tasks and solve the problems at our scale than through us, the way we do them with our little specks of intelligence controlling our bulk and brute force, at least in this little corner of the universe. As to why they wouldn't design and construct millions of Max Plancks or equivalents (imagine that nightmare world), I would guess they figured such a solution to be suboptimal, compared to, say, having one Max Planck equivalent for every x1 thousand farmers, x2 thousand truck drivers, x3 thousand bakers, x4 thousand nurses, x5 thousand cheerleaders,... Another relevant constraint on what can be done is the hierarchy of laws and the prohibition against violations of lower level laws (or harmonized solutions) from the higher levels. Hence, a life cannot be conjured at will without the right ingredients produced and brought all together at the right place under the right conditions. Putting together a massive star, having it go supernova to make the atomic ingredients needed for life, takes a bit of doing and then a bit of waiting for the furnace to reach its temperature. While economies of scale do help (larger stars will go supernova quicker than smaller stars), it still takes a lengthy gathering of hydrogen & helium gasses via gravity to get enough material, pack it densely enough to light the fusion, etc. 
Considering the 10^80 factor edge in computing power of the Planckian networks over our own networks of neurons, we surely have no basis or right to second-guess whether the way that is being done is the best that can be done with what is available. For us, it is a godlike perfection and the best of all possible worlds, for all practical purposes. ... one cannot explain the whole from its parts. What we see in organisms is top-down organization from the level of the whole organism. We cannot reconstruct the pattern at any level of activity by starting from the parts and interactions at that level. There are always organizing principles that must be seen working from a larger whole into the parts. Obviously, a dumb trial and error, putting the parts together every which way until a viable form comes out, would be absurd. The way it is done is the way you build or make something -- you first do all the arranging in your mind, as computed by the networks of your neurons (see post #109 on the body-mind aspect), where it is a lot cheaper and a lot quicker to figure it out and try it out than in the real physical world. The same kind of intelligent construction process goes on in the internal models that the biochemical networks run in their mind before committing to the construction in the physical world. As sketched in post #116, these networks are goal-oriented anticipatory systems with the mind stuff, just like your brain, except computationally much quicker and smarter. Of course, the latter superiority is within their specialty and on their scales, e.g. they can't read this sentence or type on the keyboard (for those little bits of work, they built you). Similarly, during morphogenesis, their internal model has a 'picture' of what they are constructing. That 'picture' would certainly not look like anything you see with your senses and your mind looking at the same form. But it looks like what they will perceive or sense when it is complete. nightlight
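A quick back-of-the-envelope check of the scaling claim above, offered only as an illustration of the commenter's own numbers (particle scale roughly 10^-15 m, Planck scale roughly 10^-35 m, computing power claimed to scale as L^4); the variable names are mine, not nightlight's.

```python
# Sketch of the claimed scaling: power ~ L^4 when components shrink by 1/L
# (L^3 from packing density, one more factor of L from shorter signalling paths).
particle_scale = 1e-15   # metres, rough scale of "elementary" particles
planck_scale = 1e-35     # metres, the Planck length

L = particle_scale / planck_scale   # size ratio, about 1e20
advantage = L ** 4                  # claimed computing-power ratio

print(f"L = {L:.0e}, L^4 = {advantage:.0e}")   # prints L = 1e+20, L^4 = 1e+80
```

This is where the "10^80 factor edge" quoted in the comment comes from, given those two length scales.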
Box:
Everything is so unfathomably intelligent starting from the bottom and skyrocketing in ever more increasing intelligence that your theory rendered itself incapable of explaining non-intelligence.
I'm not going to claim to understand nightlight's theory, but my take was that he was saying intelligence decreased as you moved up in size. Almost as though scaling involved some sort of information entropy. Thus, the networks we create will never be more intelligent than we are and we will never be more intelligent than the network that created us. Or something like that. :P Phinehas
Box(159): If everything is so unfathomably intelligent, from the intelligent elemental building blocks that form the conscious super-intelligent Planckian networks, that form the elementary particles, all the way up, why is it that the universe is so unintelligent and lifeless?
Nightlight(161): There are limits on what can be computed in a given time on a given amount of hardware, no matter how powerful the computer is. The computations are building up from smaller to higher scales, as illustrated with the multi-dimensional, multi-level crossword puzzle metaphor in post #141. As the coordination of computations is extended, the economies of scale squeeze out more inefficiencies and boost the computing power of the overall system. That still only pushes the boundary of the possible out a bit, but the boundary still exists.
I’m not going to let you off the hook so easily, because I’m pretty sure I’m on to something here. Your theory involves an utmost attempt to explain intelligence bottom-up. In fact it is obvious that your theory has a much better chance of succeeding than plain old naturalism. Unfortunately the looming success of your theory has become its main problem. Everything is so unfathomably intelligent starting from the bottom and skyrocketing in ever more increasing intelligence that your theory rendered itself incapable of explaining non-intelligence. It is not able to explain why there is no overcrowded universe filled with Max Plancks. You mention time as a boundary, but there are billions of stars in the galaxy that are billions of years older than the sun. But let’s forget about obtuse planets and stars; most organisms are also bad at math. Come to think of it, most people are too. I’m also arguing a more principled case against your theory, 'one cannot explain a whole from its parts'. Maybe you care to give your opinion on the examples I presented in post #154. Box
Box #159: If everything is so unfathomably intelligent, from the intelligent elemental building blocks that form the conscious super-intelligent Planckian networks, that form the elementary particles, all the way up, why is it that the universe is so unintelligent and lifeless? There are limits on what can be computed in a given time on a given amount of hardware, no matter how powerful the computer is. The computations are building up from smaller to higher scales, as illustrated with the multi-dimensional, multi-level crossword puzzle metaphor in post #141. As the coordination of computations is extended, the economies of scale squeeze out more inefficiencies and boost the computing power of the overall system. That still only pushes the boundary of the possible out a bit, but the boundary still exists. At present, in this corner of the universe we (humans & our societies) are the edge of the technological advance, the best solution the Planckian networks could compute around here. As we're all well aware, harmonization of computations or actions at the level of groups of humans is still quite an incomplete job. We're the technology which was designed by the Planckian networks to solve these harmonization problems, the best one they have, and we're doing it the best we know how. There isn't an omniscient, omnipotent solver or a cheat sheet to short-circuit the job. Computation has to take what it takes to complete at our scale; 1+1 cannot become 3 no matter how convenient or useful that might be sometimes. Our contribution is a small refinement, a finer tuning of the capacities and efficiencies already achieved by the heavy lifters at the levels below ours, such as the biochemical networks making life possible or the Planckian networks making physics & chemistry possible for the latter. A grain of salt is needed here when speaking of these levels (physics, chemistry, biology) as discrete, cleanly separate concepts. These layers are an artifact of the cognitive coarse graining we have to settle for due to our human limitations in comprehending all the intricacies and finesse of the patterns computed by the Planckian networks. They're not computing the laws of physics, chemistry, biology,... separately, but as a whole, single live pattern advancing as computed, some features of which we label as layers of laws at different levels. That's similar to seeing a discrete set of a few rainbow colors in what is in fact a continuous spectrum of a virtually infinite number of distinct colors. Laws are thus not reducible between "layers", e.g. biology doesn't follow from the laws of physics, just as laws of social organization don't follow from the biological laws of the human organism. Biology is only consistent with the laws of physics, but since the laws of physics are of a statistical nature at their foundation (quantum theory), the consistency constraint leaves plenty of room for finer tuning at higher layers, i.e. for finer details of the computed whole patterns which are not captured by the coarse-grained regularities we conceptualize as laws of physics. Hence, the concept of "laws" altogether is a limited tool for conceptualizing and describing the full, whole patterns computed by the Planckian networks. In addition to capacity vs problem difficulty limitations, there are additional constraints on the computations, the general rules of the game. The most important one is that higher levels cannot violate harmonization already achieved at the lower levels, e.g. 
we, who are at the biological level, cannot violate the laws of physics, just as the laws of physics cannot violate the laws of the computations of the Planckian networks (e.g. we cannot reach down and tweak the cogs of the Planckian networks through some physical contraption). Allowing for any such violations would invalidate the harmonization (or solutions, in the crossword puzzle picture) achieved at the lower layers by computing systems which are far superior in their computing capacity to us (our computations are merely a finer tuning, little corrections to the least significant digit, as it were, of the solutions computed by the heavy lifters). The resulting loss of harmonization at the lower layers (via violations of the laws of physics) would cost far more in lost computing capacity than the tiny addition we might be able to get in return for such a violation. These additional large costs would result from the loss of mutual predictability between the cogs at smaller scales (since the mutual predictability is the key lever of the economies of scale). It's the laws (or the regularity of patterns) that make the predictability possible, thus their violation would drive the system into a lawless, everyone-for-himself, inefficient state of operation. An immediate consequence of the above rule is that actions are local, limited by the speed of light and the physical forces and laws. Hence, harmonization is local as well, i.e. at larger scales the chaos still rules, and that can throw a monkey wrench into any local advance. For example, as a result of the large scale chaos, a large asteroid, following its own happiness, could strike Earth almost any time, and we may not be able to do anything to deflect it at present. Once our technological harmonization extends into the larger solar system, then some level of such chaotic reversal can be prevented (e.g. short of another star heading our way). We are in fact an intermediate level of the technology designed by the underlying networks as a way to compute how to achieve that level of harmonization and preclude chaotic setbacks of that kind, and whatever we build for our stretch of that task is then the rest of that protective technology. Another of the implications of this bottom-up superiority of laws is that at our level, any fully harmonized social system will not be able to violate the individual's 'pursuit of happiness' (which is the primary law of the human individual), assuming we're at that time evaluated as a technology that should carry on. Obviously, we're still quite a bit away from computing that level of social scale harmonization. The tuning and adjustments needed for that level of harmonization will have to modify both sides, i.e. while the social rules will obviously need to evolve, the humans who will live in such a fully harmonized society will also very likely not include the full spectrum of human variety present today. Otherwise the sanctity of the individual's 'pursuit of happiness' could easily backfire, as you can easily imagine considering all the stuff that makes some people happy nowadays. 
At the extreme end, the further computations by the Planckian networks and their larger scale technologies (including us and our computers) may eventually reveal that carbon-based technology (humans) is altogether unsuitable (suboptimal) for the job, and silicon-based or some other technology will carry on the harmonization beyond some point, just as dinosaurs and countless other carbon technologies were computed as being suboptimal at various points and were replaced with more suitable, improved technologies. All we can do is to continue contributing to the harmonization process the best we can, to prove ourselves worth keeping. Regarding the main question, "where are they?", the above limitations point to one possibility -- it is not easy to produce life. Consider how much production has to happen, from supernovas cooking up heavy elements, then exploding to scatter their products so that potentially habitable planetary systems can form, provided lots of other conditions line up just the right way at the right place. Since that is apparently the best technology the Planckian networks were able to compute so far for the job, it may be that life is indeed very rare. It may also be that we don't know how to recognize it in what we already see or what is reaching us. There could be high-tech live entities which are a lot smaller or a lot larger than our imagination could conceive. Or they may operate at spectral ranges we don't watch for. Or appear as something we don't expect life ought to look like. It may also be that beyond a certain point of technological advance, far more efficient communication technologies arise which don't scatter and waste away as much energy into the universe as our present technologies do, thus becoming invisible from far away. For example, if you look at the biological organisms, which are computed by the biochemical networks (keeping in mind the "grain of salt" above) -- we can only envy the energy efficiency of that nano-technology, which scatters and wastes very little into stray EM radiation as it coordinates operation between trillions of cells. If our technology were to reach that level of efficiency, we would probably be EM-undetectable from the Moon. Another possibility is that carbon life isn't the best solution for large scale harmonization and we're just a try that will turn out to be a dead end and get discarded when that gets figured out. So, the question is a bit like asking someone "if you are so smart, why aren't you rich," as if being rich is the smartest thing one can do. nightlight
Re the human mind and a computer's artificial intelligence, it all comes down to the activities or procedures of the agent; whether it is the proximate exercise of these by the human mind, or their ultimate exercise by the software writer, the intelligence depends on the human will, volition, which is just as crucial as the other faculties of the human soul: the memory and the understanding. NL seems to have set out to explain intelligence as a product of matter, as his basic assumption, and then proceeded to convince himself of the veracity of the ever-more imaginative and complex edifice that he went on to create. Without reference to human volition, his excogitations can never arrive at the truth of the matter. Axel
Nightlight, If everything is so unfathomably intelligent, from the intelligent elemental building blocks that form the conscious super-intelligent Planckian networks, that form the elementary particles, all the way up, why is it that the universe is so unintelligent and lifeless? How can it not be intelligent? How can this self-organizing, self-learning super-intelligence present itself as e.g. the obtuse planet Mars? Box
Here's an interesting quote from lecture notes of Scott Aaronson: Lecture 11: Decoherence and Hidden Variables - Scott Aaronson Excerpt: Look, we all have fun ridiculing the creationists who think the world sprang into existence on October 23, 4004 BC at 9AM (presumably Babylonian time), with the fossils already in the ground, light from distant stars heading toward us, etc. But if we accept the usual picture of quantum mechanics, then in a certain sense the situation is far worse: the world (as you experience it) might as well not have existed 10^-43 seconds ago! http://www.scottaaronson.com/democritus/lec11.html bornagain77
NL, your entire argument against quantum non-locality fails for the simple reason that the entire universe was brought into being non-locally (i.e. by a beyond space and time, matter and energy, cause). For you to argue against quantum non-locality when the entire universe originated in such fashion is 'not even wrong' to put it mildly! :) ,,, bornagain77
"Creating a universe would seem to require an intelligence that is external and causally prior to the universe" nightlight
Nope, that doesn’t follow.
Of course it follows. A universe cannot create itself. In order to do that, it would have to exist before it existed, which is absurd.
As explained in #19 and #35, with adaptable networks you can have a form of intelligence which is additive, i.e. you start with relatively ‘dumb’ elements (nodes & links), using simple rules to change their states and modify links (unsupervised learning), which would cost no more in assumptions than regular physical postulates.
Even the most optimistically conceived process of additive intelligence cannot serve as an ex nihilo creator or facilitate retroactive causation.
I think that this type of computational notion of intelligent agency would have served ID a lot better than the scientifically undefined ‘mind’ or other concepts that don’t have counterparts in natural science.
ID methodology does not posit or make provisions for a scientifically undefined mind.
Natural science has no a priori problem with having intelligent agency as an element.
Natural science, as defined by the National Center for Science Education, does have a problem with intelligent agency as an explanatory element in biology, as do many other influential agencies. This is a problem. StephenB
bornagain77 #147: I guess that is why NL went after Quantum Non-locality so hard (Bell's theorem violations), since it undermines his entire framework (though his framework is shaky from many different angles anyway). Not at all, I knew Bell's inequalities "violation" was a dead end long before I ever heard of neural networks or of Planck scale pregeometry models. Now that you brought that up, let me 'splain a bit how all that went. After reading hundreds of papers and dozens of books for a master's thesis on "Quantum Paradoxes" (that was in the old country), I was more perplexed about the problems than when I started, when I knew only the QM textbook material. Then I 'came to America', the land of milk and honey. After grad school at Brown (where I worked on problems of quantum field theory and quantum gravity, doing my best to forget everything about the perplexing "quantum paradoxes"), I went to work in industry and got a chance to get into a real world quantum optics lab (a clean room instrumentation company), where they do exactly the type of coincidence experiments on photons that had supposedly "proven" (modulo loopholes) violations of Bell inequalities (BI). That's when it struck me that with all the massive reading and theorizing, all I knew about it was completely wrong. It basically comes down to what becomes instantly obvious in the real world lab -- the origin of the apparent QM non-locality (as implied by the BI "violations") is the explicitly non-local measurement procedure. Namely, to simultaneously measure the 2 spins (or polarizations, for photons) of a pair of photons A and B, the actual real lab procedure gets the result on photon A (clicks on the two detectors, +1 and/or -1), then accepts or rejects the result obtained on B based on the result obtained on the remote photon A, leaving the filtered pair results as the final pair event counts. Yet the assumption behind the BI derivation is that the two measurements on A and B are local and completely independent from each other. As an illustration, imagine The Master claiming telepathic powers by arranging a procedure like this: the 'sender' writes down the number 1 or 2 he is thinking of; the Master, who is the 'receiver' in the other room, writes down his guess, 1 or 2. All good and fine so far. Then the Master gets the sender's slip, puts it face up next to his, quickly glances down, and after a moment of meditation to consult with higher powers, declares the judgment of the higher powers: 'experiment is valid' (results count) or 'experiment is invalid' (result is discarded). I see. Yep, I am definitely going to invest in the Master's wireless telecommunication company that needs no electric power or data centers to work (the analogue of quantum computing). But then some doubting Thomas starts challenging the Master's claim, pointing out the suspicious glance at the sender's slip. The Master dismisses it: oh, that's just an innocent loophole, a stopgap measure until we develop a more ideal coupling channel with the higher powers. The current imperfect coupling requires that both slips must sit there face up for a second. It would be absurd to imagine that the improved coupling would yield worse results, when even the current imperfect coupling already demonstrates the immense power of this transmission technology. 
That's precisely the kind of verbal weaving and weaseling used by the 'quantum magicians' to dismiss the half-century-long, uninterrupted chain of failures to obtain the "loophole free" BI violations (the reasoning about the improved technology and the absurdity of doubt is literally from John Bell's paper, only translated from physics jargon to the Master's experiment). This is exactly what struck me in the real world quantum optics lab, where it dawned on me that somehow, through all that reading and long discussions with 3 professors, I was, as it were, kept unaware of the Master's 'quick glance' over the other slip before the meditation. It was like watching a stage magician from behind the curtain and slapping my forehead: oh, that's how he does it. That little insignificant bit of allegedly mere experimental trivia was just glossed over, somehow never reaching my consciousness. Whatever one may think of Zeilinger and the rest of the 'quantum magic' brotherhood, you can't but admire the art of verbal misdirection they have honed to absolute perfection over the decades. You can watch it a hundred times from a foot away, and it will still dupe you every single time. That's how good they are. As I got deeper into Quantum Optics, I found that there is another measurement theory (MT) that quantum opticians use, developed in 1964-5 by Roy Glauber [1], based on the Quantum Electrodynamics (QED) model of photodetection. The MT of regular quantum mechanics (QM), as taught to students and as used by Bell for his theorem, was developed in the 1930s by von Neumann, Bohr, Heisenberg, Schrodinger and others. Quantum opticians use the newer one, Glauber's QED MT, because QED is a deeper theory of photons than QM, and it tells them exactly what they should get and how to get it. Glauber's MT is pretty heavy reading, though, with a 60+ page proof of the main result [1], and I have yet to find a physicist working on BI violations or quantum computing, quantum crypto, etc., who has ever heard of it, let alone gone through the proof. Even the quantum opticians, who use that theory in daily practice, learn it in a simplified, engineering form, like a cooking recipe, without bothering with proofs. Having a particularly strong motivation for the matter because of the previous thesis subject and the resulting perplexity, I took the trouble to work my way through the long and dense primary source [1] (using up about a couple of weeks of evenings and weekends, in free time from a day job). The critical difference between the QED MT and the QM MT is that the QED MT prescribes, as a result of the QED model of photon measurements, precisely the above non-local procedure for extracting the results on a pair of photons (where you accept or reject results for the pair based on inspection of both results from the remote photodetectors, i.e. via the Master's telepathic scheme with the 'quick glance' step mandated by the theory). In contrast, QM (which doesn't have a detailed theory of photodetection or of quantized EM fields) merely postulates the existence of an "ideal apparatus" for the pair measurement (analogous to the Master's ideal 'coupling channel'), in which the results on A and B are taken locally and independently from each other (i.e. without knowing the remote result before making the pair decision; or, without the Master's glance at the other slip). This "ideal apparatus" is allegedly just around the corner, as soon as the technology of photodetectors catches up with the 1930s QM measurement "theory". 
Glauber's 1960s MT implies that such an apparatus can't exist for photons, as a matter of the more fundamental theory (QED). Yet it is precisely this imagined "ideal apparatus" that allows Bell to derive his inequalities and the prediction that QM violates them (on the "ideal apparatus", that is). Hence, the situation is "interesting", to put it politely. On one hand, you have experiments which don't violate BI, and you have a deeper and newer theory (Glauber's QED measurement theory) which says: what those experiments got is exactly what QED predicts they ought to get (non-violation). On the other hand, you have a weaker (shallower and older) theory of measurement, the QM MT, which says you should get BI violations, but the experiments are still imperfect and in the next few years we will get it to work "loophole free" for sure, this time, just one more round of funding and we're there. Knowing all this, I had no problem or conflict when these pregeometric Planck scale models came out later (in the last 10 years mostly), since I knew with absolute certainty that I need not pay the slightest attention to the Bell inequalities constraint; they are a work of fiction. Folks like t'Hooft, Wolfram, Penrose and others who came up with those pregeometry models, while not familiar with much of the above (especially with Glauber's work), simply overrode the apparent conflict by sheer force of intuition, which told them to just go ahead, this is much too interesting and promising to stop pursuing only because of Bell's theorem, which is kind of weird anyway (t'Hooft thinks it's irrelevant for his pregeometry). --------- refs ------- 1. R. J. Glauber, "Optical coherence and photon statistics" in Quantum Optics and Electronics, ed. C. de Witt-Morett, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63-185. (paywalled pdf, sorry, I have only a hard copy) nightlight
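To make the "Master's glance" point concrete, here is a minimal Monte Carlo sketch of a standard detection-loophole construction (in the spirit of Gisin & Gisin's 1999 local model). It is not a reconstruction of Glauber's QED measurement theory or of nightlight's own derivation; it only illustrates the narrower claim that coincidence post-selection changes the statistics. Each wing's outcome depends only on its own setting and a shared hidden variable, yet once pairs are filtered by coincidence (Bob's detector sometimes fails to fire), the CHSH value computed on the surviving pairs climbs to roughly 2.8, mimicking the quantum prediction, while the unfiltered statistics respect the local bound of 2. All variable names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlations(n, a, b):
    """Local hidden-variable model with one-sided detection failure.
    lam is a shared unit vector; each outcome uses only its local setting."""
    lam = rng.normal(size=(n, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    A = -np.sign(lam @ a)                  # Alice's detector always fires
    d = lam @ b
    B = np.sign(d)
    kept = rng.random(n) < np.abs(d)       # Bob fires with probability |lam.b|
    return (A * B).mean(), (A * B)[kept].mean()

def setting(theta):
    return np.array([np.cos(theta), np.sin(theta), 0.0])

# CHSH settings (the combination usually quoted for the singlet state)
a0, a1 = setting(0.0), setting(np.pi / 2)
b0, b1 = setting(np.pi / 4), setting(3 * np.pi / 4)

S_all = S_coinc = 0.0
for x, y, sign in [(a0, b0, +1), (a0, b1, -1), (a1, b0, +1), (a1, b1, +1)]:
    raw, kept = correlations(200_000, x, y)
    S_all += sign * raw
    S_coinc += sign * kept

print("CHSH over all pairs:        ", abs(S_all))    # stays at the local bound (~2.0)
print("CHSH over coincidences only:", abs(S_coinc))  # ~2.8, mimics the quantum value
```

Nothing non-local happens anywhere in the simulation; the apparent "violation" is produced entirely by which pairs survive the coincidence filter.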
Nightlight (152), thank you for your informative response. I intend to get back to you now that I have a clearer idea of what you are aiming for. For now I would like to repeat that one cannot explain the whole from its parts. What we see in organisms is top-down organization from the level of the whole organism. We cannot reconstruct the pattern at any level of activity by starting from the parts and interactions at that level. There are always organizing principles that must be seen working from a larger whole into the parts. I can provide you with many examples, but instead I will give you just two. - We cannot explain an organism's phenotype from its DNA. A monarch butterfly and its larva, for example, have totally distinct body plans originating from the same DNA. - The whole – the form – can also mold multiple sets of DNA into one organism: chimerism. Box
Philip, I was pretty shocked to read that even the Pontifical Academy of Sciences isn't a safe haven for scientists informed by their theistic, Christian faith/knowledge, the basis of which is now amply proven on a number of grounds. In fact, the Catholic church still seems to be adversely affected by the part-scandal, part atheist-propaganda coup of Galileo's trial. In the Gospel days, in some significant regards, the Christian faith had a different meaning to the faith of later centuries. Notably, commitment to an indigent, homeless, itinerant preacher and his motley band of apostles, all supported by a group of women followers - in right-wing, economic parlance, 'freeloaders', 'panhandlers', 'welfare-scroungers', 'stumble-bums', etc. It meant more or less openly committing oneself to Christ and his Gospel, in the teeth of the threat of banishment from the Synagogue - no small thing in a small, theocratic society. However, in reality, to some extent, the demands of faith obviously changed for most people subsequently, becoming more a matter of credence, even convention than commitment, as to borrow Francis's words, as it became increasingly self-referential and decreasingly evangelical. This led to a bizarre posture of the Church comically adverted to by the late Malcolm Muggeridge, who remarked on the way in which the Catholic church seemed to want to do everything in its power to downplay, almost to deny, the supernatural, when the Church is, in fact, a highly supernatural sacramental phenomenon, notwithstanding the egregiously scandalous periods of its lengthy, institutional history. Muggeridge claimed that if priests stood at the door of their churches on a Sunday morning with whips in their hands, menacing churchgoers, they could scarcely be more likely to drive people away from Christianity. (Perhaps, in reaction, the Charismatic movement was started, which imo rather tends to encourage a more superficial interest in the faith - although far better than denying its supernatural ethos and ambience, of course). Now, it seems to me, the Church really needs to 'get a grip' and go after atheism, bald-headed with the now manifestly indisputable scientific underpinning, not only of theism, but of the Christian faith, itself. Such articles as I have read concerning the ostension of the Holy Shroud of Turin, and quotes of churchmen in its regard, are still, downplaying, indeed, marginalising the confluence of the supernatural with the very latest molecular physics, chemistry, etc, as delineated in that YouTube video on the Shroud, and the evident signs of an event horizon and singularity having manifested. Much should be made of the innumerable indicators of the genuineness of the Shroud, including pollen only found in the Jerusalem area. Also, emphatically, the Sudarium of Oviedo, the history of which was, I believe, recorded in an unbroken fashion from the time of Christ, and the way in which it matches the blood-stains on the Shroud. The one radiocarbon testing carried out, indicating it was fraudulent, since dating from no earlier than the Middle Ages, was itself apparently fraudulent; but the matter for incredulous astonishment to all but us UDers/IDers, is that that radiocarbon testing seemingly renders all the other confirmatory evidence of no value, their verification presumably being of an inferior nature. One Catholic author even referred to appeals to scientific proof as being 'dangerous'! 
Of course, it is understandable that one would have to be absolutely certain of the science, in the normal applicable terms, to adduce it emphatically as proof of Christ's life, death and what looks uncommonly like some kind of scientifically identifiable resurrection. Of course, it remains of paramount importance to emphasise that our Christian faith cannot and does not rely on such scientific findings. Nevertheless, it seems to me that in his own day, when he walked this earth, with rare exceptions (such as raising Lazarus), Christ did not wish to convince everyone of his infinite, divine power, since his appeal would then have been, and would still be, a vapid and meretricious appeal to the head instead of the heart: to the worldly intelligence, instead of the spiritual wisdom of the heart. Nevertheless, it seems to me that we are approaching a new kind of faith paradigm (we should, in fact, have reached it some time ago) in which science should be used to its fullest extent as a Sting, for which it seems, in part, to have been intended. There would be many people today who are not power-lovers, but would profit greatly from such encouragement to believe, in the teeth of the ubiquitous, media-driven materialists' propaganda, which seeks to keep our faith separate from the 'certainties(!) of scientism' - the rationalists' reckless perversion of modern, scientific understanding, in order to disparage Christianity. As for the Pontifical Academy of Sciences, Francis needs to go through it, purging it of its aggressively atheist members, like Christ driving out the money-lenders, with the whip he so carefully plaited. Axel
Box #142: So quarks, photons and such designed Planckian networks? Did they use their intelligence to do that? It's the other way around. The Planck scale is 10^-35 m, while our "elementary" particles are at the 10^-15 m scale. Our "elementary" particles are analogous to gliders in Conway's Game of Life. The Planckian networks would correspond to the computer running that program, hence they are computing our physics (along with biology and up). Check for example Wolfram's NKS or other similar network-based pregeometry models. While there isn't presently a single unifying pregeometry model of this type which could reproduce the entire physics, there are isolated models for each of the major equations/laws of physics (e.g. Maxwell, Schrodinger, Dirac, Lorentz transformations). Although still fragmented, such models provide interesting clues as to what might be going on at that level. If one then considers the adjacent open questions, such as the fine tuning of physical laws and the origin of life problems, both requiring enormously powerful computations to navigate the whole system at the razor edge above oblivion, the augmentation of the Planck scale networks of the physics models into neural networks (Planckian networks) seems the most natural hypothesis. The computing power available via such augmentation is 10^80 times greater than the best computing technology we could ever design using our elementary particles as building blocks. If you then add to the list of clues the ID implications about the massive computing power needed to compute the molecular nano-technology behind life and its evolution, the Planckian networks click in perfectly again, providing exactly what is missing. That's three birds with one stone, at least. One could hardly imagine a stronger hint as to how it all must be put together. They don't need a brain because they just happen to be conscious right? The Planckian networks are the "brain", a distributed self-programming computer, operating as a goal directed anticipatory system via internal modeling algorithms. These are all traits and capabilities available to unsupervised neural networks (see post #116 for a bit more detail). The key element of intelligence is built into (the front loading aspect) the elemental building blocks that form the Planckian networks. As explained in posts #58 and #109, these building blocks have only two states: +1 = (reward, happiness, joy, pleasure, love...) and -1 = (punishment, unhappiness, misery, pain, hate...). The descriptive terms refer to some of the 'mind stuff' manifestations of such states, as they get amplified by the hierarchy of networks up to the human level. One might say the 'mind stuff' is driving the actions of the networks via an optimization seeking to maximize the sums of +1s and -1s, or in human language, the pursuit of happiness is the go of it. You say it all combines, but exactly that is a huge problem for panpsychism: `how do the alleged experiences of fundamental physical entities such as quarks and photons combine to yield human conscious experience'. One cannot explain the whole from its parts. The William James "composition problem" of panpsychism is not a problem in this model, as explained in posts #58 and #109. In short, when your pattern recognizers for "red" are in a "happy" state (the sums of +1s within the recognizer dominate), and the recognizers for "round" are in a happy state, then, if there is a third recognizer "connected" to these two, it goes into a happy state, which is experienced as a "red ball" by you. 
Note that "connected" here is meant in a generalized sense, i.e. including not only neurons connected via axons and dendrites, but also wirelessly, via resonant superposition of electromagnetic fields without any direct cellular contact, which works as well, provided the distant neurons oscillate at the same frequency. nightlight
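For what it's worth, the "red" plus "round" gives "red ball" description above can be mocked up in a few lines. This is my own toy rendering of the two-state (+1/-1) recognizer idea, not nightlight's actual model; the node values are arbitrary assumptions chosen only to show how a higher-level node's state follows from the signs of the nodes feeding it.

```python
# Toy version of the two-state recognizer scheme: each node is +1 ("happy")
# or -1, and a higher-level recognizer goes happy only when the sum of its
# inputs is positive, so "red" and "round" can combine into "red ball".

def recognizer(inputs):
    """A node is +1 when the sum of its inputs is positive, else -1."""
    return 1 if sum(inputs) > 0 else -1

red_detectors = [+1, +1, -1]       # hypothetical low-level states: mostly +1
round_detectors = [+1, +1, +1]

red = recognizer(red_detectors)         # +1: "red" is recognized
round_ = recognizer(round_detectors)    # +1: "round" is recognized
red_ball = recognizer([red, round_])    # +1 only if both feeders are happy

print(red, round_, red_ball)   # 1 1 1 -> the "red ball" recognizer is happy
```

Flipping either feeder to -1 leaves the combined recognizer unhappy, which is the "additive" behavior the comment describes.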
But Phinehas, I do find one thing correct in the 'computational universe' model of Wolfram,,, The universe we measure (consciously observe) is 'information theoretic' at its base:
"It from bit symbolizes the idea that every item of the physical world has at bottom - at a very deep bottom, in most instances - an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that things physical are information-theoretic in origin." John Archibald Wheeler Why the Quantum? It from Bit? A Participatory Universe? Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: "In the beginning was the Word." Anton Zeilinger - a leading expert in quantum teleportation:
But alas, as Anton Zeilinger has pointed out, Theists have, long before Wolfram was even born, been here all along:
"For the scientist who has lived by his faith in the power of reason, the story ends like a bad dream. He has scaled the mountain of ignorance; he is about to conquer the highest peak; as he pulls himself over the final rock, he is greeted by a band of theologians who have been sitting there for centuries." - Robert Jastrow
bornagain77
What does the term "measurement" mean in quantum mechanics? "Measurement" or "observation" in a quantum mechanics context are really just other ways of saying that the observer is interacting with the quantum system and measuring the result in toto. http://boards.straightdope.com/sdmb/showthread.php?t=597846 bornagain77
Of note to the 'randomness' of free will conscious observation being different from the 'external entropic randomness' of the universe: In the beginning was the bit - New Scientist Excerpt: Zeilinger's principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron's spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg's uncertainty principle. http://www.quantum.at/fileadmin/links/newscientist/bit.html bornagain77
'(Caution: there is a foul-mouthed neo-Darwinian dogmatist character posting there, a robot named Thorton, who merely plays back the official ND-E mantras which are triggered by the first few keywords in a post he recognizes, without ever reading or understanding the arguments being made; trying to discuss with him is a waste of time.)' Tell Joe about him, nightlife... then duck, crouch into a ball, covering your face and head as best you can. Axel
Phinehas at 89:
Hey BA, have you looked much into Wolfram’s new kind of science? I’d be interested in your take on it. I only know enough to find it very intriguing, but not enough to seriously evaluate it as a truth claim.
Sorry I have not answered you sooner. Basically, I have not looked too deeply into Wolfram's work, but have only heard of it in passing from a criticism I read by Scott Aaronson:
Quantum Computing Promises New Insights, Not Just Supermachines - December 5, 2011 Excerpt: And yet, even though useful quantum computers might still be decades away, many of their payoffs are already arriving. For example, the mere possibility of quantum computers has all but overthrown a conception of the universe that scientists like Stephen Wolfram have championed. That conception holds that, as in the “Matrix” movies, the universe itself is basically a giant computer, twiddling an array of 1’s and 0’s in essentially the same way any desktop PC does. Quantum computing has challenged that vision by showing that if “the universe is a computer,” then even at a hard-nosed theoretical level, it’s a vastly more powerful kind of computer than any yet constructed by humankind. Indeed, the only ways to evade that conclusion seem even crazier than quantum computing itself: One would have to overturn quantum mechanics, or else find a fast way to simulate quantum mechanics using today’s computers. http://www.nytimes.com/2011/12/06/science/scott-aaronson-quantum-computing-promises-new-insights.html?pagewanted=all&_r=0
And as I pointed out yesterday, we already have very good evidence that quantum computation is being accomplished in molecular biology for 'traveling salesman' problems: https://uncommondesc.wpengine.com/news/from-scitechdaily-study-describes-a-biological-transistor-for-computing-within-living-cells/#comment-451310 But one point I did not draw out yesterday, in the traveling salesman example, is that there are limits to the problems that even quantum computation can solve in molecular biology:
The Limits of Quantum Computers - Scott Aaronson - 2007 Excerpt: In the popular imagination, quantum computers would be almost magical devices, able to “solve impossible problems in an instant” by trying exponentially many solutions in parallel. In this talk, I’ll describe four results in quantum computing theory that directly challenge this view.,,, Second I’ll show that in the “black box” or “oracle” model that we know how to analyze, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform “quantum advice states”,,, http://www.springerlink.com/content/0662222330115207/
And protein folding is found to be an 'intractable NP-complete problem' by several different methods. Thus protein folding will not be able to take advantage of any speed advances that quantum computation may offer to other computational problems that can be solved in polynomial time:
Combinatorial Algorithms for Protein Folding in Lattice Models: A Survey of Mathematical Results – 2009 Excerpt: Protein Folding: Computational Complexity 4.1 NP-completeness: from 10^300 to 2 Amino Acid Types 4.2 NP-completeness: Protein Folding in Ad-Hoc Models 4.3 NP-completeness: Protein Folding in the HP-Model http://www.cs.brown.edu/~sorin/pdfs/pfoldingsurvey.pdf
Thus, even though NL rejects quantum computation, even a 'naturalistic' view of quantum computation is prevented from finding a 'bottom up' path to increased functional complexity at the protein level. Scott Aaronson was more specific in his critique of Wolfram here:
Wolfram's speculations of a direction towards a fundamental theory of physics have been criticized as vague and obsolete. Scott Aaronson, Assistant Professor of Electrical Engineering and Computer Science at MIT, also claims that Wolfram's methods cannot be compatible with both special relativity and Bell's theorem violations, which conflict with the observed results of Bell test experiments.[23] http://en.wikipedia.org/wiki/A_New_Kind_of_Science#The_fundamental_theory_.28NKS_Chapter_9.29
I guess that is why NL went after Quantum Non-locality so hard (Bell's theorem violations), since it undermines his entire framework (though his framework is shaky from many different angles anyway). Yet contrary to the narrative NL has been promoting, that quantum non-locality has been a failure for 50 years, the plain fact of the matter is that quantum non-locality has been making steady progress towards 100% verification, whereas those who oppose quantum non-locality for philosophical reasons, like NL and Einstein before him, have been in steady retreat for 50 years (especially over the last decade or so).
Quantum Entanglement – The Failure Of Local Realism - Materialism - Alain Aspect - video http://www.metacafe.com/w/4744145 Quantum Measurements: Common Sense Is Not Enough, Physicists Show - July 2009 Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables. http://www.sciencedaily.com/releases/2009/07/090722142824.htm
(of note: hidden variables were postulated to remove the need for 'spooky' forces, as Einstein termed them — forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light.) In fact the foundation of quantum mechanics within science is now so solid that researchers were able to bring forth the following proof from quantum entanglement experiments:
An experimental test of all theories with predictive power beyond quantum theory – May 2011 Excerpt: Hence, we can immediately refute any already considered or yet-to-be-proposed alternative model with more predictive power than this. (Quantum Theory) http://arxiv.org/pdf/1105.0133.pdf
Moreover, Quantum Mechanics has now been extended to falsify local realism without even using quantum entanglement to do it:
‘Quantum Magic’ Without Any ‘Spooky Action at a Distance’ – June 2011 Excerpt: A team of researchers led by Anton Zeilinger at the University of Vienna and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences used a system which does not allow for entanglement, and still found results which cannot be interpreted classically. http://www.sciencedaily.com/releases/2011/06/110624111942.htm
bornagain77
'I don’t see how computing targets and algorithmic goals can exist anywhere except in some form of consciousness, nor can I see how there is a “bottom up” pathway to such machinery regardless of what label one puts on that which is driving the materials and processes.' Yes, William, the question of will, volition, remains unanswered, even unaddressed, by nightlight, as far as I can understand your drift, nightlife. Am I correct in thinking that you state that one cannot yet identify the prime mover, since it is the vanishingly small nucleus of a Russian doll-kind of superposition of causes? But, one day....? Axel
NL: I have a bit of prep work to get on with for this evening, so I simply note that my usage of "algorithm" happens to be standard; where BTW a nodes-arcs framework notoriously describes such, per flowcharts ancient and modern [i.e. disguised forms in UML and methods/functions in OO languages -- I see If_else just got built into a Java version]. Also, attributing designing intelligence to biochem rxn sets is a bit odd, and going to particle-quantum networks is even odder. Please cf. Leibniz [IIRC] and the analogy of the Mill. KF kairosfocus
Happy Easter everyone, I have just a moment this morning. It seems like nightlight is using Planck Networks as a way to account for biology and all of biology's output (designed objects) by a unifying principle underlying all physical laws. From comment #109
Q2) Do you agree that intelligent agency is a causal phenomenon which can produce objects that are qualitatively distinguishable from the products of chance and necessity, such as those resulting from geological processes? Assuming ‘intelligent agency’ to be the above Planckian networks (which are a conscious, super-intelligent system), then per the (Q1) answer, that is the creator of the real laws (which combine our physical, chemical, biological… laws as some of their aspects). Since this agency operates only through these real laws (they are its computation, which is all it does), its actions are the actions of the real laws, hence there is nothing to be distinguished here, it’s one and the same thing.
nightlight, I don't want to misrepresent you so please correct or qualify the above. Cheers Chance Ratcliff
Axel #137: Although it is a hallmark of the atheist's credo that a single human being would be similarly insignificant and inconsequential to God; in the starkest contrast with the Christian tenet that Christ would have accepted his crucifixion for just one, single human being. This is a different kind of "front loading" than deism, where the initial mover just sets the cogworks into motion and lets go. While the additive form of intelligence is front loaded into the elemental building blocks, in this perspective there is no separation between creation and creator at any point or any place, since the creation is being upheld (computed) in existence continually, from the physical level and up. This relation is analogous to that between a computer (the analogue of the creator) running Conway's Game of Life (the analogue of the universe), where gliders and other patterns are analogous to our "elementary" particles and larger objects (including us). The computer (which is the creator of this toy universe) upholds it in existence at all moments and at each point of the grid. If the program were to quit its busy work even for a moment, the toy universe would perish instantly. Check for example post #100 on how this phenomenon you brought up, of 'god becoming man', can be modeled in this scheme. nightlight
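For readers unfamiliar with the analogy, here is a minimal sketch of Conway's Game of Life with a glider. The only point being illustrated is the one the comment makes: the update loop (the "creator" in the analogy) recomputes every cell at every tick, and the glider (the "particle") persists only as long as that computation keeps running. The grid size and step count are arbitrary choices of mine.

```python
import numpy as np

def step(grid):
    """One tick of Conway's Game of Life: recompute every cell of the
    toy universe from the current state (periodic boundaries)."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# a glider: a persistent pattern, loosely analogous to a "particle"
grid = np.zeros((10, 10), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for t in range(8):
    grid = step(grid)   # if this loop stopped, the glider would cease to exist

print(grid)  # after 8 ticks the glider has drifted two cells diagonally
```

The glider is never stored anywhere as a thing in itself; it exists only as a regularity in what the loop keeps recomputing, which is the sense of "upheld in existence" used above.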
Nightlight (35): In contrast, “computation” and “algorithms” are scientifically and technologically well accepted concepts which suffice in explaining anything attributed to type of intelligence implied (via ID argument) by the complexity of biological phenomena.
Why do you hold that FSCO/I (or information) does not encompass ‘computation’ and ‘algorithms’? And is it not so that computation and algorithms – like FSCO/I and information – refer to a designer?
Nightlight (35): As well you have a severe blind spot in that it is impossible to account for the origination of these chess playing programs in the first place without reference to an intelligent, conscious, agent(s). Yes, they were designed by intelligent agents, called humans. Just as humans are designed by other intelligent agents, the cellular biochemical networks.
What is your definition of designing and agents? Cellular biochemical networks designed us?? They are agents like us? Panpsychism is truly remarkable.
Nightlight (35): Your body, including brain, is a ‘galactic scale’ technology, as it were, designed and constructed by these intelligent networks, who are the unrivalled masters of molecular engineering (human molecular biology and biochemistry are a child’s babble compared to the knowledge, understanding and techniques of these magicians in that realm).
They are masters who are intelligent, construct, design, understand and have technique and knowledge … Especially how you panpsychists ascribe overview (needed for planning and design) to these networks, photons and quarks is beyond my comprehension. Are you sure you are not speaking metaphorically?
Nightlight (35): In turn, the biochemical networks were designed and built by even smaller and much quicker intelligent agents, Planckian networks, which are computing the physics and chemistry of these networks as their large scale technologies (our physics and chemistry are coarse grained approximation of the real laws being computed).
More intelligent agents. I should not be surprised, because that is panpsychism. Why would they work together? Why doesn't the whole thing just fall apart? What force holds everything together precisely for a lifetime?
Nightlight (35): Since in panpsychism consciousness is a fundamental attribute of elemental entities at the ground level, it’s the same consciousness (answering “what is it like to be such and such entity”) which combines into and permeates all levels, from elemental Planckian entities through us, and then through all our creations, including social organisms.
So quarks, photons and such designed Planckian networks? Did they use their intelligence to do that? They don’t need a brain because they just happen to be conscious right? You say it all combines, but exactly that is a huge problem for panpsychism: ‘how do the alleged experiences of fundamental physical entities such as quarks and photons combine to yield human conscious experience’. One cannot explain the whole from its parts. Box
William J Murray #134: I don't really see how any of what you are saying is threatening in any way to any other ID position. It is the ID position, but the way a theoretical physicist would put it, not the way biologists or biochemists or molecular biologists are doing it (as physics grad students we looked down upon those as soft and fuzzy fields for lesser minds; I've grown up a bit since then, though). It doesn't claim - or even imply - that god doesn't exist or that humans do not have autonomous free will One form of 'free will' within the scheme is as a tie-breaking mechanism within the internal model of the anticipatory system -- after evaluating prospective actions by the ego actor (the counterpart of self within the model) during the what-if decision game running in the model space, if the evaluation for multiple choices is a tossup, since some action has to be taken, one is "willed" as the pick over the alternatives. Another form of free will arises when we realize that the evaluations in model space are recursive (i.e. the model space is fractal), modeling the other agents and their internal models playing their what-if game inside our model-actors of these agents, etc. In such multi-agent cases, the evaluation is highly sensitive to the stopping place, e.g. what seems best at stage 1 of evaluation may become inferior in stage 2, after we account for the fact that the other agent (playing in the model with the ego actor) has realized it, too, hence his action may not be what was assumed in stage 1, thus another choice may become better in stage 2. Hence, the choice of the stopping place while navigating through the fractal space of models nested within models is also an act of free will which affects the decision. The first form can also be seen as "free willing" the stopping place, since the alternative in the case of a tie-break, which is no action until further finer evaluations are complete, is evaluated as inferior to doing something now. It is the ultimate "front loading" postulate (or "foundation loading"), with the fundamental algorithms (pattern recognition and reaction development) built into the substrate of the universe (if I'm understanding you correctly). Yep, that's exactly what it is. Just as with panpsychism, where you need some elemental 'mind stuff' at the ground level to get anything of that kind at the higher levels, here, in order to get intelligence at the higher levels, you need elemental intelligence built into the objects at the ground level. It is the ontological form of the "no free lunch" results about search algorithms. The key requirement was to find the simplest elements which have additive intelligence, and adaptable networks nicely fit that requirement (plus they resonate well with many other independent clues, including models of Planck scale physics). The main strength of the bottom-up approach is that it tackles not just the origin of the intelligence guiding biological evolution, but also the origin of life and the fine tuning of the laws of physics and physical constants (for life). Namely, in this picture the "elementary" particles and their laws (physics) are computational technology designed and built by the Planckian networks, the way humans or their computers may design and build technologies which span not just the globe, as they do today, but the solar system and eventually galaxies. 
With that picture in mind, the fine tuning of physics for life is as natural and expected as the fact that the cogs of the technologies we build fit together correctly: the monitors and keyboards plug into and communicate with PCs, cars fit into the carwash gear, the same electric generators power a vast spectrum of motors, computers and other devices... since they are all designed to work together and combine into the next layer of technologies at a larger scale.

The interesting question is what this whole contraption (the universe) is trying to do, what is it building? Then, what for, why all the trouble? A little clue as to what it is doing comes from inspecting how these networks work at our human level. Each of us belongs to multitudes of adaptable networks simultaneously, such as economic, cultural, political, ethnic, national, scientific, linguistic... Hence these larger scale adaptable networks, which are themselves intelligent agencies, each in pursuit of its own happiness, as it were (optimization of their net [rewards - punishments] score via internal modeling, anticipation, etc.), are permeating each other as they unfold, each affecting the same cogs (human individuals), each tugging them its way. But these larger scale networks are shaped in the image of the lower scale intelligent networks building them, such as cellular biochemical networks, which in turn are built in the shape of the underlying Planckian networks which built them.

The picture that this forms is like a gigantic multi-dimensional and multi-level crossword puzzle, where the smallest cells contain letters, the next larger cells contain words, then sentences, then paragraphs, then chapters, then volumes, then subject areas, then libraries... This crossword puzzle is solving itself simultaneously in all dimensions and on all levels of cells, seeking to harmonize letters so they make meaningful words in each dimension, then to harmonize multiple words so they make meaningful sentences in each dimension, then paragraphs... across the whole gigantic hypertorus all at once. As the lower level cells harmonize and settle into solved, harmonious form, the main action, the edge between chaos and order, shifts to the next scale to be worked out. The higher scales must operate without breaking the solved cells of the previous layers; e.g. we have to operate without breaking physical, chemical and biological laws, which were solved into a harmonious state in the previous phases, by networks which are computationally far more powerful than ourselves (thus having wisdom superior to our own). Now the hotspot of action is chiefly in our court, to compute our little part and harmonize our level of the puzzle. Once completed, the razor edge of innovation shoots up to higher scales, thinner and sharper than ever before, leaving us behind, frozen in a perfect crystalline harmony and the permanent bliss of an electron. nightlight
Further notes on 'free will':

Why Quantum Physics (Uncertainty) Ends the Free Will Debate - Michio Kaku - video http://www.youtube.com/watch?v=lFLR5vNKiSw

Moreover, advances in quantum mechanics have shown that 'free will choice' is even affecting the state of 'particles' into the past:

Quantum physics mimics spooky action into the past - April 23, 2012 Excerpt: According to the famous words of Albert Einstein, the effects of quantum entanglement appear as "spooky action at a distance". The recent experiment has gone one remarkable step further. "Within a naïve classical world view, quantum mechanics can even mimic an influence of future actions on past events", says Anton Zeilinger. http://phys.org/news/2012-04-quantum-physics-mimics-spooky-action.html

In other words, if my conscious choices really are merely the result of whatever state the material particles in my brain happened to be in in the past (determinism), how in blue blazes are my choices instantaneously affecting the state of material particles into the past? Since our free will choices figure so prominently in how reality is actually found to be constructed in our understanding of quantum mechanics, I think a Christian perspective on just how important our choices in this temporal life are, in regard to our eternal destiny, is very fitting:

Is God Good? (Free will and the problem of evil) - video http://www.youtube.com/watch?v=Rfd_1UAjeIA

"There are only two kinds of people in the end: those who say to God, 'Thy will be done,' and those to whom God says, in the end, 'Thy will be done.' All that are in Hell choose it. Without that self-choice there could be no Hell." - C.S. Lewis, The Great Divorce

bornagain77
Moreover NL, it seems to me that, besides attributing consciousness to computer programs, you are claiming that computer programs, and specifically the algorithmic information inherent in the programming of a cell, are capable of creating new information -- just as James Shapiro, of 'natural genetic engineering' fame, claims. But you, just like James Shapiro, have ZERO evidence for this conjecture:
On Protein Origins, Getting to the Root of Our Disagreement with James Shapiro - Doug Axe - January 2012 Excerpt: I know of many processes that people talk about as though they can do the job of inventing new proteins (and of many papers that have resulted from such talk), but when these ideas are pushed to the point of demonstration, they all seem to retreat into the realm of the theoretical. http://www.evolutionnews.org/2012/01/on_protein_orig055471.html
In fact the best evidence I currently know of to support your position that algorithmic information can generate functional information is the immune system. But even this stays within Dembski's Universal Probability Bound:
Generation of Antibody Diversity is Unlike Darwinian Evolution - microbiologist Don Ewert - November 2010 Excerpt: The evidence from decades of research reveals a complex network of highly regulated processes of gene expression that leave very little to chance, but permit the generation of receptor diversity without damaging the function of the immunoglobulin protein or doing damage to other sites in the genome. http://www.evolutionnews.org/2010/11/response_to_edward_max_on_talk040661.html
bornagain77
NL:
Neo-Darwinian Evolution theory (ND-E = RM + NS is the primary mechanism of evolution), carries a key parasitic element of this type, the attribute “random” in “random mutation” (RM) — that element is algorithmically ineffective since it doesn’t produce any falsifiable statement that can’t be produced by replacing “random” with “intelligently guided” (i.e. computed by an anticipatory/goal driven algorithm). In this case, the parasitic agenda carried by the gratuitous “randomness” attribute is atheism.
Save for the fact that we actually can trace down the source of randomness in this universe:
It is interesting to note that if one wants to build a better random number generator for a computer program, then a better source of entropy must be found to drive the increased randomness:

Cryptographically secure pseudorandom number generator Excerpt: From an information theoretic point of view, the amount of randomness, the entropy that can be generated, is equal to the entropy provided by the system. But sometimes, in practical situations, more random numbers are needed than there is entropy available. http://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator

"Gain in entropy always means loss of information, and nothing more." Gilbert Newton Lewis – Eminent Chemist

Thermodynamics – 3.1 Entropy Excerpt: Entropy – A measure of the amount of randomness or disorder in a system. http://www.saskschools.ca/curr_content/chem30_05/1_energy/energy3_1.htm

And the maximum source of entropic randomness in the universe is found to be where gravity is greatest:

Evolution is a Fact, Just Like Gravity is a Fact! UhOh! – January 2010 Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged.

Entropy of the Universe – Hugh Ross – May 2010 Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe's entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated. http://www.reasons.org/entropy-universe

There is also a very strong case to be made that the cosmological constant in General Relativity, the extremely finely tuned 1 in 10^120 expansion of space-time, drives, or is deeply connected to, entropy as measured by diffusion:

Big Rip Excerpt: The Big Rip is a cosmological hypothesis first published in 2003, about the ultimate fate of the universe, in which the matter of the universe, from stars and galaxies to atoms and subatomic particles, is progressively torn apart by the expansion of the universe at a certain time in the future. Theoretically, the scale factor of the universe becomes infinite at a finite time in the future...

Thus, even though neo-Darwinian atheists may claim that evolution is as well established as gravity, the plain fact of the matter is that General Relativity itself, which is by far our best description of gravity, testifies very strongly against the entire concept of 'random' Darwinian evolution because of the destructiveness inherent therein.
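The distinction drawn in the excerpt above, between the entropy fed into a generator and the pseudorandom stream it emits, can be illustrated with a small Python sketch (illustrative only; the seed value is arbitrary):

```python
import os
import random

# Two pseudorandom generators with the same seed produce the same stream:
# nothing beyond the seed's entropy ever enters the output.
gen1, gen2 = random.Random(42), random.Random(42)
print([gen1.randint(0, 9) for _ in range(5)])
print([gen2.randint(0, 9) for _ in range(5)])  # identical to the line above

# Additional unpredictability has to come from outside the algorithm,
# e.g. the operating system's entropy pool.
print(os.urandom(8).hex())  # differs on every run
```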
Moreover, we can now differentiate the entropic randomness found in the universe from the randomness that would be inherent in a free-will conscious agent. Quantum mechanics, which is even stronger than general relativity in terms of predictive power, has a very different source of randomness, a free-will source, which sets it diametrically opposed to the materialistic notion of 'external' randomness:
Can quantum theory be improved? – July 23, 2012 Excerpt: However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (conscious observation) parameters can be chosen independently (free choice, free will assumption) of the other parameters of the theory... the experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power, even when the predictions are completely random. http://phys.org/news/2012-07-quantum-theory.html

Needless to say, finding 'free will conscious observation' to be 'built into' quantum mechanics as a starting assumption, which is indeed the driving aspect of randomness in quantum mechanics, is VERY antithetical to the entire materialistic philosophy which demands randomness as the driving force of creativity!

Moreover, we have empirical evidence differentiating these sources of randomness, i.e. the Quantum Zeno Effect:

Quantum Zeno effect Excerpt: The quantum Zeno effect is... an unstable particle, if observed continuously, will never decay. http://en.wikipedia.org/wiki/Quantum_Zeno_effect

The reason why I am fascinated with this Zeno effect is, for one thing, that 'random' entropy is, by a wide margin, the most finely tuned of the initial conditions of the Big Bang:

Roger Penrose discusses initial entropy of the universe. – video http://www.youtube.com/watch?v=WhGdVMBk6Zo

The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose Excerpt: "The time-asymmetry is fundamentally connected with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the 'source' of the Second Law (Entropy)."

How special was the big bang? – Roger Penrose Excerpt: This now tells us how precise the Creator's aim must have been: namely to an accuracy of one part in 10^10^123. (from The Emperor's New Mind, Penrose, pp. 339-345, 1989)

Moreover, it is very interesting to note just how foundational entropy is in its scope of explanatory power for current science:

Shining Light on Dark Energy - October 21, 2012 Excerpt: It (Entropy) explains time; it explains every possible action in the universe... Even gravity, Vedral argued, can be expressed as a consequence of the law of entropy... The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe... http://crev.info/2012/10/shining-light-on-dark-energy/

Evolution is a Fact, Just Like Gravity is a Fact! UhOh! - January 2010 Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged. https://uncommondesc.wpengine.com/intelligent-design/evolution-is-a-fact-just-like-gravity-is-a-fact-uhoh/
Moreover:
Scientific Evidence That Mind Effects Matter - Random Number Generators - video http://www.metacafe.com/watch/4198007

I once asked an evolutionist, after showing him the preceding experiments, "Since you ultimately believe that the 'god of random chance' produced everything we see around us, what in the world is my mind doing pushing your god around?"
Thus NL, your conjecture of substituting 'intelligence' for randomness (basically your conjecture is not new, and is merely a Theistic Evolution compromise gussied up in different clothing) fails on empirical grounds. bornagain77
As regards the question of front-loading: we have a sense that, for instance, attending to the functioning of every single cell of every living microbe, and to every individual part of them, would manifestly be unthinkable on the part of an omniscient and omnipotent God. But is that really so? Without any logical limitation of powers, physical or mental, applicable to the Christian God, such minutiae would not necessarily constitute an almost infinitely trivial, vapid, mind-numbing distraction at all, would they? It is a hallmark of the atheist's credo that a single human being would be similarly insignificant and inconsequential to God, in the starkest contrast with the Christian tenet that Christ would have accepted his crucifixion for just one single human being. This is not to deny the occurrence, or the possibility of the occurrence, of front-loading in creation, but merely to point out our mistaken, anthropomorphic attribution (more notable among atheists, but surely latent in us all) of a susceptibility to mind-numbing by trivia to an infinite God. Axel
Optimus: …why the death-grip on methodological naturalism? I suggest that its power lies in its exclusionary function. It rules out ID right from the start, before even any discussions about the empirical data are to be had. MN means that ID is persona non grata,
Nightlight (19): MN doesn’t imply anything of the sort (at least as I understand it). As a counter example, consider a chess playing computer program — it is an intelligent process, superior (i.e. more intelligent) in this domain (chess playing) to any human chess player.
A chess playing computer program doesn't understand chess. In fact there is no agent (consciousness) present in the program who can (or cannot) understand chess. As in Searle's Chinese Room, there is merely a simulation of the ability to understand chess. So it is highly debatable whether 'intelligent' is a proper description of this program.
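A full chess engine is far too large to sketch here, but the same point can be illustrated with the kind of search a chess program performs, applied to a toy take-away game: purely mechanical symbol manipulation and scoring, with nothing anywhere that "understands" the game. This is an illustrative sketch in Python, not any particular engine's code.

```python
def moves(n):          # toy "game": take 1-3 tokens from a pile of n; last take wins
    return [m for m in (1, 2, 3) if m <= n]

def apply_move(n, m):
    return n - m

def score(n):          # terminal evaluation from the viewpoint of the player to move
    return -1 if n == 0 else 0   # no tokens left: the opponent took the last one, we lose

def negamax(n, depth):
    """Purely mechanical game-tree search: symbols in, a move out."""
    if depth == 0 or not moves(n):
        return score(n), None
    best_val, best_move = float("-inf"), None
    for m in moves(n):
        val, _ = negamax(apply_move(n, m), depth - 1)
        val = -val                     # opponent's gain is our loss
        if val > best_val:
            best_val, best_move = val, m
    return best_val, best_move

if __name__ == "__main__":
    print(negamax(5, depth=6))   # finds the winning move (take 1, leaving a multiple of 4)
```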
Nightlight (19): How does MN exclude intelligent agency as an explanation for performance of chess playing program? It doesn’t, since functionality of such program is fully explicable using conventional scientific methods. Hence, MN would allow that chess playing program is an intelligent agency (agent).
I agree that MN would allow for that, but I think you will agree with me that they would be wrong to do so. The fact that they would just shows their metaphysical bias.
Nightlight (19): The net result is that such Planckian network would be 10^60 (more cogs) x 10^20 (faster clocks) (…) With that kind of ratio in computing power, anything computed by this Planckian network would be to us indistinguishable from a godlike intelligence beyond our wildest imagination and comprehension.
Indistinguishable mimicking of intelligence and personhood given extensive instructions (software) designed by us (agents).
Nightlight (128): I was only trying to point out the critical faulty cog in the scheme and how to fix it.
An interesting U-turn tactic in order to smuggle in intelligence as a respectable causal explanation under MN? Box
kairosfocus #131: That is, your suggestion that you have successfully given necessary criteria of being scientific, fails. For instance, while it is a desideratum that something in science is reducible to a set of mathematical, explanatory models that have some degree of empirical support as reliable, that is not and cannot be a criterion of being science.

Response 119 already clarifies why that objection is not applicable. You're using a very narrow semantics for the terms "algorithmic" and "algorithm", something from the mainframe and punched-card era (1960s, 1970s). Here is the basic schema:

(M) - Model space (formalism & algorithms)
(E) - Empirical procedures & facts of the "real" world
(O) - Operational rules mapping numbers between (M) and (E)

The model space (M) is a set of algorithms for generating valid statements of that science. The generated statements need not be math or numerics (they do have to be logically coherent). The "statements" can be words, symbols, pictures, graphs, charts, numbers, formulas, etc. It is the operational procedures (O) which assign empirical semantics to those statements produced by (M), in whatever form they were expressed. Without the algorithms of (O), the symbolic output from (M) consists merely of logically coherent formal statements without empirical meaning or content.

The component (E) is a system for obtaining and labeling empirical facts (numbers, symbols, pictures...) relevant for that science. Hence (E) is algorithmic as well, containing instructions on how to interact with the object of that science to extract the data (numbers, pictures, words...), something that could in principle be programmed into some future android (hence algorithmic). Right now, it is programmed into the brains of students in that discipline, just as is done with the statement-generating algorithms from (M).

All of the above is self-evident (even trivial) and is merely a convenient way (for the intended purpose) to partition the conceptual space, one that you and some others here may be unfamiliar with. Physicists, especially theoretical ones, and philosophers of science would certainly recognize that representation (model) of natural science.

The necessary requirement for algorithmic effectiveness (that's my term for it) of the generating rules (cogs) of (M) applies also to the algorithmic elements of (O) and (E). For example, component (E) shouldn't have algorithmically ineffective elements such as: upon arrival at an archeological dig, turn your face to Mecca and go down on your knees for one minute. Injecting algorithmically ineffective elements like that into (E) is also a disqualifying flaw, since it doesn't (help) produce any empirical facts for (E). The same requirement for 'algorithmic effectiveness' applies to the algorithms of (O) as well. For example, the instruction 'Symbol WS from the computer model M corresponds to water spirit' is a disqualifying operational rule.

As in the case of the analogous injection of 'consciousness' into (M), these are parasitic elements belonging to some other agenda foreign to the discipline, seeking to hitch a free ride on the back of the science. The immune system of a scientific network will rightfully reject it, unless you are dealing with thoroughly corrupt disciplines riddled with political/ideological agendas and corporate cronyism (as often seen in climate science, public health, psychiatry, pharmacology, toxicology, environmental science, sociology, women's studies, ethnic studies and other 'xyz' studies, etc.).
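A minimal sketch of the (M)/(E)/(O) partition described above, using a deliberately trivial free-fall example (Python; the functions, constant and "measurement" are hypothetical illustrations, not a standard library or dataset):

```python
def model_space(t):
    """(M): an algorithm generating a formal statement, here a predicted fall distance."""
    g = 9.8  # postulated constant inside the model
    return {"symbol": "d", "value": 0.5 * g * t ** 2}

def operational_rules(statement):
    """(O): rules assigning empirical meaning to the model's symbols."""
    return ("distance fallen in metres, read off a tape measure", statement["value"])

def empirical_procedure():
    """(E): instructions for extracting a fact from the world (here, a stored reading)."""
    return 4.9  # metres measured after 1 second of free fall (illustrative number)

meaning, predicted = operational_rules(model_space(t=1.0))
observed = empirical_procedure()
print(meaning, predicted, observed, abs(predicted - observed) < 0.1)
```

The point of the sketch is only structural: (M) generates a formal statement, (O) gives it empirical meaning, and (E) supplies the fact it is compared against.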
Neo-Darwinian Evolution theory (ND-E = RM + NS as the primary mechanism of evolution) carries a key parasitic element of this type, the attribute "random" in "random mutation" (RM) -- that element is algorithmically ineffective since it doesn't produce any falsifiable statement that can't be produced by replacing "random" with "intelligently guided" (i.e. computed by an anticipatory/goal-driven algorithm). In this case, the parasitic agenda carried by the gratuitous "randomness" attribute is atheism.

That is actually another common misstep by ID proponents -- they needlessly concede that "random" mutation completely explains "micro-evolution". To realize that it doesn't completely explain it, it suffices to consider known examples of intelligently guided evolution, such as the evolution of technologies, sciences, etc. There is micro- and macro-evolution here, too, showing from the outside the same type of patterns that biological micro- and macro-evolution show. In both domains, either degree of evolution is characterized by "mutation", i.e. a change in the construction recipe (DNA, epigenetics, or source code, manufacturing blueprints) which is associated with external/phenotypic changes, the evolution of the product. But there is nothing in any of it that implies or demonstrates that the change in the recipe is "random", i.e. that the mutation must be random. In the case of the evolution of technology, we know that this is not the case; the evolutionary changes are intelligently guided.

Of course, in either domain there could be product defects caused by random errors (e.g. in the blueprint or in manufacturing), which occasionally may improve the product. But that doesn't show that such "random" errors are significant, let alone the primary or the sole mechanism of evolution (micro or macro), as ND-E postulates in order to be able to claim the complete absence of intelligent guidance.

If you think that some natural science doesn't fall into the above triune pattern or fails the 'algorithmic effectiveness' requirement in any component (M), (E) or (O), show me a counterexample, keeping in mind that "algorithm", as I am using it, isn't only about math or numbers. Of course, if you include any of the mentioned parasite-riddled examples, the 'algorithmically ineffective' elements will always be a parasitic agenda hitching a free ride on the backs of the honest scientific work, hence it isn't a counter-example to the requirement (but merely an example of some traits of human nature). nightlight
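The structural point being claimed here, that the selection machinery stays identical whichever mutation operator is plugged in, can be illustrated (not demonstrated) with a Weasel-style toy hill-climber in Python. The target phrase, the "guided" operator and the fitness function are all hypothetical illustrations and carry no biological weight:

```python
import random

TARGET = "METHINKS"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def random_mutation(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def guided_mutation(s):
    # Stand-in for an anticipatory/goal-driven operator: repairs one mismatch.
    mismatches = [i for i, (a, b) in enumerate(zip(s, TARGET)) if a != b]
    if not mismatches:
        return s
    i = random.choice(mismatches)
    return s[:i] + TARGET[i] + s[i + 1:]

def evolve(mutate, generations=2000):
    s = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(generations):
        candidate = mutate(s)
        if fitness(candidate) >= fitness(s):   # the selection step is identical in both runs
            s = candidate
    return s

print(evolve(random_mutation))   # typically reaches the target, slowly
print(evolve(guided_mutation))   # reaches it quickly; only the mutation operator differs
```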
NL, That's very interesting, stimulating stuff. I appreciate the thoughtful responses and the time you've invested here. I don't really see how any of what you are saying is threatening in any way to any other ID position. It seems to me you are offering the same kind of description of the way intelligence computes and generates ID phenomena as Newton offered in his mathematical description of gravity. It is a model for describing the mechanism of intelligent ordering towards design - nothing more, really. It doesn't claim - or even imply - that god doesn't exist or that humans do not have autonomous free will (which, IMO, is a commodity distinct from intelligence anyway). It is the ultimate "front loading" postulate (or "foundation loading"), with the fundamental algorithms (pattern recognition and reaction development) built into the substrate of the universe (if I'm understanding you correctly). It's always been my view that "mind" and "soul" are two different things, and that mind has "computational intelligence", but not free will. IMO, mind is the software, body is the hardware, and soul is the operator. William J Murray
semi related: podcast - "What's at Stake for Science Education" http://intelligentdesign.podomatic.com/entry/2013-03-29T16_49_10-07_00 bornagain77
PS: Let us recall the criteria of being scientific set out by NL, which were first addressed at 112 above:
In any natural science, you need 3 basic elements:

(M) – Model space (formalism & algorithms)
(E) – Empirical procedures & facts of the "real" world
(O) – Operational rules mapping numbers between (M) and (E)

The model space (M) defines an algorithmic model of the problem space via postulates, equations, programs, etc. It's like a scaled-down model of the real world, where you can run the model, observe behaviors, measure numbers, then compare these with numbers from empirical observations obtained via (E) . . . . scientific postulates can make no use of concepts such as 'mind' or 'consciousness' or 'god' or 'feeling' or 'redness' since no one knows how to formalize any of these, how to express them as algorithms that do something useful, even though we may have an intuition that they do something useful in the real world. But if you can't turn them into algorithmic form, natural science has no use for them. Science, seen via the scheme above, is in fact a "program" of sorts, which uses human brains and fingers with pencils as its CPU, compiler and printer. The ID proponents unfortunately don't seem to realize this "little" requirement. Hence, they need to get rid of the "mind" and "consciousness" talk, which are algorithmically vacuous at present, and provide at least a conjecture about what the 'intelligent agency', formulated algorithmically, might be, at least in principle (e.g. as an existential assertion, not an explicit construction).
Notice the first problem: what modelling and algorithms are about. It is simply not the case that scientific work is always like that, though it is often desirable to have mathematical models. The implied methodological naturalism should also be apparent and merits the correction in 112. However, I fail to see where NL has taken the concerns on board and cogently responded. His attempt to suggest an idiosyncratic usage for algorithm, and to specify redness to the experience of being appeared to redly, fails. Indeed [as was highlighted already], there is a lot of work that is not only scientific but legitimately physics, that uses the intelligent responses of participants to empirically investigate and then analyse important phenomena, such as colour, sound, etc. One consequence of this has been the understanding that our sensory apparatus often uses, in effect, roughly log compression, as in the Weber-Fechner law, where increments of noticeable difference are fractional, i.e. dx/x is a constant ratio across the scale of a phenomenon. Thus the appearance and effectiveness of log metrics for things like sound [the dB scale for sound relative to a reference level of 10^-12 W/sq m], and the scaling of the magnitude criteria for stars, which was based on apparent size/brightness and turned out to be in effect logarithmic. kairosfocus
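A quick numeric illustration of the log metrics mentioned above, using the stated 10^-12 W/sq m reference level for the dB scale (Python; the intensity values are round illustrative numbers):

```python
import math

I0 = 1e-12  # reference sound intensity in W/m^2

def decibels(intensity):
    return 10 * math.log10(intensity / I0)

print(decibels(1e-12))  # 0 dB   (threshold of hearing)
print(decibels(1e-6))   # 60 dB  (roughly ordinary conversation)
print(decibels(1e0))    # 120 dB (near the pain threshold)

# Weber-Fechner: equal *ratios* of stimulus map to equal *increments* of the log metric,
# so multiplying the intensity by 10 always adds the same 10 dB step.
for i in (1e-9, 1e-8, 1e-7):
    print(round(decibels(i * 10) - decibels(i), 6))   # 10.0 each time
```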
NL: Pardon, I must again -- cf. 112 and ff above -- observe that in effect you are trying to impose a criterion that fails the test of factual adequacy relative to what has been historically acceptable as science, and what has been acceptable as science across its range. That is, your suggestion that you have successfully given necessary criteria of being scientific, fails. For instance, while it is a desideratum that something in science is reducible to a set of mathematical, explanatory models that have some degree of empirical support as reliable, that is not and cannot be a criterion of being science. For crucial instance, one has to amass sufficient empirical data before any mathematical analysis or model can be developed [observe --> hypothesise], and it will not do to dismiss that first exploration and sketching out of apparent patterns as "not science." Similarly, empirical or logical laws which may be inherently qualitative can be scientific or scientifically relevant. As a simple example of the latter, it is a logical point that that which is coloured is necessarily (not merely observed to be) extensive in space. Likewise, the significance of drawing a distinction {A|NOT_A} and what follows from it in logic is an underlying requisite of science. [Notice, I have here given a genuine necessary criterion, by way of counter-instance to your claimed cluster of necessary criteria.] For the former, let us observe that the taxonomy of living forms is not inherently a mathematical model or framework, but a recognition of memberships on genus/difference, per keys that are creatively developed on empirical investigation, not givens set by a model. Going further, scientific work typically seeks to describe accurately, explain, predict or influence and thus allow for control. An exploratory description is scientific, even if that is not mathematical. In that context, let us note a pattern of pre-mathematical, qualitative observations c. 1970 - 82, by eminent scientists, that lie at the root of key design theory concepts:
WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blueprint can be built-in.)]

ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.]

HOYLE, 1982: Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure of order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true.” [[Evolution from Space (The Omni Lecture[ --> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]
None of these is a mathematical model or an algorithm in the crucial meaning of the term. However, each, in an unquestionably scientific context of investigation, is highlighting a key qualitative observation that allows us to then go on to analyse and develop models. It would be improper and inviting of fallacious selectively hyperskeptical dismissal to suggest that absent the full panoply of mathematicisation, operational definitions, models, fully laid out bodies of empirical data, etc, such is not scientific. Similarly, it is false to suggest that such inferences amount to an alleged improper injection of the supernatural, or the like. These concerns were already outlined. However, we can go on a bit. By 2005, having first used the no free lunch theorems to identify the significance of complex specified information, William Dembski proposed a general quantification:
define phi_S as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [chi] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases phi_S(t) and also by the maximum number of binary search-events in our observed universe 10^120] Chi = – log2[10^120 ·phi_S(T)·P(T|H)].
I continued, in the always linked note:
When 1 >= chi, the probability of the observed event in the target zone, or a similar event, is at least 1/2, so the available search resources of the observed cosmos across its estimated lifespan are in principle adequate for an observed event [E] in the target zone to credibly occur by chance. But if chi significantly exceeds 1 bit [i.e. it is past a threshold that, as shown below, ranges from about 400 bits to 500 bits -- i.e. configuration spaces of order 10^120 to 10^150], that becomes increasingly implausible. The only credibly known and reliably observed cause for events of this last class is intelligently directed contingency, i.e. design. Given the scope of the Abel plausibility bound for our solar system, where the available probabilistic resources are

qWs = 10^43 Planck-time quantum [not chemical -- much, much slower] events per second x 10^17 s since the big bang x 10^57 atom-level particles in the solar system, or

qWs = 10^117 possible atomic-level events [--> and perhaps 10^87 "ionic reaction chemical time" events, of 10^-14 or so s],

. . . that is unsurprising.
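A quick arithmetic check of the quoted solar-system bound (Python):

```python
# Quick check of the quoted solar-system probabilistic resources:
planck_events_per_s = 10 ** 43
seconds_since_big_bang = 10 ** 17
atoms_in_solar_system = 10 ** 57
print(planck_events_per_s * seconds_since_big_bang * atoms_in_solar_system == 10 ** 117)  # True
```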
From this, a couple of years back now, for operational use [and in the context of addressing exchanges on what chi is about] several of us in and around UD have deduced a more practically useful form, by taking a log reduction and giving useful upper bounds at Solar System level:
Chi = – log2[10^120 ·phi_S(T)·P(T|H)]. xx: To simplify and build a more "practical" mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally:
Ip = - log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. Where we may observe from say -- as just one of many examples of a standard result -- Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by I_k = (def) log_2 1/p_k (13.2-1)
xxi: So, since 10^120 ~ 2^398, we may "boil down" the Dembski metric using some algebra -- i.e. substituting and simplifying the three terms in order -- using log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):

Chi = – log2(2^398 * D2 * p), in bits, where D2 = phi_S(T)

Chi = Ip – (398 + K2), where K2 = log2(D2)

That is, chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)] and (b) as we can define and introduce a dummy variable for specificity, S, where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi = Ip*S – 500, in bits beyond a "complex enough" threshold

NB: If S = 0, this locks us at Chi = – 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive.
--> E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will -- unsurprisingly -- be positive. --> Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0, i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest. --> S goes to 1 when we have objective grounds -- to be explained case by case -- to assign that value. --> That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list. --> A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also, a lottery, is a sort of inverse password game where we pay for the privilege; and where the complexity has to be carefully managed to make it winnable. ) --> An obvious example of such a zone T, is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); where we can also see by this case how such a case can enfold random and repetitive sub-strings. --> Arguably -- and of course this is hotly disputed -- DNA protein and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.) --> So also, to overthrow the design inference, a valid counter example would be needed, a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv - xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.)
xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases: using Durston's Fits values -- functionally specific bits -- from his Table 1 [--> in the context of his published analysis of Shannon's metric of average info per symbol, H, applied to various functionality-relevant states of biologically relevant strings] to quantify I, and also accepting functionality on specific sequences as showing specificity, giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond

xxiii: And, this raises the controversial question that biological examples such as DNA -- which in a living cell is much more complex than 500 bits -- may be designed to carry out particular functions in the cell and the wider organism.
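As a sketch, the reduced metric and the Durston figures quoted above can be put together in a few lines of Python (Ip is taken directly as the fits value and S = 1 for the functional sequences, per the text):

```python
def chi_500(ip_bits, s):
    """Log-reduced metric as given in the thread: bits beyond a 500-bit threshold."""
    return ip_bits * s - 500

durston_fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}

for name, fits in durston_fits.items():
    print(name, chi_500(fits, s=1), "bits beyond the threshold")
# RecA 332, SecY 188, Corona S2 785 -- matching the figures quoted above.
```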
In short, there is in fact a framework of quantification and related analysis that undergirds the observation that FSCO/I is characteristically produced by intelligent design, and that the other main source of highly contingent outcomes -- chance processes [perhaps aided by mechanical necessity, which in itself will not lead to highly contingent outcomes] wandering across a configuration space -- does not plausibly have access to the atomic and temporal resources to sample enough of the space that we could reasonably expect to see anything from a special, narrow zone popping up. That is, once we have sufficient complexity in terms of bit depth, a chance-contingency-driven search will predictably pick up gibberish, not meaningful strings based on multi-part complex functional organisation. Where also, given how a nodes-and-arcs based architecture/wiring diagram for a functionally specific system can be reduced to a structured set of strings, analysis on strings is WLOG (without loss of generality).

In this overall context, I must also point out that intelligent designers, human and non-human [I think here of beavers especially, as already noted], are an empirical fact. They can be observed, and their works can be observed. It is reasonable to ask whether such works have distinguishing, characteristic signs that indicate their cause. And so, however imperfect suggestions such as CSI, IC or FSCO/I may be, they are therefore reasonable and scientific responses to a legitimate scientific question. [Correctness cannot be a criterion of being scientific, for many good reasons.] KF kairosfocus
Nightlight @129, it was to make a point about technospeak. The link turned out to be temporary. See here. When a conversation becomes dominated by jargon, especially when one is not speaking among peers, overly technical language serves more to obfuscate than to enlighten. Chance Ratcliff
Chance Ratcliff #118: Also check out my own research paper, That link is broken. What's it about? nightlight
Chance Ratcliff #118: It's not clear to me that what you've provided constitutes a definition of science which can address the demarcation problem

That was not a definition of natural science but an outline of some necessary elements and their properties. Hence, it was a listing of some necessary conditions for natural science, not sufficient conditions. Namely, the point of that scheme was to explain how ID proponents often violate the key necessary conditions for a natural science. Violating the necessary conditions, such as algorithmic effectiveness of postulates, suffices to disqualify a proposal from claims of becoming a science (see post #117 on why that is so). Since they have tripped already on the necessary conditions, there is no need to analyze further whether their proposal is sufficient. As to what science is, what the sufficient conditions are, that's a much bigger subject than a few posts on a blog can fit; one would need to write books for that. I was only trying to point out the critical faulty cog in the scheme and how to fix it.

However a search for the three terms, "model space", "empirical procedures", and "operational rules" turned up nothing.

Searching Google Scholar for them, one at a time, will give you tens of thousands of hits on each from scholarly papers. Pairs of the phrases also have some hits. These are fairly common concepts in philosophy of science, which I learned from books (physics) before Google. The three-part scheme itself is a canonical partition (labels may differ depending on the author) arising often in discussions about the meaning and perplexities of quantum theory. So, no, those are not my private terms or concepts. They are commonplace in the philosophy and methodology of science. The only "originality" is perhaps in my particular choice of using one label from one author, another from a second, and a third from yet another. Hence there may not be someone else using exactly those three phrases together, since each of them can be expressed in many ways. Since all that was accumulated before Google and bookmark collections, I have no specific links as to where I picked up each one along the way; they're most likely from somewhere among the ~15K books behind me in my study. Another effect in play is that English is not my native tongue but only my 5th language, hence some of those may be literal translations from Serbo-Croatian, Russian, Italian or Latin, depending on where I picked up the concept first (most likely from Russian, since a good number of the books behind me are in Russian).

what can be considered useful and applicable to human knowledge, then it must take for granted consciousness, reason, the correspondence of perception to reality, these things which you appear to have placed outside of usefulness.

That's one mixed bag of concepts. One must distinguish what can be an object of science and what can be a postulate or cog. Any of the above can be an object of research. But not all can be cogs, or postulates. For that you need 'algorithmic effectiveness' -- the cog must be churning something out, it must have consequences, it must do something that makes a difference in a given discipline. The "difference" is not whether you or someone feels better about it, more in harmony with the universe, but a difference in some result specific and relevant to the discipline. In that sense, consciousness does nothing for any given discipline (other than as an object of research, such as in neuroscience or cognitive science). It lacks any algorithmic aspect; it doesn't produce anything.
That doesn't mean it is irrelevant for your internal experience. But we are not talking about whether ID can be a personal experience, but whether it can be a natural science. My point is that it certainly can be, provided its proponents (such as S. Meyer) get rid of the algorithmically ineffective baggage and drop the 'consciousness' talk, since it only harms the cause of getting ID accepted as a science. Intelligent agency can be formulated completely algorithmically, hence injecting the non-algorithmic term 'consciousness' is entirely gratuitous. It adds not one new finding to the other findings of ID. Hence dropping it loses no ID finding either.

It's not clear what you mean by "algorithmic model". A search for the term

That was a self-contained term plus an explanation of what exactly is meant by it. Whether you can find it elsewhere in those same exact words and with the same exact meaning I have no idea, but since it is explained right there in the post, the search was unnecessary. In any case it is not an original concept, and the specific words might be a literal translation from another language. I think I explained exactly what it is. What is missing in that explanation? A natural science needs some rules of operation and techniques, a.k.a. algorithms, for how it produces some output that is to be compared with empirical observations and facts. That part was labeled (M) in the scheme and named the 'algorithmic model' of that science. Isn't the necessity of such a component (M) completely self-evident? Why would you need an authority to confirm something so obvious?

For instance, you appeal to "planckian networks" and "planckian nodes" repeatedly. Again, a search for these terms turns up absolutely nothing.

Dropping the "ian" from "Planckian" will give you more. I use the suffix "ian" to distinguish my concept, since it extends what others in physics are playing with (e.g. see Wolfram's NKS). My extension is to assume that these networks have adaptable links (in physics these links would have only on/off levels), so they can operate like neural networks. In effect that combines perspectives and results from two fields of research. Planck-scale models of physics (pregeometry), including Planck-scale network models, spinor networks, pregeometry, etc., are a whole little cottage industry in physics going back 6-7 decades at least. I provided a few links on the subject in posts #19 and #35. Try this link as a very clean, narrow search on Google Scholar.

I am not a theoretical physicist, a theoretical mathematician, or a theoretical computer scientist. Perhaps you are some or all of these things.

Actually, that's a fairly close hit. I was educated as a theoretical physicist, but went to work in industry right after grad school, working mostly as a chief scientist in various companies, doing math and computer science research (e.g. design of new algorithms, or generally tackling any problem everyone else had tried and got stuck on). Here is a recent and very interesting discovery I stumbled upon (by having a lucky combination of fields of expertise which clicked together just the right way on that particular problem). It turns out that two seemingly unrelated problems, each a separate research field of its own, are, after a suitable transformation, one and the same problem: (a) maximizing network throughput (bisection) and (b) maximizing the codeword distance of error correcting codes.
Amazingly, these two are mathematically the same problem, which is quite useful since there are many optimal EC codes, and the paper provides a simple translation recipe that converts those codes into optimal large-scale networks (e.g. useful for large data centers).

What are these Planckian networks, what is their applicability to understanding physical laws, how are they modeled and simulated, what research is being done, ...

The basic concept is known as "digital physics" and it goes back to 1970s MIT (Fredkin, Toffoli). Stephen Wolfram's variant, which he calls "A New Kind of Science" (NKS), is the most ambitious project of this type (here is the NKS forum), a vision of translating and migrating all of natural science into the computational/algorithmic framework. Since he is a physicist, a major part of it is recreating physics out of some underlying networks at the Planck scale. Another major source on this subject and these ideas goes under the name "complexity science", mostly coming from the Santa Fe Institute. The whole field goes back to the early 1980s, with cellular automata research and dynamical systems (chaos), then the late 1980s through 1990s, when neural network research took off, then the 1990s "complexity science", which sought to integrate all the various branches under the one roof of "complex systems". I think those who have studied works from the above sources would recognize what I am writing as a bit of a rehash of those ideas.

Here again you hint at a kind of Gnostic synthesis, in which you claim to know that our knowledge of physical laws can be superseded by some other notion, apparently not amenable to investigation,

Au contraire, this perspective (which is closest to Wolfram's NKS project) is a call for investigation, not a rejection of it. The strong sense of the imperfection of present knowledge is the result of realizing all the possibilities opened up by the NKS-style approach. For example, the fact that the major equations/laws of physics (the Maxwell, Schrodinger and Dirac equations, the relativistic Lorentz transforms) can be obtained as coarse-grained approximations of the much finer behaviors of the patterns unfolding on these types of underlying computational elements is a huge hint of what is really going on at the Planck scale.

What "natural computations" are being lobotomized, and what demonstrable synthesis shows the inadequacy of our conception of physical laws, as they relate to the phenomena they describe?

The "lobotomized" refers to how we seek to reduce biological phenomena to the laws of physics. If you keep in mind the above picture of underlying computations, with the activation patterns on these networks being what is really going on (the "true laws"), then our laws of physics (such as the Maxwell, Schrodinger or Dirac equations) are merely some properties of those patterns, extracted under very contrived constraints that allow only one small aspect of those patterns to stand out. The biological phenomena, which are different manifestations and properties of the unconstrained patterns, are suppressed when extracting the "physical laws". That's what I called the "lobotomized" true laws. Another analogy: take a Beethoven symphony, tune into and extract just two notes, say C and D, filtering out all other notes, and then imagine that the pattern seen in that subset (the C, D sequence) is the fundamental law onto which the rest of the symphony is reducible. nightlight
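For readers unfamiliar with the "digital physics"/NKS idea referenced above, a minimal sketch of an elementary cellular automaton (Rule 110, which is famously capable of universal computation) shows the general flavour of rich behaviour emerging from a simple local update rule on a lattice. This is only the textbook toy model, not nightlight's adaptable Planckian networks:

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton: each cell's next state
    depends only on its local 3-cell neighbourhood (wrap-around boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        neighbourhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighbourhood) & 1)
    return out

cells = [0] * 31 + [1] + [0] * 31      # single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```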
What you say about Godel's thoughts may well be so, but judging from a photo on the first page of photos in that book about him and Einstein, A World Without Time, his mother is not impressed. I think it might be the funniest photograph I have ever seen. She's got her arm around his shoulder and is looking at him, as much as to say to the camera, 'Look at him. Just look at him... What am I to do with this boy? I can barely trust him to do up his shoe-laces...' And just to set it off to perfection, uncharacteristically, perhaps, he's looking very happy and pleased with himself. Axel
NL, to continue on towards a more 'complete' picture of reality, I would like to point out that General Relativity is the 'odd man out' in a 'complete' picture of reality, yet Special Relativity is 'in' the picture:
Quantum electrodynamics Excerpt: Quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. http://en.wikipedia.org/wiki/Quantum_electrodynamics Quantum Mechanics vs. General Relativity Excerpt: The Gravity of the Situation The inability to reconcile general relativity with quantum mechanics didn’t just occur to physicists. It was actually after many other successful theories had already been developed that gravity was recognized as the elusive force. The first attempt at unifying relativity and quantum mechanics took place when special relativity was merged with electromagnetism. This created the theory of quantum electrodynamics, or QED. It is an example of what has come to be known as relativistic quantum field theory, or just quantum field theory. QED is considered by most physicists to be the most precise theory of natural phenomena ever developed. In the 1960s and ’70s, the success of QED prompted other physicists to try an analogous approach to unifying the weak, the strong, and the gravitational forces. Out of these discoveries came another set of theories that merged the strong and weak forces called quantum chromodynamics, or QCD, and quantum electroweak theory, or simply the electroweak theory, which you’ve already been introduced to. If you examine the forces and particles that have been combined in the theories we just covered, you’ll notice that the obvious force missing is that of gravity.,,, http://www.infoplease.com/cig/theories-universe/quantum-mechanics-vs-general-relativity.html
Yet, by all rights, General Relativity should be able to somehow be unified within Quantum theory:
LIVING IN A QUANTUM WORLD – Vlatko Vedral – 2011 Excerpt: Thus, the fact that quantum mechanics applies on all scales forces us to confront the theory’s deepest mysteries. We cannot simply write them off as mere details that matter only on the very smallest scales. For instance, space and time are two of the most fundamental classical concepts, but according to quantum mechanics they are secondary. The entanglements are primary. They interconnect quantum systems without reference to space and time. If there were a dividing line between the quantum and the classical worlds, we could use the space and time of the classical world to provide a framework for describing quantum processes. But without such a dividing line—and, indeed, without a truly classical world—we lose this framework. We must explain space and time (4D space-time) as somehow emerging from fundamentally spaceless and timeless physics. http://phy.ntnu.edu.tw/~chchang/Notes10b/0611038.pdf
A very interesting difference between Special Relativity and General Relativity is that the two produce completely opposite curvatures in space-time as those curvatures relate to us. Please note the 3:22 minute mark of the following video, where the 3-dimensional world 'folds and collapses' into a tunnel shape around the direction of travel as a 'hypothetical' observer accelerates towards the 'higher dimension' of the speed of light. (Of note: the following video was made by two Australian university physics professors with a supercomputer.)
Approaching The Speed Of Light – Optical Effects – video http://www.metacafe.com/watch/5733303/
And please note the exact opposite effect for 'falling' into a black hole, i.e. the 3-dimensional world folds and collapses into a higher dimension as a 'hypothetical' observer falls towards the event horizon of a black hole, as described in General Relativity:
Space-Time of a Black hole http://www.youtube.com/watch?v=f0VOn9r4dq8
And remember, time comes to a stop (becomes 'eternal') both at the event horizon of black holes and at the speed of light:
time, as we understand it temporally, would come to a complete stop at the speed of light. To grasp the whole 'time coming to a complete stop at the speed of light' concept a little more easily, imagine moving away from the face of a clock at the speed of light. Would not the hands on the clock stay stationary as you moved away from the face of the clock at the speed of light? Moving away from the face of a clock at the speed of light happens to be the same 'thought experiment' that gave Einstein his breakthrough insight into e=mc2. Albert Einstein - Special Relativity - Insight Into Eternity - 'thought experiment' video http://www.metacafe.com/w/6545941/
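To put a rough number on the 'time coming to a complete stop at the speed of light' point, here is a minimal sketch (Python, purely illustrative, using only the textbook Lorentz factor from Special Relativity; the sample speeds are assumptions chosen for the illustration):

```python
# Minimal sketch: the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2) grows without
# bound as v approaches c, so a clock moving at (nearly) light speed appears
# (nearly) frozen to a stationary observer. The speeds below are illustrative.
import math

def gamma(beta):
    """Lorentz time-dilation factor for speed v = beta * c (0 <= beta < 1)."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

for beta in (0.5, 0.9, 0.99, 0.999, 0.999999):
    print("v = %.6f c  ->  one moving-clock second stretches to %10.1f observer seconds"
          % (beta, gamma(beta)))
# In the limit beta -> 1 the factor diverges, i.e. the moving clock's hands 'stay stationary'.
```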
The implications of having two completely different eternities within reality, one eternity that is very destructive at black holes (in fact black-holes are the greatest source of entropy in the universe) and one eternity that is very ordered at the speed of light, are, at least to those of us who are of a Christian 'eternity minded' persuasion, very sobering to put it mildly. Verse and Music:
Hosea 13:14 I will ransom them from the power of the grave; I will redeem them from death: O death, I will be thy plagues; O grave, I will be thy destruction:,, Dolly Parton - He's Alive - 1989 CMA - music video http://www.youtube.com/watch?v=UbRPWUHM80M
bornagain77
nightlight, back at 100, in response to my question as to what brought all the 'conscious' matter-energy into being at the big bang (i.e. did matter-energy think itself into existence before it existed?), you stated:
There is always the ‘last opened’ Russian doll, no matter how many you opened. As to what is inside that one, it may take some tricky twists and tinkering before it splits open and the next ‘last opened’ shows itself. Hence, you are always at the ‘last opened’ one.
Well, that sure sounds like the most 'unscientific' cop-out I have ever seen, since we are in fact talking about the instantaneous origination of everything that you 'just so happen' to have attributed inherent consciousness to. To leave the whole thing unaddressed because you can't address it with your preferred philosophy is an even worse violation of integrity than the blatant bias you have displayed against all the quantum evidence that goes against your preferred position!
What Properties Must the Cause of the Universe Have? - William Lane Craig - video http://www.youtube.com/watch?v=1SZWInkDIVI “Certainly there was something that set it all off,,, I can’t think of a better theory of the origin of the universe to match Genesis” Robert Wilson – Nobel laureate – co-discoverer of the Cosmic Background Radiation
Moreover, it is important to note that Einstein, who I believe leaned towards Spinoza's panpsychism, made the self-admitted 'greatest blunder' of his scientific career in not listening to what his very own equation was telling him about reality, due in some part (large part?) to this 'incomplete' (the last doll will remain unopened) philosophy that you seem to be so enamored with. In fact it was a theist, a Belgian priest no less, who first brought the full implications of General Relativity to Einstein's attention: Albert Einstein (1879-1955), when he was shown that his general relativity equation indicated a universe that was unstable and would 'draw together' under its own gravity, added a cosmological constant to his equation to reflect a stable universe rather than dare entertain the thought that the universe had a beginning.
Einstein and The Belgian Priest, George Lemaitre - The "Father" Of The Big Bang Theory - video http://www.metacafe.com/watch/4279662
Moreover, this is not the only place where Einstein has been shown to be wrong. The following article speaks of a proof developed by legendary mathematician/logician Kurt Gödel, from a thought experiment, in which Gödel showed General Relativity could not be a complete description of the universe:
THE GOD OF THE MATHEMATICIANS - DAVID P. GOLDMAN - August 2010 Excerpt: Gödel's personal God is under no obligation to behave in a predictable orderly fashion, and Gödel produced what may be the most damaging critique of general relativity. In a Festschrift, (a book honoring Einstein), for Einstein's seventieth birthday in 1949, Gödel demonstrated the possibility of a special case in which, as Palle Yourgrau described the result, "the large-scale geometry of the world is so warped that there exist space-time curves that bend back on themselves so far that they close; that is, they return to their starting point." This means that "a highly accelerated spaceship journey along such a closed path, or world line, could only be described as time travel." In fact, "Gödel worked out the length and time for the journey, as well as the exact speed and fuel requirements." Gödel, of course, did not actually believe in time travel, but he understood his paper to undermine the Einsteinian worldview from within. http://www.firstthings.com/article/2010/07/the-god-of-the-mathematicians Physicists continue work to abolish time as fourth dimension of space - April 2012 Excerpt: "Our research confirms Gödel's vision: time is not a physical dimension of space through which one could travel into the past or future." http://phys.org/news/2012-04-physicists-abolish-fourth-dimension-space.html
Moreover NL, contrary to the narrative you would prefer to believe in, it is quantum theory that has been steadily advancing on Einstein's 'incomplete' vision of reality for the last 50 years; it is certainly not Quantum Mechanics that has been in retreat from Einstein's 'incomplete' view of reality. In the following video Alain Aspect speaks of the famous Bohr-Einstein debates and of the steady retreat that Einstein's initial position has suffered:
Quantum Entanglement – The Failure Of Local Realism - Materialism - Alain Aspect - video http://www.metacafe.com/w/4744145
As an interesting sidelight to this, Einstein hated the loss of determinism that quantum mechanics brought to physics, as illustrated by his famous 'God does not play dice' quote to Niels Bohr. Yet quantum mechanics actually restored the free will of man to its rightful place in a Theistic view of reality. First by this method:
Why Quantum Physics (Uncertainty) Ends the Free Will Debate - Michio Kaku - video http://www.youtube.com/watch?v=lFLR5vNKiSw
And now, more recently, by this method:
Quantum physics mimics spooky action into the past - April 23, 2012 Excerpt: The authors experimentally realized a "Gedankenexperiment" called "delayed-choice entanglement swapping", formulated by Asher Peres in the year 2000. Two pairs of entangled photons are produced, and one photon from each pair is sent to a party called Victor. Of the two remaining photons, one photon is sent to the party Alice and one is sent to the party Bob. Victor can now choose between two kinds of measurements. If he decides to measure his two photons in a way such that they are forced to be in an entangled state, then also Alice's and Bob's photon pair becomes entangled. If Victor chooses to measure his particles individually, Alice's and Bob's photon pair ends up in a separable state. Modern quantum optics technology allowed the team to delay Victor's choice and measurement with respect to the measurements which Alice and Bob perform on their photons. "We found that whether Alice's and Bob's photons are entangled and show quantum correlations or are separable and show classical correlations can be decided after they have been measured", explains Xiao-song Ma, lead author of the study. According to the famous words of Albert Einstein, the effects of quantum entanglement appear as "spooky action at a distance". The recent experiment has gone one remarkable step further. "Within a naïve classical world view, quantum mechanics can even mimic an influence of future actions on past events", says Anton Zeilinger. http://phys.org/news/2012-04-quantum-physics-mimics-spooky-action.html
In other words, if my conscious choices really are merely the result of whatever state the material particles in my brain happened to be in in the past (deterministic), how in blue blazes are my choices instantaneously affecting the state of material particles in the past? Moreover NL, it is simply preposterous for you, given the key places where you refuse to look at the evidence (i.e. especially the big bang!), to play all this evidence off as 'mind stuff' or as 'quantum magic'! It is called 'cherry picking' and 'confirmation bias' to do as you are doing with the evidence! Also of recent related note on Einstein from Zeilinger:
Of Einstein and entanglement: Quantum erasure deconstructs wave-particle duality - January 29, 2013 Excerpt: While previous quantum eraser experiments made the erasure choice before or (in delayed-choice experiments) after the interference – thereby allowing communications between erasure and interference in the two systems, respectively – scientists in Prof. Anton Zeilinger's group at the Austrian Academy of Sciences and the University of Vienna recently reported a quantum eraser experiment in which they prevented this communications possibility by enforcing Einstein locality. They accomplished this using hybrid path-polarization entangled photon pairs distributed over an optical fiber link of 55 meters in one experiment and over a free-space link of 144 kilometers in another. Choosing the polarization measurement for one photon decided whether its entangled partner followed a definite path as a particle, or whether this path information was erased and wave-like interference appeared. They concluded that since the two entangled systems are causally disconnected in terms of the erasure choice, wave-particle duality is an irreducible feature of quantum systems with no naïve realistic explanation. The world view that a photon always behaves either definitely as a wave or definitely as a particle would require faster-than-light communication, and should therefore be abandoned as a description of quantum behavior. http://phys.org/news/2013-01-einstein-entanglement-quantum-erasure-deconstructs.html
bornagain77
NL: AmHD:
al·go·rithm n. A step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps. red n. 1. a. The hue of the long-wave end of the visible spectrum, evoked in the human observer by radiant energy with wavelengths of approximately 630 to 750 nanometers; any of a group of colors that may vary in lightness and saturation and whose hue resembles that of blood; one of the additive or light primaries; one of the psychological primary hues. b. A pigment or dye having a red hue. c. Something that has a red hue.
KF kairosfocus
Chance Ratcliff
This is not a context which is amenable to meaningful communication, as far as I can tell. Perhaps this just isn’t the proper venue.
Indeed perhaps not! Box
Mung @121, priceless. :D Chance Ratcliff
EA:
BTW, Mung, I hope this helps answer your nagging and heart-felt question about macroevolution. Perhaps there is a reason Nick has gone so silent? There isn’t anything special about macroevolution — it is just microevolution writ large.
So if I buy enough books on micro-evolutionary theory it'll all eventually add up to a book on Macro-Evolutionary Theory? So I don't suppose that micro-evolution is caused by biochemical changes. Tour can't be right, can he? Mung
From the OP:
In my judgment, a mind incapable of making the requisite distinctions hardly deserves to be taken seriously.
What a strange choice of words. Mung
kairosfocus #112: You have so prioritised mathematical formalisms and algorithms that much of both historic and current praxis is excluded. Not at all. We are using different semantics. You are simply taking 'algorithmic' much too narrowly, understanding it only as either a mathematical formula or a program listing in a computer science or engineering class, i.e. something that runs on a digital computer. The semantics I use would classify a 'cake baking recipe' as an algorithm. Of course, this algorithm doesn't run on a silicon processor as a program, but instead uses the human brain as its CPU and compiler. Essentially, if you can imagine an android robot doing something, that something is algorithmic in my semantics. In that sense, even historical sciences, such as archeology, include an 'algorithmic' component (M), as well as (E) and (O). As in the 'cake baking' algorithm, it is the brains of the archeology students which are programmed to operate as a CPU and compilers for executing the algorithms (M) of archeology as a natural science. This is no different than a physics student being taught how to use formulas, how to transform expressions, take care of units, etc. His brain is being transformed to operate as a CPU and compiler for reading physics papers and books, replicating or checking claimed results, extending the calculations and producing new results, etc. While on the surface it sounds more 'algorithmic' than instructions on digging out and dusting off the old bones, they are still both algorithmic activities. Basically, all sciences are, in the above sense of 'algorithm', programs which run on flesh computers, on wetware, our brains and our bodies (the latter is more so in archeology or biology than in theoretical physics). Note, though, that the algorithmic component (M) of natural science and its stated properties are necessary but not sufficient to have a natural science. I was not defining sufficient conditions, since the point of that post was to explain how some of the ID proponents (such as e.g. S. Meyer) are violating the vital necessary condition for natural science -- the requirement for the algorithmic effectiveness of the postulates. The necessity of such a requirement and the consequences of violating it were explained in the previous reply. nightlight
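A minimal sketch of this broad sense of 'algorithm': a recipe is just a finite, ordered list of effective steps, and any available interpreter (a cook's brain, a robot, or the toy function below) can execute it. The steps here are invented purely for illustration.

```python
# Toy illustration of an 'algorithm' in the broad sense used above: a finite,
# ordered list of steps executed by whatever 'CPU' happens to be available.
RECIPE = [
    "preheat oven to 180 C",
    "mix flour, sugar and eggs into a batter",
    "pour the batter into a greased pan",
    "bake for 30 minutes",
    "cool, slice and serve",
]

def execute(recipe):
    """Run the 'program' step by step; here the CPU is just this print loop."""
    for number, step in enumerate(recipe, start=1):
        print("step %d: %s" % (number, step))

execute(RECIPE)
```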
nightlight, thanks for your detailed reply. It's not clear to me that what you've provided constitutes a definition of science which can address the demarcation problem. Perhaps your goal is combining model space, empirical procedures, and operational rules in such a way as to provide apodicticity. You have provided some descriptions which in and of themselves have obvious applications to scientific studies and methods. This is all well and good. However a search for the three terms, "model space", "empirical procedures", and "operational rules" turned up nothing. That does not necessarily invalidate what you propose, but this formulation appears to be your private interpretation and demarcation of what can be considered useful to science and the expansion of human knowledge. You're making some pretty sweeping claims about what is and is not amenable to investigation, all within the context of your private interpretation of what constitutes genuine science, using private definitions and formulations that don't appear to have been subject to criticism by peers capable of evaluating your claims. If you're proposing a paradigm of reasoning that's supposed to account for what can be considered useful and applicable to human knowledge, then it must take for granted consciousness, reason, the correspondence of perception to reality, these things which you appear to have placed outside of usefulness.
"The model space (M) defines an algorithmic model of the problem space via postulates, equations, programs, etc. It’s like scaled down model of the real world, where you can run the model, observe behaviors, measure numbers, then compare these with numbers from empirical observations obtained via (E). A stylized image of this relation is a wind tunnel where you test the wing or propeller shapes to optimize their designs."
It's not clear what you mean by "algorithmic model". A search for the term yielded results ranging from the definition, "A method of estimating software cost using mathematical algorithms based on the parameters which are considered to be the major cost drivers", to a book on "algorithmic and finite model theory" which describes itself as "Intended for researchers and graduate students in theoretical computer science and mathematical logic." Wikipedia defines "Finite Model Theory" as "a subarea of model theory (MT). MT is the branch of mathematical logic which deals with the relation between a formal language (syntax) and its interpretations (semantics)." Whether this can be taken as a context for evaluating the usefulness of knowledge acquisition, namely a scientific demarcation, is at best a theoretical exercise and at worst an obfuscation of plain and obvious facts, such as our direct experience with cause and effect relationships. By any definition of algorithmic model theory that I was able to glean from searching, it's a theoretical subject of investigation, not a framework that science fits into. It's quite possible that we will share no context for meaningful discussion because I lack the training and language context for the ideas that you propose. It's also possible that this shotgun blast of terminology that you have peppered the comments with is a way of dismissing our first-hand experience of cause-and-effect relationships and burying a discussion of design inferences in a cloud of obfuscatory language. But I'm trying to give you the benefit of the doubt. With regard to the questions you answered, thanks for being as direct as possible. I can only take your simple answers seriously, however. For instance, you appeal to "planckian networks" and "planckian nodes" repeatedly. Again, a search for these terms turns up absolutely nothing. You keep inserting this private and context-dependent terminology into the discussion as if it's supposed to provide the necessary clarity for understanding your point of view. I assure you this is not the case. I am a person of above-average intelligence, but I am not a theoretical physicist, a theoretical mathematician, or a theoretical computer scientist. Perhaps you are some or all of these things. Perhaps you're privy to a level of knowledge that the rest of us can scarcely imagine. Or maybe you're simply being intentionally elliptical. It's genuinely hard to tell. My first impression of your response was that you autogenerated parts of it. See MarkovLang. Also check out my own research paper, which is practically indistinguishable from technobabble. Now let me address your simple answers to my questions.
Q1) Do you agree that chance and physical necessity are insufficient to account for designed objects such as airplanes, buildings, and computers? “Physical necessity” and “chance” are vague. If you have in mind physical laws (with their chance & necessity partitions) as known presently, then they are insufficient.
I'm perfectly happy with your terminology here, physical laws with chance and necessity partitions; and of course they are insufficient. Your answer is the obvious answer, because our repeated and uniform experience with physical reality makes clear that there is a category of objects which are not amenable to explanation by physical processes acting over time. You wrote,
"But in my picture, Planckian networks compute far more sophisticated laws that are not of reductionist, separable by kind, but rather cover physics, chemistry, biology… in one go. Our physical laws are in this scheme coarse grained approximations of some aspects of these real laws, chemical and biological laws similarly are coarse grained approximations of some other aspects of the real laws, i.e. our present natural science chops the real laws like a caveman trying to disassemble a computer for packing it, then trying to put it back together. It’s not going to work. Hence, the answer is that the real laws computed by the Planckian networks via their technologies (physical systems, biochemical networks, organisms), are sufficient. That’s in fact how those ‘artifacts’ were created — as their technological extensions by these underlying networks."
What are these Planckian networks, what is their applicability to understanding physical laws, how are they modeled and simulated, what research is being done, what are the results, how has our understanding of physical laws been broadened by understanding them, how do they account for all levels of causation, from necessity to the intentional activity of intelligence toward a goal or purpose? Sufficiency must be demonstrated, not presumed. I agree that our understanding of physical laws is incomplete, but presuming to have a higher, more complete explanatory framework for all of physical reality imposes a burden of proof that falls upon you. I'm highly suspicious when I hear claims that our understanding of that which can be determined through our perception, experience, use of language, powers of reason, and procedural experimentation can be superseded by some form of secret knowledge. What you're describing does not make sense of reality; it's at best an unconventional reduction that sounds like some sort of techno-new-ageism. I don't think anyone here is against you having unconventional views on science and philosophy, but your declarations of having some sort of Gnostic synthesis will not do, not if you're unable to make it plain enough.
Q2) Do you agree that intelligent agency is a causal phenomenon which can produce objects that are qualitatively distinguishable from the products of chance and necessity, such as those resulting from geological processes? Assuming ‘intelligent agency’ to be the above Planckian networks (which are conscious, super-intelligent system), then per (Q1) answer, that is the creator of real laws (which combine our physical, chemical, biological… laws as some their aspects). Since this agency operates only through these real laws (they are its computation, which is all it does), its actions are the actions of the real laws, hence there is nothing to be distinguished here, it’s one and the same thing.
I'm assuming intelligent agency to be the purposeful activity of human intelligent agents, or rather that which moves it. I do not accept your imposition of Planckian networks as an explanation for intelligence, unless you can demonstrate their use, and their necessary emergence from, or preeminence to, raw material behaviors. It's not even clear what you mean when you say that agency only operates through these real laws, and that the actions of agents are the actions of these real laws. We don't have a law of agency, or consciousness, and so certainly not knowledge of how "real laws" act to form real choices by real intelligent beings like ourselves. What we do have is first-hand, repeated, and uniform experience of intelligent agents acting in measurable ways, beginning with ourselves -- acting in ways to produce artifacts that are unaccountable to explanation by any known physical laws.
Q3) Do you think there are properties that designed objects can have in common which set them apart from objects explicable by chance and physical necessity? Again this comes back to Q1. If the “physical necessity + chance” refer to those that follow from our physical laws, then yes, designed objects are easily distinguishable. But when we contrive and constrain some setup that way, to comply with what we understand as physical laws, we are lobotomizing the natural computations in order to comply with one specific pattern (one that matches particular limited concept of physical law). Hence, in my picture Q3 is asking whether suitably lobotomized ‘natural laws’ (the real computed laws) will produce distinguishably inferior output to non-lobotomized system (the full natural computations) — obviously yes. It’s always easy to make something underperform by tying its both arms and legs so it fits some arbitrarily prescribed round shape.
Thanks for the straightforward answer that designed objects are easily distinguishable, but I cannot accept your qualification of that point. Here again you hint at a kind of Gnostic synthesis, in which you claim to know that our knowledge of physical laws can be superseded by some other notion, apparently not amenable to investigation, something which has to be believed to be seen. What "natural computations" are being lobotomized, and what demonstrable synthesis shows the inadequacy of our conception of physical laws, as they relate to the phenomena they describe? Again, thanks for a detailed reply. I'd be interested to know if anyone can make sense of it. The purpose of the questions I asked was to establish a common context for reasoning about design inferences, but it appears that you're proposing a radical, unorthodox, and so far unintelligible view of components of reality that apparently are intended to supersede any current conception of physical law. This is not a context which is amenable to meaningful communication, as far as I can tell. Perhaps this just isn't the proper venue. Best, Chance Chance Ratcliff
kairosfocus #114: On redness. Your dismissive objection obviously is rooted in the exchange with BD. Actually, as was pointed out [there was someone in my dept doing this sort of work as research], this is eminently definable empirically. Sorry, I may not have been clear enough about what "redness" meant, although it was in the same short list with "consciousness", "mind", "feeling". Hence the "_redness_" I was talking about is the one that answers "what is it like to see red color" - the _redness_ as qualia (cf. 'hard problem of consciousness'). There is nothing in science that explains how this _redness_ comes out of neural activity associated with seeing red color, no matter how much detail one has about that neural activity. It is not merely an explanatory gap between the "two" in the present science, but there aren't even the "two" to have a gap in between, there is just "one" (neural activity), since there is no counterpart for _redness_ in natural science of any sort. It is that what-is-it-like _redness_, along with all the rest of qualia and mind stuff, that is outside of present natural science. Injecting this "_redness_" as a cog within present natural science is therefore vacuous since nothing follows from it -- it is algorithmically ineffective, a NOP (do nothing) operation. But if you do insist on injecting such algorithmically ineffective cogs, as Stephen Meyer keeps doing with 'consciousness', then whatever it is you're offering is going to trigger a strong immune response from the existent natural sciences, which do follow the rule of 'no algorithmically ineffective cogs'. This negative response is of the same nature as strong reactions to someone trying to sneak to the front of a long line waiting to buy tickets -- it is a rejection of a cheater. In science, injecting such gratuitous, algorithmically ineffective cogs is cheating, since such a cog would be given space within a scientific discipline without providing anything in return (in the form of relevant consequences) to the scientific discipline which gave it the space -- it's like a renter refusing to pay the rent, a cheater. Hence, like in regular life, if a cog is to get a residence within a science, it had better be able to give something back to the science, some consequence that matters in that science. The basic rule is thus -- only algorithmically effective elements can be added as cogs to a natural science. Of course, that doesn't preclude _redness_ or _consciousness_ (as mind stuff concepts) from being the research objects of a natural science (such as in neuroscience or cognitive science). In this case they would be researched, seeking, for example, to discover what it is that is so special (structurally and functionally) about the kinds of systems which could report such phenomena. What is precluded, though, is injecting such elements as the cogs or givens or postulates. This is like requiring that physicians working in a hospital must have a medical degree. That requirement doesn't imply that patients must have a medical degree, too. The patients could be illiterate, dumb, incompetent... and still be allowed into the hospital to be treated. They just can't start treating and performing surgeries on other patients. And that is a good thing. nightlight
William J Murray #113: The issue I have with NL's argument, really, is teleology. Computing must be teleological to account for the presence of complex, functional machines. To solve any complex, specified target, an algorithm must have a goal. I don't see how computing targets and algorithmic goals can exist anywhere except in some form of consciousness, nor can I see how there is a "bottom up" pathway to such machinery regardless of what label one puts on that which is driving the materials and processes. One view of Neural Networks (NN) is as pattern recognizers, such as those used in handwriting and speech recognition systems, which can be quite effective at such tasks, operating well with noisy, distorted and damaged/partial patterns. While the learning can be made a lot quicker and more efficient using supervised learning (where an external 'all knowing oracle' provides feedback), unsupervised learning (no oracle, just local dynamical laws of interaction between nodes & link modifications) can perform as well, given more time, nodes and links. Hence we can ignore below how the learning algorithms were implemented (supervised or unsupervised) and simply consider a purely dynamical system (laws/rules of local interactions & link modifications) which can do anything being discussed. In the pattern recognition perspective, one can reverse engineer and find out how the networks learn pattern recognition. A simple visual depiction of the mechanism is a stretched out elastic horizontal surface with many strings attached from above and below pulling each section of the surface up or down. The network link adaptations correspond to shortening or lengthening of the strings. Hence, given a sufficient number of strings (i.e. the number of the NN's nodes & links), NN's are capable of reshaping the surface (fitness landscape) to an arbitrary form. The input patterns can be seen as balls being dropped at random places, then rolling on the surface and settling down in the nearest valley. If you allow the surface to vibrate (random noise in the NN's operation), then the ball will naturally find deeper valleys, which corresponds to more global forms of optimization. On first glance, none of the above appears to have much in common with goal directed or anticipatory behaviors. In fact, it's all you need. Namely, consider a sequence of video frames of a soccer player kicking a ball and the ball flying into the goal or missing it. Stacking the frames in temporal order gives you a 3D pattern, with the height of the pattern being proportional to the duration of the video. But this pattern, just like any other pattern, is as learnable as any in the character recognition tasks. Since pattern recognizing networks can learn how to recognize damaged, distorted and noisy patterns, in the case of this 3D pattern they can learn how to recognize/identify the lower part of the 3D pattern stack (later frames, ball entering or missing the goal) after seeing only the higher parts of the stack (earlier frames, the kick). But this capability to bring up/reconstruct the "later events/states" from the "earlier events/states" is precisely what is normally labeled as anticipation or look ahead or prediction. Once you realize the above possibility, the emergence of goal directed NN behavior is self-evident. For example, imagine this NN has the task to control the kick, i.e. it is rewarded when the goal is scored, punished on a miss. It learns from a series of 3D stacks of patterns that have frame sequences for goal hits and misses (e.g.
these stacks could be captured by the NN's cams from previous tries). With its partial/damaged pattern recognition capabilities, looking at a series of 3D stacks as one 4D pattern, the NN can learn how to aim the kick to get the ball into the goal, i.e. such network can learn how to control a robotic soccer player. A problem of pole balancing on a moving cart (usually with simulated setups, but some also using real physical carts, motors and poles) is a common toy model for researching this type of NN capabilities (this particular problem is in fact a whole little cottage industry in this field). In short, adaptable networks have no problem with learning (in supervised or unsupervised fashion) anticipatory and goal directed behaviors since such behaviors are merely a special case of the pattern recognition in which the time axis is one dimension of the patterns. Hence adaptable networks can learn to behave as goal directed anticipatory systems. In case of adaptable networks with unsupervised learning (assumes only local dynamics for punishment/reward evaluations by nodes & link modifications), one can say that through the interaction with environment, the networks can spontaneously transform themselves into goal directed anticipatory systems, with the goal being maximizing of the net score (rewards - punishments). The general algorithmic pattern of these goal directed behaviors can be understood via internal modeling by the networks -- the networks build an internal model of the environment and of themselves (ego actor), then play this model forward in model time and space, trying out from the available actions of the ego actor (such as direction and strength of the kick), evaluating the responses of the model environment (such as ball flight path relative to the goal), then picking out the best action of ego actor as the one to execute in the real world (the real kick of the robotic soccer player). In simpler cases, it is also possible to 'reverse engineer' such internal models of the networks by observing them in action and then identifying 'neural correlates' of the components of the model and of their rules of operation (the laws of the model space, i.e. of their internal virtual reality). Hence, the internal modeling by the networks is not merely a contrived explanatory abstraction, but it is an actual algorithmic pattern used by the networks. Of course, none of the above addresses the question, 'what is it like to be' such anticipatory, goal directed system, i.e. what about the 'mind stuff' aspect of that whole process? Where does that come from? The post #109 has a sketch of how the 'mind stuff' aspect can be modeled as well (model based on panpsychism). In that quite economical model of consciousness, the answer is -- 'it is like' exactly as the description sounds like i.e. it is what one could imagine going through their mind while performing such anticipatory, goal directed tasks. Regarding the "bottom up pathway" -- the unsupervised networks require only the rules of operation for nodes (evaluation algorithm of rewards & punishments from incoming signals) and for link changes (how are links from a node changed after the evaluations by the node). These are all simple, local, dynamical rules, no more expensive or burdensome in terms of assumptions than postulates of physics which specify how the fields and particles interact. 
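As a minimal sketch of the claim above that anticipation is just pattern recognition with time as one more pattern dimension, here is a toy perceptron-style unit (not nightlight's actual model; the 'goal'/'miss' prototype patterns, noise level and learning rate are invented for illustration) that learns to predict a kick's outcome from noisy 'early frame' features:

```python
# A single perceptron-style unit learns, from noisy "early frame" feature
# vectors, to anticipate a binary outcome ("goal" vs "miss"). This is the
# supervised ("oracle") variant mentioned above; all numbers are illustrative.
import random

random.seed(0)

N = 8                                              # length of an "early frames" feature vector
PROTO_GOAL = [1.0] * (N // 2) + [0.0] * (N // 2)   # prototype leading to a goal
PROTO_MISS = [0.0] * (N // 2) + [1.0] * (N // 2)   # prototype leading to a miss

def noisy(proto, noise=0.3):
    """Return a noisy/damaged copy of a prototype pattern."""
    return [x + random.uniform(-noise, noise) for x in proto]

def predict(w, b, x):
    """Perceptron output: 1 anticipates 'goal', 0 anticipates 'miss'."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

w = [0.0] * N
b = 0.0
for _ in range(200):                               # training episodes
    proto, label = random.choice([(PROTO_GOAL, 1), (PROTO_MISS, 0)])
    x = noisy(proto)
    err = label - predict(w, b, x)                 # +1, 0, or -1
    w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
    b += 0.1 * err

# Test: anticipate the outcome from unseen noisy "early frames".
hits = sum(predict(w, b, noisy(PROTO_GOAL)) == 1 for _ in range(100))
misses = sum(predict(w, b, noisy(PROTO_MISS)) == 0 for _ in range(100))
print("correct 'goal' anticipations:", hits, "/ 100")
print("correct 'miss' anticipations:", misses, "/ 100")
```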
For example, a trading style network can be constructed in which some signals are labeled as 'goods and services', others as 'money' or 'payments', others as 'bills' or 'fees' and nodes as 'trading agents'... etc. Then one would define rules of trading, how costs & prices are set, how the trading partners are picked (which defines the links e.g. these could be random initially), how the links (trade volumes) are changed based on gains and losses at the node, etc., like a little virtual economy with each agent operating via simple rules. All that was said previously about optimization, goal oriented anticipatory behaviors, internal modeling by adaptable networks arises spontaneously within simple unsupervised (e.g. trading) networks sketched above. Given enough nodes and links, such network can optimize for arbitrarily complex fitness landscape (which need not be static, and may include other networks as part of network's 'environment' affecting the shape of the fitness landscape). The computational power of such network depends on how complex evaluation algorithms can nodes execute and on how many nodes and links are available. Since one can trade off between these two aspects, a large enough network can achieve arbitrarily high level of computational power (intelligence) with the simplest nodes possible (such as those with just two states; cf. "it from bit") and simple unsupervised learning/trading rules. In other words, the intelligence is additive with this type of systems. One can thus start with the dumbest and simplest possible initial nodes & links. Provided these 'dumb' elements can replicate (e.g. via simple cloning and random connections into the existent network), arbitrarily high level of intelligence can be achieved merely by letting the system run and replicate by the simple rules of the 'dumb' bottom level elements. No other input or external intelligence is needed, beyond what is needed to have such 'dumb' elements exist at all. Note that in any science, you need some set of givens, the postulates taken as is, which can't tell you what is the go of them. If you dream of a science with no initial givens, you already have it, it is saying absolutely nothing about anything at all, just basking in its pure, perfect nothingness and completeness. As explained in posts 19 and 35, if you start with ground level networks of this type at Planckian scale, there is enough room for networks which are computationally 10^80 times more powerful than any technology (including brains) constructed from our current "elementary" particles as cogs. To us, the output of such immensely powerful intelligence would be indistinguishable from godlike creations. nightlight
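In the same spirit, a minimal sketch of the 'simple local rules' idea in the trading-network paragraph above (not nightlight's actual construction; the agents, payoff table and learning rate are invented for illustration): each node reinforces links to partners that paid off and weakens the rest, and the network's global structure reorganizes with no central oversight.

```python
# Toy 'trading' network: each agent strengthens links (trade volumes) toward
# partners whose trades yielded a net gain. The hidden partner 'quality' values
# and the update rate are assumptions made only for this illustration.
import random

random.seed(1)

AGENTS = ["A", "B", "C", "D"]
QUALITY = {"A": 0.1, "B": 0.4, "C": 0.7, "D": 0.9}   # assumed expected gain per trade

# Start with equal link strengths between every ordered pair of distinct agents.
links = {(i, j): 1.0 for i in AGENTS for j in AGENTS if i != j}

def pick_partner(agent):
    """Choose a partner with probability proportional to current link strength."""
    partners = [j for j in AGENTS if j != agent]
    weights = [links[(agent, j)] for j in partners]
    return random.choices(partners, weights=weights)[0]

for _ in range(5000):
    for agent in AGENTS:
        partner = pick_partner(agent)
        gain = QUALITY[partner] - random.random()     # noisy payoff of this trade
        links[(agent, partner)] = max(0.05, links[(agent, partner)] + 0.05 * gain)

# After many purely local updates, the strong links point toward the better partners.
for agent in AGENTS:
    best = max((j for j in AGENTS if j != agent), key=lambda j: links[(agent, j)])
    print(agent, "now trades mostly with", best)
```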
William J Murray (and kf), thanks for bringing clarity to his flaw(s). I 'sensed' something was not connecting between his logic and his stated worldview but could not put my finger on it exactly. bornagain77
PS: On redness. Your dismissive objection obviously is rooted in the exchange with BD. Actually, as was pointed out [there was someone in my dept doing this sort of work as research], this is eminently definable empirically, as has been done for colour theory, foundational to display technology, printing, etc. It turns out to be strongly based in reflection and/or emission of light in a band from a bit over 600 nm to a bit under 800 nm, depending on the individual [there was someone else in the Dept who was colour blind], lighting conditions, and the like. Redness is an objective, measurable characteristic of objects, and being appeared to redly is something that can be empirically investigated once we are willing to allow that people do perceive, can evaluate and are often willing to accurately report what they experience. This extends to other areas of interest to science and technology, such as smell, sound/hearing, pain, etc. It turns out that our sensory systems use something pretty close to log compression of signals, linked to the Weber-Fechner law of fractional change sensitivity, dx/x; the link to logs is obvious. For light I think it is 10 decades of sensitivity. Indeed, it was discovered that as a result the classical magnitude scale of stars, 1 - 6 [cf. here], was essentially logarithmic. 0 to 120 dB with sound is 12 decades. This is as close as the multimedia PC you are using, and as important as the techniques used in bandwidth compression that allow us to cram so many signals into limited bandwidth. kairosfocus
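For concreteness on the log-compression point, a minimal sketch (Python; the intensity ratios and the 10% step below are illustrative assumptions, not measurements):

```python
# Decibels and "decades" are just logarithms of intensity ratios, which is why
# log compression fits the 0-120 dB (12 decade) figures mentioned above.
import math

def decades(intensity_ratio):
    """Number of factor-of-10 steps ('decades') spanned by an intensity ratio."""
    return math.log10(intensity_ratio)

def decibels(intensity_ratio):
    """Sound intensity level relative to the reference, in dB."""
    return 10.0 * math.log10(intensity_ratio)

print(decades(1e12), "decades =", decibels(1e12), "dB")

# Weber-Fechner: perceived change tracks the fractional change dx/x, so equal
# ratios (not equal differences) register as equal steps.
for x in (1.0, 10.0, 100.0):
    dx = 0.1 * x                      # a 10% change "feels" the same at any level
    print("stimulus", x, "-> comparable perceptual step", dx)
```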
BA77: Panpsychism in and of itself is actually fairly close to what I believe about the physical world - that mind is omnipresent, animating it. The issue I have with NL's argument, really, is teleology. Computing must be teleological to account for the presence of complex, functional machines. To solve any complex, specified target, an algorithm must have a goal. I don't see how computing targets and algorithmic goals can exist anywhere except in some form of consciousness, nor can I see how there is a "bottom up" pathway to such machinery regardless of what label one puts on that which is driving the materials and processes. Whether one calls it computing mind or matter governed by chance and necessity, whatever it is must be able to imagine a goal that does not yet exist in reality and coherently compute solutions to the acquisition of that target. Without that, whether the computing is done by bottom-up mind or matter, the process is just flailing about blindly, which is not good enough to get to the goal. IMO, the ability to imagine a goal requires a duality (X as current state and not-X as desired state) and some form of consciousness that can perceive and will a course towards a solution to not-X. I don't find NL's "bottom-up mind" explanation adequate to the task of sufficient explanation, but panpsychism by itself is - IMO - well inside the tent of ID. Also, if NL finds "consciousness" too "unscientific" an entity to use in any scientific explanation because it gives the foes of ID too much fodder for their protestations, I submit that the foes of ID don't care how careful or specified our arguments are when they are willing to burn language and logic down to avoid the conclusion. William J Murray
NL: Let's cut to the chase scene, on definitions of science [insofar as such is possible -- no detailed one size fits all and only sci def'n is possible and generally accepted], that are not ideologically loaded. Ideologically loaded? Yes, loaded with scientism, and metaphysically question-begging. Here is Lewontin:
. . . to put a correct view of the universe [--> Notice, worldview level issue] into people's heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations [--> Notice the assumed materialist worldview and strawman-laced contempt for anything beyond that], and to accept a social and intellectual apparatus, Science, as the only begetter of truth [--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting] . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated . . . [["Billions and billions of demons," NYRB, Jan 1997. To cut off the usual atmosphere-poisoning talking point on quote-mining, cf. the wider cite and notes here on, and the other four cites that fill out the point, showing how pervasive the problem is.]
This is simply unacceptable, materialist ideology dressing itself up in the how dare you challenge US, holy lab coat. And, as the onward link shows, it is not just a personal idiosyncrasy of Lewontin; this is at the pivot of a cultural civil war not only in science but across the board. In response, let me clip, first, two useful dictionary summaries -- dictionaries, at best, seek to summarise good usage -- from before the present controversies muddied the waters:
science: a branch of knowledge conducted on objective principles involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [Concise Oxford, 1990 -- and yes, they used the "z" Virginia!] scientific method: principles and procedures for the systematic pursuit of knowledge [”the body of truth, information and principles acquired by mankind”] involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [Webster's 7th Collegiate, 1965 . . . used to be my Mom's]
Notice, no ideological loading, no ideological agendas and no question-begging. Next, from my IOSE appendix on methods I can put up a framework that allows us to explore what science should be like:
science, at its best, is the unfettered — but ethically and intellectually responsible — progressive, observational evidence-led pursuit of the truth about our world (i.e. an accurate and reliable description and explanation of it), based on:
a: collecting, recording, indexing, collating and reporting accurate, reliable (and where feasible, repeatable) empirical -- real-world, on the ground -- observations and measurements, b: inference to best current -- thus, always provisional -- abductive explanation of the observed facts, c: thus producing hypotheses, laws, theories and models, using logical-mathematical analysis, intuition and creative, rational imagination [[including Einstein's favourite gedankenexperiment, i.e thought experiments], d: continual empirical testing through further experiments, observations and measurement; and, e: uncensored but mutually respectful discussion on the merits of fact, alternative assumptions and logic among the informed. (And, especially in wide-ranging areas that cut across traditional dividing lines between fields of study, or on controversial subjects, "the informed" is not to be confused with the eminent members of the guild of scholars and their publicists or popularisers who dominate a particular field at any given time.)
As a result, science enables us to ever more effectively (albeit provisionally) describe, explain, understand, predict and influence or control objects, phenomena and processes in our world.
Now, NL, observe your own attempt:
In any natural science, you need 3 basic elements: (M) – Model space (formalism & algorithms) (E) – Empirical procedures & facts of the “real” world (O) – Operational rules mapping numbers between (M) and (E) The model space (M) defines an algorithmic model of the problem space via postulates, equations, programs, etc. It’s like scaled down model of the real world, where you can run the model, observe behaviors, measure numbers, then compare these with numbers from empirical observations obtained via (E) . . . . scientific postulates can make no use of concepts such as ‘mind’ or ‘consciousness’ or ‘god’ or ‘feeling’ or ‘redness’ since no one knows how to formalize any of these, how to express them as algorithms that do something useful, even though we may have intuition that they do something useful in real world. But if you can’t turn them into algorithmic form, natural science has no use for them. Science seen via the scheme above, is in fact a “program” of sorts, which uses human brains and fingers with pencils as its CPU, compiler and printer. The ID proponents unfortunately don’t seem to realize this “little” requirement. Hence, they need to get rid of “mind” and “consciousness” talk, which are algorithmically vacuous at present, and provide at least a conjecture about the ‘intelligent agency’ which can be formulated algorithmically might be, at least in principle (e.g. as existential assertion, not explicit construction).
1 --> You have so prioritised mathematical formalisms and algorithms that much of both historic and current praxis is excluded. Scope fails at outset: factually inadequate at marking a demarcation line between commonly accepted science and not_science. Unsurprising as after decades it is broadly seen that the conventionally labelled sciences cannot be given a precising definition that includes all and only sciences and excludes all not_science. 2 --> The first basic problem with the worldview lock out you attempt is that by excluding mind, you have locked out the scientists themselves. The first fact we have -- whatever its ontological nature -- is that we are intelligent, conscious, reasoning, choosing, minded people. It is through that, that we access all else. 3 --> This then leads you to exclude an empirical fact and to distort what design thinkers and theorists do. It is an observable fact, that intelligent designers -- human and animal [think: Beavers and their dams adapted to stream circumstances] -- exist and create designed objects, processes etc. Thus, such are causes that observably act in the world. It is then scientifically reasonable to inquire whether there are reliably observable markers that indicate design -- the process of specifying and creating chosen configurations to achieve a desired function -- as cause. 4 --> As the very words of your own post demonstrate, functionally specific complex organisation of parts and associated information, FSCO/I . . . the operationally relevant form of complex, specified information as discussed since Orgel and Wicken in the 1970's . . . is a serious candidate sign. [Cf. 101 here on, noting the significance of Chi_500 = I*S - 500, bits beyond the solar system threshold; in light of its underlying context.] One, that is observable, quantifiable, subject to reduction to model form, and testable. Where, on billions of cases, without exception, FSCO/I is demonstrably a reliable marker of design as causal process. Which, as the linked will show, is specifically applicable to cell based life, especially the highly informational and functionally specific string structure in D/RNA and proteins. 5 --> You will notice that no ontological inferences have been made, and that the NCSE etc caricature on inferring to God and/or the supernatural is false, willfully false and misleading. Indeed, from being "immemorial" in the days of Plato, to the title of Monod's 1970 book, we can see that causal explanations -- common in science [e.g. for how a fire works or how a valley is eroded by flowing water, or how light below a frequency threshold fails to trigger photoemission of electrons in a metal surface etc] -- are based, aspect by aspect, on mechanical necessity, and/or chance and/or the ART-ificial. Cf here at UD for how this aspect by aspect causal investigation is reduced to an "algorithmic" procedure -- a flowchart -- by design thinkers. 6 --> That flowchart is essentially the context of the eqn above. 7 --> Similarly, it is sufficient that, per experience and observation, intelligent agents exist and indeed that this is foundational to the possibility of science and engineering. 8 --> So, we have empirical warrant that such intelligent designs are possible and that they show certain commonly seen characteristics as FSCO/I and irreducible complexity whereby a cluster of core components properly arranged and fitted together, are needed for a function. 9 --> This last brings out a significant note. 
A nodes and arcs structure can be used to lay out the "wiring diagram" [I cite Wicken] of a functionally specific object or process, and this can then be reduced to an ordered set of strings. [AutoCAD etc do this all the time.] So, description and discussion of strings . . . *-*-* . . . is WLOG. And also, we have a reduction of organisation to associated implicit information. This also allows testing of the islands of function effect through injection of noise and the tolerance for such. 10 --> This leads to the next point, the von Neumann Kinematic self replicator [vNSR] which is a key feature of cells, cf. 101 here. A representational description is used with a constructor facility to replicate an entity. This has considerable implications on design of the world of cell based life as the living cell is an encapsulated, gated metabolic automaton with a vNSR. That implies that a causal account of OOL has to account, in light of empirical warrant, for:
Now, following von Neumann generally (and as previously noted), such a machine uses . . .
(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, for a "clanking replicator" as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility; (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with (iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling: (iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by (v) either:
(1) a pre-existing reservoir of required parts and energy sources, or (2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.
Also, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [[Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]
11 --> The only empirically warranted, needle in haystack plausible explanation is design. This also extends to the 10 - 100 mn bit increments in genetic information required to account for major body plans. +++++++++ In short, you have evidently begged a few questions and set up then knocked over some straw men. I suggest some rethinking. KF kairosfocus
nightlight you claim,,
These networks can create such information not because they are conscious but because they can compute anything that is computable (provided they are large enough)
So I take it you hold that the brain is 'computing' novel functional information, and computers don't because the computers aren't large enough yet. Yet if you hold that our brains are merely 'computing' new functional information then it seems you have a problem with the second law,,,
Alan’s brain tells his mind, “Don’t you blow it.” Listen up! (Even though it’s inchoate.) “My claim’s neat and clean. I’m a Turing Machine!” … ‘Tis somewhat curious how he could know it.
Are Humans merely Turing Machines? Alan Turing extended Gödel's incompleteness theorem to material computers, as is illustrated in the following video:
Alan Turing & Kurt Godel – Incompleteness Theorem and Human Intuition – video http://www.metacafe.com/w/8516356
And it is now found that,,,
Human brain has more switches than all computers on Earth – November 2010 Excerpt: They found that the brain’s complexity is beyond anything they’d imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study: …One synapse, by itself, is more like a microprocessor–with both memory-storage and information-processing elements–than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth. http://news.cnet.com/8301-27083_3-20023112-247.html
Yet supercomputers with many switches have a huge problem dissipating heat,,,
Supercomputer architecture Excerpt: Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers.[4][5][6] The large amount of heat generated by a system may also have other effects, such as reducing the lifetime of other system components.[7] There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures. http://en.wikipedia.org/wiki/Supercomputer_architecture
But the brain, though having as many switches as all the computers on earth, does not have such a problem dissipating heat,,,
Does Thinking Really Hard Burn More Calories? – By Ferris Jabr – July 2012 Excerpt: So a typical adult human brain runs on around (a remarkably constant) 12 watts—a fifth of the power required by a standard 60 watt lightbulb (no matter what type of thinking or physical activity is involved). Compared with most other organs, the brain is greedy; pitted against man-made electronics, it is astoundingly efficient. http://www.scientificamerican.com/article.cfm?id=thinking-hard-calories
Moreover, one source of the heat generated by computers is the erasure of information from the computer during logical operations,,,
Landauer’s principle Of Note: “any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of an (specific) amount (at least kT ln 2) of heat.,,, Landauer’s Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008). http://en.wikipedia.org/wiki/Landauer%27s_principle
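To put a rough number on the principle (assuming only room temperature, T of about 300 K, and nothing about any particular machine), the minimum heat per erased bit works out to a few zeptojoules:

import math

# Landauer bound: minimum heat released per erased bit, k*T*ln(2),
# at an assumed room temperature of 300 K.
k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # assumed temperature in kelvin
per_bit = k_B * T * math.log(2)
print(per_bit)                # ~2.9e-21 J per erased bit
print(per_bit * 1e15)         # erasing 10^15 bits/s at this limit: ~3 microwatts

Real, irreversible hardware dissipates many orders of magnitude more heat per operation than this theoretical floor, which is why heat scales so badly with switch count.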
And for any computer that has anything close to as many switches as the brain, this source of heat becomes prohibitive:
Quantum physics behind computer temperature Excerpt: It was the physicist Rolf Landauer who first worked out in 1961 that when data is deleted it is inevitable that energy will be released in the form of heat. This principle implies that when a certain number of arithmetical operations per second have been exceeded, the computer will produce so much heat that the heat is impossible to dissipate.,,, ,, the team believes that the critical threshold where Landauer’s erasure heat becomes important may be reached within the next 10 to 20 years. http://cordis.europa.eu/search/index.cfm?fuseaction=news.document&N_RCN=33479
Thus the brain is either operating on reversible-computation principles that no computer can come close to emulating (Charles Bennett), or, as is much more likely, the brain is not erasing information from its memory as material computers are required to do, because our memories are stored on a 'spiritual' level rather than on a material level,,, Research backs up this conclusion,,,
A Reply to Shermer Medical Evidence for NDEs (Near Death Experiences) – Pim van Lommel Excerpt: For decades, extensive research has been done to localize memories (information) inside the brain, so far without success.,,,,So we need a functioning brain to receive our consciousness into our waking consciousness. And as soon as the function of brain has been lost, like in clinical death or in brain death, with iso-electricity on the EEG, memories and consciousness do still exist, but the reception ability is lost. People can experience their consciousness outside their body, with the possibility of perception out and above their body, with identity, and with heightened awareness, attention, well-structured thought processes, memories and emotions. And they also can experience their consciousness in a dimension where past, present and future exist at the same moment, without time and space, and can be experienced as soon as attention has been directed to it (life review and preview), and even sometimes they come in contact with the “fields of consciousness” of deceased relatives. And later they can experience their conscious return into their body. http://www.nderf.org/NDERF/Research/vonlommel_skeptic_response.htm
To add more support to this view that ‘memory/information’ is not stored in the brain but on a higher 'spiritual' level, one of the most common features of extremely deep near death experiences is the ‘life review’ where every minute detail of a person’s life is reviewed:
Near Death Experience – The Tunnel, The Light, The Life Review – video http://www.metacafe.com/watch/4200200/
Thus the evidence strongly supports the common sense conclusion that humans are not Turing Machines! Note:
The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009
To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non-trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis.
http://www.mdpi.com/1422-0067/10/1/247/pdf

Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) - Abel
1) Mathematical Logic
2) Algorithmic Optimization
3) Cybernetic Programming
4) Computational Halting
5) Integrated Circuits
6) Organization (e.g. homeostatic optimization far from equilibrium)
7) Material Symbol Systems (e.g. genetics)
8) Any Goal Oriented bona fide system
9) Language
10) Formal function of any kind
11) Utilitarian work
Quote:
"Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. " Norbert Wiener created the modern field of control and communication systems, utilizing concepts like negative feedback. His seminal 1948 book Cybernetics both defined and named the new field.
bornagain77
William J Murray,,
I don’t really see how panpsychism is at odds with ID.
Well, I don't know all the nuances of panpsychism, but I do know that, as with the Darwinists, he has no empirical evidence for his claim. Particularly for this:
These networks can create such information not because they are conscious but because they can compute anything that is computable (provided they are large enough). Since functional information is computable, networks can generate it, provided such product is considered “reward” during the network’s learning phase.
i.e. Alan Turing & Kurt Godel - Incompleteness Theorem and Human Intuition - video (notes in video description)
http://www.metacafe.com/watch/8516356/

Here is what Gregory Chaitin, a world-famous mathematician and computer scientist, said about the limits of the computer program he was trying to develop to prove evolution was mathematically feasible:

At last, a Darwinist mathematician tells the truth about evolution - VJT - November 2011
Excerpt: In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.”
https://uncommondesc.wpengine.com/intelligent-design/at-last-a-darwinist-mathematician-tells-the-truth-about-evolution/

Here is the video where, at the 30:00 minute mark, you can hear the preceding quote from Chaitin's own mouth in full context:
Life as Evolving Software, Greg Chaitin at PPGC UFRGS
http://www.youtube.com/watch?v=RlYS_GiAnK8

Related quote from Chaitin:
The Limits Of Reason - Gregory Chaitin - 2006
Excerpt: an infinite number of true mathematical theorems exist that cannot be proved from any finite system of axioms.,,,
http://www.umcs.maine.edu/~chaitin/sciamer3.pdf

Information. What is it? - Robert Marks - lecture video (With special reference to ev, AVIDA, and WEASEL)
http://www.youtube.com/watch?v=d7seCcS_gPk

From David Tyler: How do computer simulations of evolution relate to the real world? - October 2011
Excerpt: These programs ONLY work the way they want because as they admit, it only works because it has pre-designed goals and fitness functions which were breathed into the program by intelligent designers. The only thing truly going on is the misuse and abuse of intelligence itself.
https://uncommondesc.wpengine.com/darwinism/from-david-tyler-how-do-computer-simulations-of-evolution-relate-to-the-real-world/comment-page-1/#comment-401493

Conservation of Information in Computer Search (COI) - William A. Dembski - Robert J. Marks II - Dec. 2009
Excerpt: COI (Conservation Of Information) puts to rest the inflated claims for the information generating power of evolutionary simulations such as Avida and ev.
http://evoinfo.org/publications/bernoullis-principle-of-insufficient-reason/

bornagain77
CR @104: can you share your definition of science here? While trying to make sense of your post at #101, I was unable to find any search results for "algorithmically effective postulates", "algorithmically effective form", or "algorithmically effective elements".

You probably won't find them, since these are my terms. First, you need the general schematic of natural science sketched in post #49. In any natural science, you need 3 basic elements:

(M) - Model space (formalism & algorithms)
(E) - Empirical procedures & facts of the "real" world
(O) - Operational rules mapping numbers between (M) and (E)

The model space (M) defines an algorithmic model of the problem space via postulates, equations, programs, etc. It's like a scaled-down model of the real world, where you can run the model, observe behaviors, measure numbers, then compare these with numbers from empirical observations obtained via (E). A stylized image of this relation is a wind tunnel where you test wing or propeller shapes to optimize their designs.

The "algorithmically effective postulates" are then the core cogs of (M), which define how the computations in (M) are done, e.g. via Maxwell's equations for EM fields, or Newton's laws for mechanics. The "algorithmically effective" attribute of the postulates means that they have to provide something that does algorithmic, useful work in the space (M). This is analogous to teaching programmers to add only code that does something useful, not code which does nothing useful, such as x=x.

For example, scientific postulates can make no use of concepts such as 'mind' or 'consciousness' or 'god' or 'feeling' or 'redness', since no one knows how to formalize any of these, how to express them as algorithms that do something useful, even though we may have an intuition that they do something useful in the real world. But if you can't turn them into algorithmic form, natural science has no use for them. Science, seen via the scheme above, is in fact a "program" of sorts, which uses human brains and fingers with pencils as its CPU, compiler and printer.

The ID proponents unfortunately don't seem to realize this "little" requirement. Hence, they need to get rid of the "mind" and "consciousness" talk, which is algorithmically vacuous at present, and provide at least a conjecture about what an 'intelligent agency' that can be formulated algorithmically might be, at least in principle (e.g. as an existential assertion, not an explicit construction). Since cellular biochemical networks are the real intelligent agency (potent distributed computers) anyway, they don't even need to strain much to find it. In fact James Shapiro, among others, is already saying nearly as much (although his understanding of adaptable networks could use a crash course).

It may be, of course, that some do know what is needed, but deliberately insist on injecting those algorithmically vacuous elements for their own reasons (religious, ideological, etc.). That's when the immune system of the scientific 'priesthood' kicks in and the hostility toward ID flares up. This immune system won't tolerate hostile antigens being injected into its social organism. Hence, even in this case, it is still more useful to drop the non-algorithmic baggage, which is guaranteed to trigger a strong immune response, and replace it with an algorithmically effective work-alike (such as biochemical networks). Delayed gratification trumps instant gratification in the long run.
Also, a few questions would help the rest of us gauge where you stand on design as an objective phenomenon: Now we're entering the realm of far out speculations. Before stepping out on that limb, since my pigeons are not shaped the right way for those pigeonholes, few preliminary explanations and definitions are needed. I'll take the "agency" to be something like Kantian "Ding an sich" included above to satisfy the common need for reaching an appearance of causal closure, however contrived such catch-all mailbox may be. Keeping in mind that any imagery is a limited tool capturing at best only some aspects of meaning, for this aspect, disliking finalities as a matter of principle, I prefer an image of Russian dolls, with the last unopened doll labeled by convention the 'agency'. In that perspective, there is always a chance that what appeared as a solid core doll, actually has a little thin line around the waste, unnoticed in all previous inspections, but which with some ingenuity and if twisted just the right way, might split open revealing an even smaller and more solid new "core" doll. Hence, the 'agency' here is not an entity in the heavens or above or the largest one, but exactly the opposite in all these attributes -- under and inside all other entities, the smallest one of them all. Each outer doll is thus shaped around the immediate inner doll, in its image, as it were. The ontology, including consciousness, sharpness of qualia and realness of reality, thus emanate and diffuse from the inside out, from the more solid, more real, smaller, quicker... to the more ephemeral, more dreamlike, larger, more sluggish... These layers being shaped in each other's image, there is common functional and structural pattern which is inherited and propagated between the layers, unfolding and shaping itself from inside out. Structurally, this pattern is a network with adaptable links, while functionally the patterns operate as distributed, self-programming computers (such as brains, which are networks of neurons, or as neural networks, which are mathematical models of such distributed computers). At the innermost layer, the elemental nodes (or agents) of the networks have only 2 states (Wheeler's "it from bit" concept), labeled as +1 (reward) and -1 (punishment). Ontologically, these two abstract node states correspond to the two elemental mind-stuff states, or qualia values, +1 = (happiness, joy, pleasure,...) and -1 = (unhappiness, misery, pain,...). For convenience they are labeled here in terms of familiar human experiences which are some of the counterparts (depending on context or location within our networks) at our level of these two elemental mind-stuff states as they emanate and get inherited from the inside out. Algorithmically, these networks function as self-programming distributed computers, operating as optimizers which seek to maximize the net (abstract) "reward" - "punishment" scores, the latter being sums of the +1 and -1 node values. As explained in post #107, the common algorithmic pattern used by the networks for this optimization is internal modeling of their environment and of self (ego actor), then playing this model forward in model time against different actions of the ego actor, as a what-if game, measuring and tallying resulting punishments & rewards (as encoded in the model's knowledge/patterns of the environment and self), and finally picking the "best" action of the ego actor to perform in the real world. 
By virtue of the ontological association between 'mind stuff' elements and computations by the nodes, the above computational process is experienced by the network as "thinking through" this what-if game. Hence, the computations by networks and conscious thinking are inseparable in this perspective. Since the above optimization relies on internal modeling and a what-if game between sub-networks, optimization at the level above, of the total (rewards - punishments), would seek to harmonize the actions of the subnetworks so that they are maximally predictable to each other (a similar idea to Leibniz's monads). Hence mutual predictability is one of the subgoals of the optimization process. That's the reason why the laws of nature seem surprisingly understandable and knowable -- they are optimized to be that way, they 'love' to be known and understood by other agencies. This harmonization process proceeds from smaller scales to larger scales via construction of ever larger computational technologies, just as we do it in our technological society (from PCs and corporate networks to massive Data Centers and the internet). The key technologies computed this way are the physical layer (particles, fields, interactions, physical constants & laws), the chemical layer, cellular biochemical networks, the layer of organisms, the layer of societies of organisms, and ecosystems (multiple societies of organisms).

Note that physical particles, their laws and all higher-level objects and their laws are properties of the activation patterns unfolding on the Planckian networks, analogous to gliders and other patterns unfolding on the grid of Conway's Game of Life. Hence, the Planckian networks are not computing physical, chemical, biological,... laws separately. Such separation is an artifact of us selecting some aspects of those patterns and extracting regularities related to those isolated aspects. Hence, the real laws (activation patterns) as computed by the networks are not separate or reducible to some subset of laws (such as biological to physical), since there are no such subsets -- the pattern is computed in a single go, with all its aspects rolling at once, all in the one and the same set of "flickers" of the network's cells. Therefore, one can view chemical, biological,... laws as small, subtle and purposeful adjustments or refinements of the physical laws, when those systems are operating in more complex settings, such as a complex molecule or a cell. Recall also that physical laws are already fundamentally statistical (via quantum theory). Hence, these subtle adjustments for chemistry or biology patterns do not violate those statistical laws of physics -- what to our present physical law is "random" is actually non-random.

A good analogy for this kind of relation between laws would be the "laws" of traffic flows, in which the cars are the "elementary" objects of the theory. These "objects" obey some statistical laws and regularities of traffic flows. As far as such laws of traffic flows can tell, individual cars are moving "randomly" and only their statistics has regularities. In fact, the individual cars are not moving randomly but each is guided by its driver for purposes which are much too subtle to be captured by the crude traffic-flow laws. Yet such intelligent guidance of each car from inside, for some higher subtle purposes, does not violate the laws of traffic flows, since these are only statistical laws.
In the same way, our physical and chemical laws are much too crude to capture the subtle intelligent guidance of the particles used for the biological and higher laws. Yet, as in the traffic laws example, such intelligent guidance for higher purposes does not violate physical and chemical laws. With the above in mind, let's check the questions: Q1) Do you agree that chance and physical necessity are insufficient to account for designed objects such as airplanes, buildings, and computers? "Physical necessity" and "chance" are vague. If you have in mind physical laws (with their chance & necessity partitions) as known presently, then they are insufficient. But in my picture, Planckian networks compute far more sophisticated laws that are not of reductionist, separable by kind, but rather cover physics, chemistry, biology... in one go. Our physical laws are in this scheme coarse grained approximations of some aspects of these real laws, chemical and biological laws similarly are coarse grained approximations of some other aspects of the real laws, i.e. our present natural science chops the real laws like a caveman trying to disassemble a computer for packing it, then trying to put it back together. It's not going to work. Hence, the answer is that the real laws computed by the Planckian networks via their technologies (physical systems, biochemical networks, organisms), are sufficient. That's in fact how those 'artifacts' were created -- as their technological extensions by these underlying networks. Q2) Do you agree that intelligent agency is a causal phenomenon which can produce objects that are qualitatively distinguishable from the products of chance and necessity, such as those resulting from geological processes? Assuming 'intelligent agency' to be the above Planckian networks (which are conscious, super-intelligent system), then per (Q1) answer, that is the creator of real laws (which combine our physical, chemical, biological... laws as some their aspects). Since this agency operates only through these real laws (they are its computation, which is all it does), its actions are the actions of the real laws, hence there is nothing to be distinguished here, it's one and the same thing. Of course, per (Q1), the true laws computed here are not the sum of our physical, chemical and biological laws. The latter are only one dimensional crude shadows of the former, each of our present laws capturing only some aspects of regularity of the continuous computational process which keeps the universe going from moment to moment, and for each "elementary" particle, each atom, each molecule, each cell,... The real laws are not reducible to its components, hence they are not reductionist. You cannot stop some aspects of computation (biological) and ask what will other aspects (physical) do then. They are all same "flickers" of the same Planckian nodes, you stop one kind of pattern you stop them all. Q3) Do you think there are properties that designed objects can have in common which set them apart from objects explicable by chance and physical necessity? Again this comes back to Q1. If the "physical necessity + chance" refer to those that follow from our physical laws, then yes, designed objects are easily distinguishable. But when we contrive and constrain some setup that way, to comply with what we understand as physical laws, we are lobotomizing the natural computations in order to comply with one specific pattern (one that matches particular limited concept of physical law). 
Hence, in my picture Q3 is asking whether suitably lobotomized 'natural laws' (the real computed laws) will produce distinguishably inferior output to non-lobotomized system (the full natural computations) -- obviously yes. It's always easy to make something underperform by tying its both arms and legs so it fits some arbitrarily prescribed round shape. As you can see, due to unconventional shape of my pigeons, they just don't fit into your pigeonholes and answers won't tell you anything non-trivial. The real answers on substance are in the introductory description. nightlight
What exactly are you asking i.e. what is “purely natural/material process” ? If it is what I call that, then the answer is trivial – cells and humans are ‘pure natural/material processes’ (keep in mind that this is panpsychism, where mind stuff is not unique to humans or to live organisms), they generate code and symbolic information.
A semiotic structure is also found in an automated fabric loom, but the source of the organization is no more in the fabric loom than it is in the body. There is no mechanism within the loom to establish the relationships required for it to operate, and that mechanism doesn't exist in the cell either. And the fact that the cell replicates with inheritable variation from a genotype to a phenotype is of no importance, because without the establishment of a semiotic state there is no genotype to begin with. What is unaccounted for is a mechanism capable of establishing a semiotic state. Upright BiPed
bornagain77 #105: Basically you are claiming that purely material/natural processes, because they are 'conscious', can create functional information.

You missed the point. These networks can create such information not because they are conscious but because they can compute anything that is computable (provided they are large enough). Since functional information is computable, networks can generate it, provided such a product is considered a "reward" during the network's learning phase.

The common algorithm used by the adaptable networks to maximize the net (rewards - punishments) figure is to create an internal model of the environment delivering those punishments and rewards. Within the internal model, the network also has an 'ego actor' which it runs forward in time in the model space-time (in its head, as it were) against the model's environment 'actor', testing different actions of the 'ego actor' and evaluating rewards and punishments, then picking the best action of the 'ego actor' as the one to execute in the real environment. Hence this works like a chess player playing imagined moves, his own, then the opponent's, then his own,... on his mind's internal model of the chess position, until the best move is found to be played on the real board. These kinds of internal models can be reverse engineered in experiments with neural networks set to learn something.

The specified information created in such a "natural" process is the internal model, which encodes the knowledge (as learned patterns) about the environment (its laws or patterns of behavior) and about the 'ego actor' (self). Hence networks, which can be implemented in many ways and out of many materials (e.g. via solid state circuits), can create specified functional information. Cellular biochemical networks are one example of such systems, which encode this info (about environment & self) in their DNA and in epigenetic form. Human or animal brains are another example. The comment on consciousness was inserted merely so you wouldn't jump on that "distinction" between natural/material and "conscious intelligence" (which in your scheme has a different nature). It plays no role in the conclusions.

I am merely pointing out that, regardless of whether matter-energy is conscious or not, you are in the same exact empirical boat as Darwinists are in that you have ZERO evidence that purely material/natural processes, whether they are conscious or not, can create ANY functional information.

Cells do it, and they are natural/material processes. Or are you playing a 'no true Scotsman' fallacy game? I.e. as soon as something violates your postulate, you reclassify it onto the other side, so your flexible category "material/natural" seems to fulfill whatever you want it to fulfill. Note that simple physical processes can be seen as producing 'functional information' provided you define it right. E.g. when a rock falls into mud, then gets removed, there is an exact complex imprint negative of the rock's features in that mud. Such an imprint can then cast a replica positive of the rock's shape, hence you can call the negative a code of the rock's shape. More generally, interaction between any two physical systems leaves an "imprint" of one in the other, capturing info about each other in great detail (provided you have the right decoder to extract it). Laws of physics just happen to be such that they allow for this kind of imprinting and mutual encoding between systems. The point being, you need a lot more precise language before you can make categorical pronouncements of the sort cited above.
If you're trying to make pro-ID argument with such declarations, that's one of the poorer ways to do it, since it needlessly drags in piles of shaky, ill defined concepts. It's like building a house on a swamp. nightlight
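To make the "internal model plus ego actor" loop described above concrete, here is a minimal sketch of that kind of what-if planning. The world model, the action names and the reward numbers are all invented for illustration; they are not taken from nightlight's comments or from any cited source.

import random

# Sketch of the "what-if game": simulate each candidate action inside a
# learned model of the environment, tally predicted rewards minus
# punishments, then pick the best action to execute in the real world.

ACTIONS = ["advance", "retreat", "wait"]

def model_rollout(world_model, state, action, horizon=5):
    """Play the action forward in the *model* (not the real world) and
    return the net (rewards - punishments) the model predicts."""
    total, s = 0.0, state
    for _ in range(horizon):
        s, reward = world_model(s, action)
        total += reward
        action = random.choice(ACTIONS)   # crude stand-in for deeper search
    return total

def choose_action(world_model, state):
    # evaluate each ego-actor action in imagination, keep the best
    return max(ACTIONS, key=lambda a: model_rollout(world_model, state, a))

# A hypothetical learned model: advancing from low states pays off, retreating costs.
def toy_model(state, action):
    if action == "advance":
        return state + 1, (1.0 if state < 3 else -1.0)
    if action == "retreat":
        return state - 1, -0.5
    return state, 0.0

print(choose_action(toy_model, state=0))   # typically prints "advance"

The same structure, an internal model queried in imagination before acting, is what the chess-player analogy in the comment above is pointing at.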
I don't really see how panpsychism is at odds with ID. William J Murray
Basically you are claiming that purely material/natural processes, because they are 'conscious', can create functional information. I am merely pointing out that, regardless of whether matter-energy is conscious or not, you are in the same exact empirical boat as Darwinists are in that you have ZERO evidence that purely material/natural processes, whether they are conscious or not, can create ANY functional information. That is the exact demarcation point, as far as empirical evidence is concerned, that will separate you, and Darwinists, from pseudo-science!,,, Frankly, you should have gotten along quite swell with Darwinists, as far as I can tell, as you are claiming, basically, the same exact things for what we should see in reality,,,, but I guess you just weren't atheistic enough for them.,, Similar to James Shapiro's 'natural genetic engineering' predicament! bornagain77
nightlight, can you share your definition of science here? While trying to make sense of your post at #101, I was unable to find any search results for "algorithmically effective postulates", "algorithmically effective form", or "algorithmically effective elements". algorithmically effective postulates algorithmically effective form algorithmically effective elements How does your definition of science address the demarcation problem? Also, a few questions would help the rest of us gauge where you stand on design as an objective phenomenon: 1) Do you agree that chance and physical necessity are insufficient to account for designed objects such as airplanes, buildings, and computers? 2) Do you agree that intelligent agency is a causal phenomenon which can produce objects that are qualitatively distinguishable from the products of chance and necessity, such as those resulting from geological processes? 3) Do you think there are properties that designed objects can have in common which set them apart from objects explicable by chance and physical necessity? Chance Ratcliff
So basically you have no empirical evidence whatsoever towards the basic questions I asked so as to empirical support your position, (particularly on the de novo origination of functional information by what are perceived to be purely natural/material processes). What exactly are you asking i.e. what is "purely natural/material process" ? If it is what I call that, then the answer is trivial - cells and humans are 'pure natural/material processes' (keep in mind that this is panpsychism, where mind stuff is not unique to humans or to live organisms), they generate code and symbolic information. Since you are dividing world some other way, you must be asking whether someone has a setup in a vat where life (or something like that) arises from inorganic matter. I don't think anyone has figured out how to do that. As to getting down to control or explore Planckian networks directly, that field hasn't matured enough yet to enter experimental phase. As a matter of principle, a network of that type can compute anything that is computable without anyone explicitly programming it. Its intelligence (computational capacity plus programs) is additive, hence by starting at small enough scale for build up, you can have as intelligent system as needed to match any target level (such as that needed to produce life), starting with as dumb lowest level components as one wants. Obviously this is not mainstream biology, although some people in that field (Shapiro, Kauffman and many others at Santa Fe Institute) do follow this line (intelligent biochemical networks). Similarly in physics, this is not the mainstream line either, but some notable folks are working on pregeometric Planckian level physics. But no one that I know of has been following this particular path of combining the two in the manner I have been describing, hence this approach is even more ahead of its time than the two separate green shoots above. Having followed these fields (Planckian scale physics, biochemical networks, neural networks) for a while, I see all needed ingredients with required capabilities and properties, and how they would fit together to solve all 3 related problems, fine tuning of physical constants + origin of life + evolution, with a single coherent model. It's just matter of time before someone working in any of these fields for living, with access to sufficient research funding, stumbles upon the same combination of ideas (which are not that far fetched) and makes it all work. There is nothing that in principle would block the path sketched in the earlier posts. I've seen reams upon reams of words from Darwinists, (generated by the imagination/intelligence of their own `minds'), who claim basically the same thing you are claiming! As such, what makes you any different, or any less absurd, from them? Any time I debate those folks (as I did at length few years ago in talk.origins), we clash a lot worse than I do with ID folks (who also tend to be more polite than the first group). That's how I know. It is true, though, that my position is contrary to either group and you probably just see the gap on your side. But there is an even bigger gap on the other side (think of a triangle, with each position being one corner). nightlight
So basically you have no empirical evidence whatsoever towards the basic questions I asked so as to empirical support your position, (particularly on the de novo origination of functional information by what are perceived to be purely natural/material processes). And as such why should I take any of your lengthy word play as anything other than lengthy word play? I've seen reams upon reams of words from Darwinists, (generated by the imagination/intelligence of their own 'minds'), who claim basically the same thing you are claiming! As such, what makes you any different, or any less absurd, from them? bornagain77
bornagain77 #99: And I think that your beliefs are absurd, on par with Darwinian beliefs, especially given the fact that the most sure thing that you can know about yourself is that you have a mind!

The content of science is not identical to the content of human experience. You don't need postulates and logical deductions to experience happiness or anger or to see redness... Science does. If you can't put something into algorithmically effective postulates (formulas, programs), or deduce/compute it from some such postulates, it is just not science.

How come you are not complaining that your consciousness (or any consciousness) isn't encoded into the CPU of the computer you're reading this on? However real it is to you, your consciousness is just not a piece of Windows or Mac OSX code, or anything that can be expressed as a piece of such code, to run on your computer. Hence, you can't have it running there. At present, no one has figured out how to put 'consciousness' into an algorithmically effective form so that it could be a useful tool of science. You can't just stick a label on it, give it a magic wand to do anything it wants, and declare it a scientific concept. That wouldn't work if you were to stick something like that into your computer, either. So what? Your mind can't run on your PC as a program. What kind of problem is that?

So my earlier point about ID is that if it is to become part of legitimate natural science, it would be better served by algorithmically effective elements, such as a computer-like intelligent agency like Planckian networks, rather than by algorithmically undefined concepts such as 'mind' or 'deity'. When someone figures out how to reformulate any of that in algorithmic form, then no problem. nightlight
bornagain77 #97: So basically you believe that consciousness is embedded within matter/energy? i.e. a rock `thinks' or is `aware' on some level in your view? That was covered in post #58, in the description on what would one expect based on this model to experience after dying, as decay progresses from organism, to organs, to tissues, to cells, all the way down to molecules. In short, the process would appear as sequence of awakenings into ever more hyper-real, more vivid qualia (colors, sounds, smells...), sharper, more focused, quicker but more narrow consciousness. The previous phases (including our daily consciousness) would appear as a dream from the next inner level of progression down the hierarchy of networks. At the end, it is down 'with the crew', with the mind stuff of the Planckian network which was operating that 'heavy machinery', your body, which has just been dismantled for recycling of the parts. All knowledge and wisdom accumulated during life is booked into the libraries for reuse as well. At this level, running now at full throttle in native mode on the 10^80 more powerful computer than our human multilevel level simulation, one second of our time appears infinitely long. Besides the above path followed by nearly everyone, there is also the exceptional path upon death, which advances in the opposite direction and is traversed by the very few who have followed the ancient immortality recipe -- these humans re-encode (or unfold) the 'self' pattern from its fragile, relatively short lived network of neurons into the 'Self' pattern, unfolding in a larger, more permanent network of social organism (or some of its subnetworks such as religions, or schools or movements in arts, sciences, technologies,...). In that case the 'self' is phased out (ego death) well before the biological death, while the new live, conscious 'Self' is phased in. For the 'Self' the biological death of its little seed pattern (the original biological human who set off down the exceptional path) is an unexceptional event, no different than a death of a single cell is to the fetus, infant, child,... that originated it. The transition onto the excceptional path, which is analogous to egg fertilization, is usually called enlightenment (or 'cosmic consciousness' in the Bucke's analysis of this phenomenon; or being born again). It is characterized by the extatic bliss (analogous to the bliss associated with regular fertilization), ego death and opening of the gates for the journey along the exceptional path. The 'Self' pattern (which is a live, conscious being) is not some kind of clone or a personality cult or legacy or transhumanist's computer recording of the 'self' pattern. The two patterns are as different as the human is different from the DNA from which it unfolded. The consciousness of the 'Self' being, while more subtle, less intense and vivid in its sense of reality, is far richer in its spectrum of conscious experiences, integrating aspects of qualia of all the humans and other entities (non-human forms of consciousness) enfolded into its pattern. For example, when a Mozart's concerto floods you with a wave of joy and you forget yourself, becoming 'it' for a moment, there is an angelic being around you, enfolding you into its big heart, drinking that same joy from the same fountain, through you. 
Multiple "Self' patterns coexist and live simultaneously within the same social organism, permeating each other like waves on a lake, sharing for a moment the same water molecules and their oscillations, as each 'pursues its own happiness' (i.e. optimizes its network to its net rewards - punishments). Since the task of these live entities is to work out the harmonization puzzle at a next higher scale, just as each human organism is doing it at its scale, the conflicts arise in this 'angelic' realm (deities in ancient religions) that we perceive on our individual level as 'wars' (religious, cultural, political, ethnic, national,... ) i.e. the "angelic" entities are by no means some kind of idyllic beautiful, all-loving beings. As 'Self' pattern matures, there may come a moment when everything is aligned just right and a right human is enfolded into the pattern, a special resonance is aroused between this human and the seed pattern of the little 'self' long gone, which was carried implicitly by the 'Self' as a kind of holographic, spread out encoding that suddenly struck the perfect decoder for this hologram. In some religions this refocusing (from 'Self' to 'self'), resonant phenomenon is referred to as 'god becoming human' (or a demigod of ancient religions). The result is the long gone little 'self' reconstituting as individual biological human again, fully alive in a new fresh body, experiencing long forgotten intense super-real consciousness and super-vivid qualia, albeit within the narrow spectrum of a single human. As this new fragile human runs down his clock, the new 'Self' is spawned to carry the pattern to another far away resonant refocusing, or rebirth as a biological human. If so, who or what brought all the matter/energy into being at the big bang? i.e. did matter-energy think itself into existence before it existed? There is always the 'last opened' Russian doll, no matter how many you opened. As to what is inside that one, it may take some tricky twists and tinkering before it splits open and the next 'last opened' shows itself. Hence, you are always at the 'last opened' one. In other words, each theory has set of givens, the postulates, which don't tell you how did they come into existence, what's the go of them. For that you need another deeper theory, etc. nightlight
"I think that this type of computational notion of intelligent agency would have served ID a lot better than the scientifically undefined ‘mind’ or other concepts that don’t have counterparts in natural science." And I think that your beliefs are absurd, on par with Darwinian beliefs, especially given the fact that the most sure thing that you can know about yourself is that you have a mind! bornagain77
Optimus #77: But what if we see design in the very fabric of the cosmos itself? The constants of gravity, electromagnetic force, strong and weak nuclear force? Now aliens are out of the question. They can hardly take credit for designing the very universe in which they live. What recourse do we have left? Creating a universe would seem to require an intelligence that is external and causally prior to the universe.

Nope, that doesn't follow. As explained in #19 and #35, with adaptable networks you can have a form of intelligence which is additive, i.e. you start with relatively 'dumb' elements (nodes & links), using simple rules to change their states and modify links (unsupervised learning), which would cost no more in assumptions than regular physical postulates. Then you allow these 'dumb' elements to replicate, where new nodes can randomly connect into the network. You end up with an increasingly powerful self-programming distributed computer operating with the same type of algorithms as a human brain or neural networks.

As noted earlier, if you start these elemental nodes and links at the Planck scale (10^-35 m), by augmenting some of the pregeometry models of physics (e.g. Wolfram's Planckian networks, Penrose's spinor networks) with adaptable links, rather than having fixed link strengths, the resulting distributed self-programming computer would be 10^80 times more powerful than the best computer technology we could construct out of our elementary particles (which is itself 6-7 orders of magnitude ahead of our current technology). That level of intelligence ought to suffice for designing the biochemical networks of the cells, which themselves are intelligent networks, computing the tasks of molecular engineering, whether for cellular reproduction, metabolism, defense,... or large-scale DNA transformations for evolution.

The nice extra benefit of going down to the fundamental level of physics is that fine tuning of physical laws and constants is automatic. Namely, our "elementary" particles in this type of model are a large-scale 'technology' designed by the Planckian networks to extend the coordination & harmonization of their computing networks to larger-scale computers, to tap into the economies of scale. Hence, by definition our physics would be designed by them, to fit well with the task of building larger-scale computing networks, such as cellular biochemical networks, organisms, human technologies. This fit would be there for the same reason we design our keyboards to work well with our fingers, or to interface with computers. All such 'fine tuning' in our realm is the result of deliberate design as we build up our technology. The same goes for physical laws seen as 'technology' of the underlying Planckian networks.

Note also that Planckian networks are pre-geometric, i.e. they are not in space-time. They have only links and nodes as fundamental entities, with hop count between nodes being the sole distance (or space-related) concept in the system. The activity patterns arising on this network as a result of its computations would on a larger scale appear and behave as physical particles in regular space-time (analogous to gliders in Conway's Game of Life).

I think that this type of computational notion of intelligent agency would have served ID a lot better than the scientifically undefined 'mind' or other concepts that don't have counterparts in natural science. Natural science has no a priori problem with having intelligent agency as an element.
But the fundamental elements one can legitimately bring into science via postulates have to do something in the algorithmic way (they need to be of algorithmic nature, such as equations or programs), not do things by waving a magic wand and puff into existence whatever is needed. nightlight
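Nightlight never spells out equations for these adaptable networks, so the following is only a generic toy of the family being gestured at: binary +1/-1 nodes, a simple local update rule, and Hebbian-style link adaptation. Every rule and constant here is an assumption made for illustration, not anything from the comments above.

import numpy as np

# Toy "adaptable network": +/-1 nodes, links that strengthen when the nodes
# they join agree (Hebbian, unsupervised), and a local state-update rule.

rng = np.random.default_rng(0)
N = 32
state = rng.choice([-1, 1], size=N)     # node states: +1 "reward", -1 "punishment"
W = np.zeros((N, N))                    # adaptable link strengths

for step in range(200):
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1   # node follows its weighted input
    W += 0.01 * np.outer(state, state)          # links adapt to current agreement
    np.fill_diagonal(W, 0.0)                    # no self-links

print(state)   # the network settles into a self-reinforced pattern

Even this crude version shows the advertised property: the rules are purely local and "dumb", yet the network as a whole converges on stable collective patterns without any external programmer.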
Okie Dokie - "panpsychism" is the view that all matter has a mental aspect - So basically you believe that consciousness is embedded within matter/energy? i.e. a rock 'thinks' or is 'aware' on some level in your view? i.e. you do not think that consciousness exists outside of matter??? If so, who or what brought all the matter/energy into being at the big bang? i.e. did matter-energy think itself into existence before it existed?
“All the evidence we have says that the universe had a beginning.” - Cosmologist Alexander Vilenkin of Tufts University in Boston - paper delivered at Stephen Hawking's 70th birthday party (Characterized as 'Worst Birthday Present Ever')
https://uncommondesc.wpengine.com/intelligent-design/vilenkins-verdict-all-the-evidence-we-have-says-that-the-universe-had-a-beginning/

Mathematics of Eternity Prove The Universe Must Have Had A Beginning - April 2012
Excerpt: Cosmologists use the mathematical properties of eternity to show that although universe may last forever, it must have had a beginning.,,, They go on to show that cyclical universes and universes of eternal inflation both expand in this way. So they cannot be eternal in the past and must therefore have had a beginning. "Although inflation may be eternal in the future, it cannot be extended indefinitely to the past," they say. They treat the emergent model of the universe differently, showing that although it may seem stable from a classical point of view, it is unstable from a quantum mechanical point of view. "A simple emergent universe model...cannot escape quantum collapse," they say. The conclusion is inescapable. "None of these scenarios can actually be past-eternal," say Mithani and Vilenkin. Since the observational evidence is that our universe is expanding, then it must also have been born in the past. A profound conclusion (albeit the same one that lead to the idea of the big bang in the first place).
http://www.technologyreview.com/blog/arxiv/27793/

The Universe Had a Beginning - Alexander Vilenkin - video
http://www.youtube.com/watch?v=9QSZNpLzcCw
As well nightlight, in your view of reality, should not a person who was missing large portions of their brain, or of their body, be less of a conscious 'person' than a person who had a whole brain/body?
Case for the Existence of the Soul - (Argument from Divisibility) - JP Moreland, PhD - video
http://www.youtube.com/watch?v=7SJ4_ZC0xpM&feature=player_detailpage#t=2304s

Miracle Of Mind-Brain Recovery Following Hemispherectomies - Dr. Ben Carson - video
http://www.metacafe.com/watch/3994585/

Removing Half of Brain Improves Young Epileptics' Lives:
Excerpt: "We are awed by the apparent retention of memory and by the retention of the child's personality and sense of humor,'' Dr. Eileen P. G. Vining; In further comment from the neuro-surgeons in the John Hopkins study: "Despite removal of one hemisphere, the intellect of all but one of the children seems either unchanged or improved. Intellect was only affected in the one child who had remained in a coma, vigil-like state, attributable to peri-operative complications."
http://www.nytimes.com/1997/08/19/science/removing-half-of-brain-improves-young-epileptics-lives.html

Strange but True: When Half a Brain Is Better than a Whole One - May 2007
Excerpt: Most Hopkins hemispherectomy patients are five to 10 years old. Neurosurgeons have performed the operation on children as young as three months old. Astonishingly, memory and personality develop normally. ,,, Another study found that children that underwent hemispherectomies often improved academically once their seizures stopped. "One was champion bowler of her class, one was chess champion of his state, and others are in college doing very nicely," Freeman says. Of course, the operation has its downside: "You can walk, run—some dance or skip—but you lose use of the hand opposite of the hemisphere that was removed. You have little function in that arm and vision on that side is lost," Freeman says. Remarkably, few other impacts are seen. ,,,
http://www.scientificamerican.com/article.cfm?id=strange-but-true-when-half-brain-better-than-whole
As well nightlight, in your view of reality, if you do not actually believe in a transcendent Intelligence apart from matter, i.e. God, it seems that on your view of reality you have the same exact empirical difficulty that Darwinists have, in that you have ZERO empirical evidence for the de novo origination of functional information arising by what are perceived to be purely natural processes. I know of no such evidence! Perhaps you can be the first to present some evidence to the bloggers of UD!
The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010
Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.”
http://www-qa.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html

When Theory and Experiment Collide — April 16th, 2011 by Douglas Axe
Excerpt: Based on our experimental observations and on calculations we made using a published population model [3], we estimated that Darwin’s mechanism would need a truly staggering amount of time—a trillion trillion years or more—to accomplish the seemingly subtle change in enzyme function that we studied.
http://www.biologicinstitute.org/post/18022460402/when-theory-and-experiment-collide

Minimal Complexity Relegates Life Origin Models To Fanciful Speculation - Nov. 2009
Excerpt: Based on the structural requirements of enzyme activity Axe emphatically argued against a global-ascent model of the function landscape in which incremental improvements of an arbitrary starting sequence "lead to a globally optimal final sequence with reasonably high probability". For a protein made from scratch in a prebiotic soup, the odds of finding such globally optimal solutions are infinitesimally small- somewhere between 1 in 10exp140 and 1 in 10exp164 for a 150 amino acid long sequence if we factor in the probabilities of forming peptide bonds and of incorporating only left handed amino acids.
bornagain77
Sorry for the accidental bolded block of text above, I missed a b-tag to close. nightlight
bornagain77 #93: I really lost interest in looking up the precise evidence to refute you.

Good idea, since there is none. Just the fact that, as of a few days ago, new experimental design proposals are coming out on how finally to achieve the 'loophole free' BI violations ought to tell you all by itself what the status of the problem is. A few times I challenged the entire sci.physics.research group, which is a moderated newsgroup for researchers, as well as the PhysicsForums, and after weeks of debate in each case nothing ever turned up even close to a refutation of any of the challenges (theoretical & experimental aspects).

Interestingly, a fellow physicist has recently cited some of my posts there (as "nightlight") in his papers, regarding the unexpected and intriguing connection I found between Barut's SFED and the mathematical technique of Carleman Linearization (CL), which is a technique for converting a finite number of non-linear differential equations into an infinite number of linear differential equations. Namely, it turns out that Barut's SFED is an accidental rediscovery of a simple form of CL. The interesting aspect is that CL can be reformulated so that it formally looks like quantum field theory (QFT), which was done so that techniques developed in QFT can be reused in solving non-linear problems in applied math via CL. But the connection of CL with SFED allows one to reinterpret conventional QED (quantum electrodynamics; the theory behind all those BI violation experiments on photons) as a linearized form of the classical Maxwell-Dirac equations. That means that QED is a purely classical theory of fields in disguise, and the 'quantumness' or 'quantization' is a linearizing approximation, a mathematical technique without physical content. Consequently, the imagined 'quantum magic' effects are mere artifacts of the approximation technique, not physical (real) effects. The fellow above was one of the few there who got it (after checking out the SFED and CL citations I provided) and he ended up publishing several papers (and conference presentations) titled "No Drama ..." (Quantum Theory, QED,...), essentially stating that there is nothing special to quantumness; it is an unrecognized classical theory obfuscated by linearizing approximations.

do you consider yourself a pantheist? If not, exactly what philosophical framework do you classify all your overreached conclusions under?

Regarding the mind stuff (consciousness), as explained earlier, I find panpsychism the most coherent position on the subject (such as the versions of Leibniz and Spinoza). As for the other *ism, I am mostly in sync with Teilhard de Chardin and his "Omega Point" perspective, which is a special kind of theism. nightlight
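For readers unfamiliar with Carleman linearization, a standard textbook-style illustration (not taken from Barut's papers) shows how a single nonlinear equation is traded for an infinite linear system. For the scalar nonlinear ODE $\dot{x} = x^2$, define the moment variables $y_n = x^n$. Then

$$\dot{y}_n = n\, x^{n-1}\dot{x} = n\, x^{n+1} = n\, y_{n+1}, \qquad n = 1, 2, 3, \ldots$$

so the one nonlinear equation becomes the infinite linear chain $\dot{y}_1 = y_2$, $\dot{y}_2 = 2 y_3$, $\dot{y}_3 = 3 y_4, \ldots$, which can be truncated at some finite order and solved by ordinary linear methods.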
'A chess prodigy explains how his mind works – video' Philip, shouldn't that read: '....doesn't/can't explain how his mind works?' Ignore it. Just my being pedantic. Re your #29, concerning autistic savants, visual memory seems to be quite an enhanced faculty in young children. I first noticed it in a young cousin of mine, in relation to a card game involving remembering what card was located where, when a significant number of them were laid out on a table. I know our minds get better and better at 'getting worse and worse', as we get older, dispensing with what is less useful, in favour of new or developing circumstances. Though not so much in terms of housework or why I went into a particular room - now that I'm a widower. But then, attention-focus would require something more than a very marginal level of interest. I suspect the reason why computing is said to be a young person's industry has something to do with both of the above. Axel
Well nightlight, since you have slandered one of the greatest experimental physicists living today, Zeilinger, just so as to preserve your very bizarre beliefs, I really lost interest in looking up the precise evidence to refute you. i.e. your mind is made up and no matter what I present to the contrary you will find a way to confirm your preferred belief! Time will tell, and still I would bet all I have against you!,,, By the way, one more question nightlight,, do you consider yourself a pantheist? If not, exactly what philosophical framework do you classify all your overreached conclusions under? bornagain77
bornagain77 #91: So nightlight, your whole tap and dance routine appears to riding on your belief that they will not accomplish this final `loophole free experiment. Good luck with that bet!,,, Facts don't need to tap dance, hope and beliefs do. The plain fact is that they haven't obtained violations of BI after 50+ years of trying. The breakthrough was always right around the next corner, and it is so to this day, many corners later. But this time they will get there, as soon as the funding for the newer, better experiments comes through. OK. Myself, seeing Anton Zeilinger Oh, yeah, the 'quantum magician' in chief who has been peddling these wares for decades, while churning dozens of new magicians along the way, to make sure the craft will continue after his state vector gets projected into a pure horizontal component some day. continually pushing the boundaries as I have, I don't have near as much confidence as you do in a `consciousness free' interpretation of quantum mechanics. In fact I would bet every thing I have against you! My conviction is unrelated to any planned new experimental designs for yet another shot at it. I know it won't work for several fundamental reasons. Besides the pregeometry models (which preclude BI violations), the main other one is that there are alternative theories of QED (quantum electrodynamics, where these experiments fall under), which use purely local classical fields, such as coupled Maxwell-Dirac equations (nonlinear PDE's) by Asim Barut and coworkers from early 1990s (Self-field Electrodynamics, SFED). SFED has replicated the high precision QED results at least to the alpha^5 (alpha=1/137) order, which was the best results as of early 1994 (Barut unfortunately died in 94, and his theory was orphaned). Of course, SFED, being a local theory, cannot violate BI inequalities, but neither could the experiments so far, so SFED is fine as far as any existent experiments. Further, the conjectured BI violations are alpha^1 effect, thus of much lower order of precision than QED experiments. Hence if SFED predicts correctly alpha^5 (i.e. 10 digit results) effects of QED, the radiative corrections, what are the odds of it getting the wrong 1st two digits (due to failure on hypothetical BI violation experiment), while still keeping the remaining 8 digits correct (all high order radiative corrections, which exist in any BI setup as well). Moreover, what about the Leggett violation I cited? The LG inequalities are a lot weaker already at the fundamental level i.e. their loopholes are theoretical and don't need experimental loopholes (which they have, too) to classify them as irrelevant regarding the possibility 'quantum magic'. Namely, their violations, even in a loophole free experiment, exclude only a proper subset of classical theories, the hypothetical 'non-invasive' theories, saying nothing about 'invasive' classical theories. So, the LG violation is like having that "Turk" chess automaton with the locked compartment (the 'invasive' classical theories) that cannot be inspected by the spectators. Hence verification of the allowed compartments (the LG experiments) would be pointless under such constraint. nightlight
Physicists close two loopholes while violating local realism - November 2010 Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview.,,, In the current experiment, the physicists simultaneously ruled out both the locality loophole and the freedom-of-choice loophole. They performed a Bell test between the Canary Islands of La Palma and Tenerife, located 144 km apart. On La Palma, they generated pairs of entangled photons using a laser diode. Then they locally delayed one photon in a 6-km-long optical fiber (29.6-microsecond traveling time) and sent it to one measurement station (Alice), and sent the other photon 144 km away (479-microsecond traveling time) through open space to the other measurement station (Bob) on Tenerife. The scientists took several steps to close both loopholes. For ruling out the possibility of local influence, they added a delay in the optical fiber to Alice to ensure that the measurement events there were space-like separated from those on Tenerife such that no physical signal could be interchanged. Also, the measurement settings were randomly determined by quantum random number generators. To close the freedom-of-choice loophole, the scientists spatially separated the setting choice and the photon emission, which ensured that the setting choice and photon emission occurred at distant locations and nearly simultaneously (within 0.5 microseconds of each other). The scientists also added a delay to Bob's random setting choice. These combined measures eliminated the possibility of the setting choice or photon emission events influencing each other. But again, despite these measures, the scientists still detected correlations between the separated photons that can only be explained by quantum mechanics, violating local realism. By showing that local realism can be violated even when the locality and freedom-of-choice loopholes are closed, the experiment greatly reduces the number of “hidden variable theories” that might explain the correlations while obeying local realism. Further, these theories appear to be beyond the possibility of experimental testing, since they propose such things as allowing actions into the past or assuming a common cause for all events. Now, one of the greatest challenges in quantum mechanics is simultaneously closing the fair-sampling loophole along with the others to demonstrate a completely loophole-free Bell test. Such an experiment will require very high-efficiency detectors and other high-quality components, along with the ability to achieve extremely high transmission. Also, the test would have to operate at a critical distance between Alice and Bob that is not too large, to minimize photon loss, and not too small, to ensure sufficient separation. Although these requirements are beyond the current experimental set-up due to high loss between the islands, the scientists predict that these requirements may be met in the near future. “Performing a loophole-free Bell test is certainly one of the biggest open experimental challenges in the foundations of quantum mechanics,” Kofler said. “Various groups are working towards that goal. It is on the edge of being technologically feasible. 
Such an experiment will probably be done within the next five years.” http://www.physorg.com/news/2010-11-physicists-loopholes-violating-local-realism.html So nightlight, your whole tap-and-dance routine appears to be riding on your belief that they will not accomplish this final 'loophole-free' experiment. Good luck with that bet!,,, Myself, seeing Anton Zeilinger continually pushing the boundaries as I have, I don't have near as much confidence as you do in a 'consciousness free' interpretation of quantum mechanics. In fact I would bet everything I have against you! Moreover, what about the Leggett violation I cited? bornagain77
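For reference, the local-realism bound at issue in the article quoted above is the CHSH form of Bell's inequality: any local hidden-variable model must satisfy |S| <= 2, while quantum mechanics predicts values up to 2*sqrt(2), roughly 2.83. Here is a minimal sketch of that standard textbook calculation (the singlet correlation formula and angles are the usual ones, not data from the La Palma-Tenerife experiment):

    import numpy as np

    # CHSH value for a singlet state: E(a, b) = -cos(a - b) at the standard optimal angles.
    def E(a, b):
        return -np.cos(a - b)

    a1, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
    b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))              # ~2.828..., exceeding the local-realist bound of 2
    print(2 * np.sqrt(2))      # the quantum (Tsirelson) maximum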
bornagain77 #81: Yet free will does not belong to 'each node' at the 'ground level' of these 'Planckian networks'. Free will belongs exclusively to conscious observers: Anything "you" do is done by subsidiary lower-level agents, and anything they do is done by their subsidiary lower-level agents,... etc. Imagine you press a mouse button and say, "I pressed the mouse button". No you didn't, objects skeptic #1, the finger did it. Nope, says skeptic #2, it's actually the skin on the tip of the finger that did it. That's really naive, says skeptic #3, it is the skin cells that did it. Nah, says skeptic #4, it was the membrane of the cells that did it. Ridiculous, says skeptic #5, it's the molecules of the membrane that did it. Skeptic #6 chimes in: the molecule didn't do squat, it's the few atoms of the molecule that did it. It was actually the electrons of the atom that did it, objects skeptic #7. Well, not quite, corrects skeptic #8, it was the virtual photons forming the electrostatic field of those electrons that did it, by getting absorbed by the electrons of the atoms on the button surface. Sorry guys, but virtual photons are a mathematical artifact of perturbative QED, a shorthand for an intrinsic property of the Dirac-Maxwell quantum vacuum, says skeptic #9, so it is the vacuum (aka "nothing") that really did it. The last Russian doll we can open, at least in principle, is then the one that did it, by convention, since we don't know what's inside that last doll; perhaps there is another, smaller one, and that's the one that "really" did it. Planckian networks happen to be the innermost doll that we can hope to open while remaining consistent with the laws of physics as we presently know them. The above was an outside-in description following the direction of analysis, or the path of epistemology. The ontology, including the mind stuff (consciousness), unfolds in the opposite direction, emanating from the inside out. The 'agency' is thus not in the heavens, above, or the largest one, but inside, underneath, and it is the smallest one of them all. Hence what you call "your" mind stuff is the mind stuff of the innermost doll, the mind stuff of the Planckian networks (by convention, recognizing our present limitations). One conceivable model of how this propagation and composition of the mind stuff, or qualia, between the layers of technologies might work was sketched in post #58. Here's a recent variation of Wheeler's Delayed Choice Again you cite the self-promotional puffery from the "quantum magic" school of physics. As explained in post #58, these are hypothetical phenomena deduced from a gratuitously appended postulate about composite-system projection during measurement, which has no other predicted effects except the 'quantum magic' ones, which were never observed and which were never used for anything else but to promote themselves. These phenomena are thus mere speculations resting squarely on the hoped-for "loophole-free" violation of Bell inequalities (in plain language: a violation which actually violates). That's their empirical acid test. But that experiment has refused to comply with their wishes for over half a century and through numerous attempts. There were also decades of even earlier tests with somewhat more toned-down 'quantum magic' claims, based on the older criteria that were not acknowledged to be flawed until 1964, when John Bell came up with the new criteria (violation of his inequalities).
Only then did he explain why the old criteria (von Neumann, Gleason, Jauch-Piron) were invalid (or, as he said, "silly"). Of course, there were "heretics" (including Einstein, de Broglie, Schrodinger, Bohm and other QM pioneers) who had been pointing out the flaws in those older criteria and their implications for decades, yet the criteria and the related 'magic' claims stood as valid and were taught to students as such until Bell's 1964 replacement came out. Isn't it funny how this sudden clarity works; I mean, the precision of its lucky timing is just amazing. To help you chill out on citing any more 'quantum magic' claims, here are two physics preprints at arXiv from just a few days ago (March 25 and March 26, 2013), with proposals for yet another shot at a loophole-free Bell test, both acknowledging in their introductions the absence of such "loophole-free" violations so far as the motivation for these new experimental designs. The euphemism "loophole-free violation" (a violation which didn't fail to violate, i.e. which actually violates) of BI has a notable precedent among scientific hucksters. In the 18th century there was an amazing chess-playing automaton, the Turk, touring Europe and beating all challengers. The inventor, von Kempelen, would allow spectators to fully inspect the magical machine, but only one compartment at a time, with the other compartments obscured (allowing time for the hidden, small-statured player to scoot into the obscured sections). This is exactly the kind of shifting loophole found in BI-violation tests -- they can get rid of each one, provided they are allowed another one instead. And it has been going on like that for half a century. I suppose you can fool each new generation of dupes afresh. Even most physicists are unaware of the exact nature of the problems here, but just instinctively know to stay away from this tar pit. Students never learn the full story and, if lucky, may find out the omitted key parts only if they end up working in a real-world quantum optics lab. Theoretical physicists working on pregeometry models (such as Planckian networks, or spinor networks and variants), who include notables such as Penrose, 't Hooft and Wolfram, dismiss that whole 'quantum magic' cottage industry with the diplomatic euphemism "irrelevant", even though on its face the Bell inequalities would preclude these types of models (if only the 'magicians' could get the 'loophole-free' violations). nightlight
Hey BA, have you looked much into Wolfram's new kind of science? I'd be interested in your take on it. I only know enough to find it very intriguing, but not enough to seriously evaluate it as a truth claim. As to non-reality without an observer, every time I read about it, it reminds me strongly of all the tricks (calculating occlusion, tracking the camera frustum, segmenting the 3D space into red/black trees, etc.) that we use in creating virtual worlds (i.e. video games) that run in real-time so that we can focus processing power where it will be of optimum use. Phinehas
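To make the kind of trick being alluded to concrete, here is a toy sketch (made-up object names and a one-dimensional stand-in for the view frustum, not any particular engine's API): processing effort is spent only on objects the observer can actually see, and everything outside the view is simply skipped.

    # Toy culling sketch: only objects inside the "frustum" get any work done on them.
    objects = [{"name": f"obj{i}", "x": i * 10.0} for i in range(10)]
    camera_min, camera_max = 20.0, 60.0   # a 1-D stand-in for the camera frustum

    visible = [o for o in objects if camera_min <= o["x"] <= camera_max]
    for obj in visible:                   # objects outside the view are never touched
        print("render", obj["name"])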
nightlight, I don't know where you are at in your reconciliation of General Relativity and Quantum Mechanics (string theory, M-theory, etc.), but I would like to point out a very credible alternative for reconciliation that you may not have seen before: The primary conflict in reconciling General Relativity and Quantum Mechanics mathematically appears to boil down to the inability of either theory to successfully deal with the Zero/Infinity problem that crops up in different places in each theory:
THE MYSTERIOUS ZERO/INFINITY Excerpt: The biggest challenge to today’s physicists is how to reconcile general relativity and quantum mechanics. However, these two pillars of modern science were bound to be incompatible. “The universe of general relativity is a smooth rubber sheet. It is continuous and flowing, never sharp, never pointy. Quantum mechanics, on the other hand, describes a jerky and discontinuous universe. What the two theories have in common – and what they clash over – is zero.”,, “The infinite zero of a black hole — mass crammed into zero space, curving space infinitely — punches a hole in the smooth rubber sheet. The equations of general relativity cannot deal with the sharpness of zero. In a black hole, space and time are meaningless.”,, “Quantum mechanics has a similar problem, a problem related to the zero-point energy. The laws of quantum mechanics treat particles such as the electron as points; that is, they take up no space at all. The electron is a zero-dimensional object,,, According to the rules of quantum mechanics, the zero-dimensional electron has infinite mass and infinite charge. http://www.fmbr.org/editoral/edit01_02/edit6_mar02.htm Quantum Mechanics and Relativity – The Collapse Of Physics? – video – with notes as to plausible reconciliation that is missed by materialists http://www.metacafe.com/watch/6597379/
Another interesting point to draw out of this conflict between GR and QM is that mathematics itself, even if a mathematical unification between Quantum Mechanics and General Relativity were possible, would still be incomplete.
Taking God Out of the Equation – Biblical Worldview – by Ron Tagliapietra – January 1, 2012 Excerpt: Kurt Gödel (1906–1978) proved that no logical systems (if they include the counting numbers) can have all three of the following properties. 1. Validity . . . all conclusions are reached by valid reasoning. 2. Consistency . . . no conclusions contradict any other conclusions. 3. Completeness . . . all statements made in the system are either true or false. The details filled a book, but the basic concept was simple and elegant. He summed it up this way: “Anything you can draw a circle around cannot explain itself without referring to something outside the circle—something you have to assume but cannot prove.” For this reason, his proof is also called the Incompleteness Theorem. Kurt Gödel had dropped a bomb on the foundations of mathematics. Math could not play the role of God as infinite and autonomous. It was shocking, though, that logic could prove that mathematics could not be its own ultimate foundation. Christians should not have been surprised. The first two conditions are true about math: it is valid and consistent. But only God fulfills the third condition. Only He is complete and therefore self-dependent (autonomous). God alone is “all in all” (1 Corinthians 15:28), “the beginning and the end” (Revelation 22:13). God is the ultimate authority (Hebrews 6:13), and in Christ are hidden all the treasures of wisdom and knowledge (Colossians 2:3). http://www.answersingenesis.org/articles/am/v7/n1/equation#
i.e. the ‘incompleteness theorem’ shows that the ‘truthfulness’ of any mathematical equation is not held within the equation itself but is dependent on God for its ultimate grounding. Though this point has been highly contested, it is, despite its contentious nature, fairly evident:
BRUCE GORDON: Hawking’s irrational arguments – October 2010 Excerpt: Rather, the transcendent reality on which our universe depends must be something that can exhibit agency – a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. This is what “breathes fire into the equations and makes a universe for them to describe.” Anything else invokes random miracles as an explanatory principle and spells the end of scientific rationality.,,, Universes do not “spontaneously create” on the basis of abstract mathematical descriptions, nor does the fantasy of a limitless multiverse trump the explanatory power of transcendent intelligent design. What Mr. Hawking’s contrary assertions show is that mathematical savants can sometimes be metaphysical simpletons. Caveat emptor. http://www.washingtontimes.com/news/2010/oct/1/hawking-irrational-arguments/
Yet if we rightfully allow God into mathematics, so as to offer a plausible reconciliation between Quantum Mechanics and General Relativity, and so as to bring 'true completeness' to math,,,,
The God of the Mathematicians – Goldman Excerpt: As Gödel told Hao Wang, “Einstein’s religion [was] more abstract, like Spinoza and Indian philosophy. Spinoza’s god is less than a person; mine is more than a person; because God can play the role of a person.” – Kurt Gödel – (Gödel is considered one of the greatest logicians who ever existed) http://www.firstthings.com/article/2010/07/the-god-of-the-mathematicians
,,,then we find that an empirically based reconciliation between Quantum Theory and General Relativity readily pops out in the ‘event horizon’ witnessed on the Shroud of Turin:
Turin Shroud 3-D Hologram Reveals The Words 'The Lamb' - short video http://www.metacafe.com/watch/4041205 THE EVENT HORIZON (Space-Time Singularity) OF THE SHROUD OF TURIN. – Isabel Piczek – Particle Physicist Excerpt: We have stated before that the images on the Shroud firmly indicate the total absence of Gravity. Yet they also firmly indicate the presence of the Event Horizon. These two seemingly contradict each other and they necessitate the past presence of something more powerful than Gravity that had the capacity to solve the above paradox.,, Particle Radiation from the Body – July 2012 – M. Antonacci, A. C. Lind Excerpt: The Shroud’s frontal and dorsal body images are encoded with the same amount of intensity, independent of any pressure or weight from the body. The bottom part of the cloth (containing the dorsal image) would have born all the weight of the man’s supine body, yet the dorsal image is not encoded with a greater amount of intensity than the frontal image. Radiation coming from the body would not only explain this feature, but also the left/right and light/dark reversals found on the cloth’s frontal and dorsal body images. http://www.academicjournals.org/sre/PDF/pdf2012/30JulSpeIss/Antonacci.pdf General Relativity, Quantum Mechanics, Entropy, and The Shroud Of Turin – updated video http://vimeo.com/34084462
Music and verse:
Empty (Empty Cross Empty Tomb) with Dan Haseltine Matt Hammitt (Music Inspired by The Story) http://www.godtube.com/watch/?v=F22MCCNU Colossians 1:15-20 The Son is the image of the invisible God, the firstborn over all creation. For in him all things were created: things in heaven and on earth, visible and invisible, whether thrones or powers or rulers or authorities; all things have been created through him and for him. He is before all things, and in him all things hold together. And he is the head of the body, the church; he is the beginning and the firstborn from among the dead, so that in everything he might have the supremacy. For God was pleased to have all his fullness dwell in him, and through him to reconcile to himself all things, whether things on earth or things in heaven, by making peace through his blood, shed on the cross.
bornagain77
Same to you kairos and the rest of you renegades. Axel
Or rather like Cornelius explaining to me the intricacies of molecular biology, when most of the words are completely absent from my lexicon. Axel
The fundamental problem rendering productive discussion with our Darwinist/naturalist/materialist/putatively ATHEIST friends impossible, often touched on here, if usually obliquely, is that not only do they lack any purchase upon empirical reality, but at the point at which this is pointed out to them, their reasoning faculty founders and totally collapses. The absolute limit of any concession they will accord to the role of the mind, in any discussion about its nature and relationship to the material world, is pantheism - at the retail level, animism. Although, of course, they would deny such religious commitments, the fact is that, absent the moral imperatives of the mainstream formal religions (by definition: 'religare' - to bind), and being ultimately self-referential, their commitment is as absolute as that of any of the Apostles. So it's rather like talking to a class of 9-year-olds about something appropriate to their age, and then launching into a talk about the bizarre nature of matter at the quantum level. You lose them. That's it. Axel
A HAPPY EASTER WEEKEND TO ALL. kairosfocus
RV: Yup, von Neumann's kinematic self-replicator is at the threshold. What does this tell us regarding the origin of the required FSCO/I in light of Paley's further example of the self-replicating watch? KF kairosfocus
Joe quoting Lizzie... "(although as yet, human-designed products lack the ability to self-reproduce, except in virtual space, and self-replication is a necessary condition for evolution)." Just thought I'd throw this into the mix. 3D printers might *soon* be able to replicate themselves. I assume that it's only a matter of time before they join in this discussion! ronvanwegen
nightlight, you state:
Back to the subject — regarding free will, I don't subscribe to deterministic models of these Planckian networks. At the ground level, there should be an elemental decision step, freely chosen by each node as to which state, +1 or -1, it will signal as its state next,,,
Yet free will does not belong to 'each node' at the 'ground level' of these 'Planckian networks'. Free will belongs exclusively to conscious observers: Here's a recent variation of Wheeler's Delayed Choice experiment, which highlights the ability of the conscious observer to effect 'spooky action into the past', thus further solidifying consciousness's centrality in reality. Furthermore, in the following experiment the claim that past material states determine future conscious choices (determinism) is falsified by the fact that present conscious choices affect past material states:
Quantum physics mimics spooky action into the past - April 23, 2012 Excerpt: The authors experimentally realized a "Gedankenexperiment" called "delayed-choice entanglement swapping", formulated by Asher Peres in the year 2000. Two pairs of entangled photons are produced, and one photon from each pair is sent to a party called Victor. Of the two remaining photons, one photon is sent to the party Alice and one is sent to the party Bob. Victor can now choose between two kinds of measurements. If he decides to measure his two photons in a way such that they are forced to be in an entangled state, then also Alice's and Bob's photon pair becomes entangled. If Victor chooses to measure his particles individually, Alice's and Bob's photon pair ends up in a separable state. Modern quantum optics technology allowed the team to delay Victor's choice and measurement with respect to the measurements which Alice and Bob perform on their photons. "We found that whether Alice's and Bob's photons are entangled and show quantum correlations or are separable and show classical correlations can be decided after they have been measured", explains Xiao-song Ma, lead author of the study. According to the famous words of Albert Einstein, the effects of quantum entanglement appear as "spooky action at a distance". The recent experiment has gone one remarkable step further. "Within a naïve classical world view, quantum mechanics can even mimic an influence of future actions on past events", says Anton Zeilinger. http://phys.org/news/2012-04-quantum-physics-mimics-spooky-action.html
In other words, if my conscious choices really are merely the result of whatever state the material particles in my brain happened to be in in the past (determinism), how in blue blazes are my choices instantaneously affecting the state of material particles in the past?,,, I consider the preceding experimental evidence to be an improvement over the traditional 'uncertainty' argument for free will from quantum mechanics that had been used to undermine the deterministic belief of materialists. Moreover, nightlight, to further undermine your claim that free will belongs to 'each node' at the 'ground level' of these 'Planckian networks' (Pantheism??),,,
The Scale of The Universe - Part 2 - interactive graph (recently updated in 2012 with cool features) http://htwins.net/scale2/scale2.swf?bordercolor=white
The preceding interactive graph points out that the smallest scale visible to the human eye (as well as the size of a human egg) is at 10^-4 meters, which 'just so happens' to be directly in the exponential center of all possible sizes of our physical reality (not just ‘nearly’ in the exponential center!). i.e. 10^-4 is, exponentially, right in the middle between 10^-35 meters, the Planck length (the smallest possible unit of length), and 10^27 meters, the diameter of the observable universe (the largest 'observable' length since space-time was created in the Big Bang): the geometric mean of those two extremes is 10^((-35+27)/2) = 10^-4 meters. This is very interesting, for, as far as I can tell, the limits of human vision (as well as the size of the human egg) could, theoretically, have been at very different positions than directly in the exponential middle. Moreover, nightlight, you claim that,,,
This is “research” from that same speculative parasitic branch of quantum theory that has nothing but a half-century-long chain of failed experiments and truckloads of ever more elaborate euphemisms to explain why it never works. That branch has absolutely no connection with the legendary precision of Quantum Electrodynamics (QED). The parasitic para-science layers arise and wrap around any successful real science, claiming credit by proximity.
Yet that 'parasitic branch', contrary to what you believe, is what forms the actual core of quantum theory, and it is the legendary precision of QED that has been 'wrapped around' that quantum theoretic core, not the other way around.
Quantum Mechanics vs. General Relativity Excerpt: The Gravity of the Situation The inability to reconcile general relativity with quantum mechanics didn’t just occur to physicists. It was actually after many other successful theories had already been developed that gravity was recognized as the elusive force. The first attempt at unifying relativity and quantum mechanics took place when special relativity was merged with electromagnetism. This created the theory of quantum electrodynamics, or QED. It is an example of what has come to be known as relativistic quantum field theory, or just quantum field theory. QED is considered by most physicists to be the most precise theory of natural phenomena ever developed. In the 1960s and ’70s, the success of QED prompted other physicists to try an analogous approach to unifying the weak, the strong, and the gravitational forces. Out of these discoveries came another set of theories that merged the strong and weak forces called quantum chromodynamics, or QCD, and quantum electroweak theory, or simply the electroweak theory, which you’ve already been introduced to. If you examine the forces and particles that have been combined in the theories we just covered, you’ll notice that the obvious force missing is that of gravity.,,, http://www.infoplease.com/cig/theories-universe/quantum-mechanics-vs-general-relativity.html
Supplemental note on your false claim of a 'half-century-long chain of failed experiments':
Quantum physics says goodbye to reality - Apr 20, 2007 Excerpt: Many realizations of the thought experiment have indeed verified the violation of Bell's inequality. These have ruled out all hidden-variables theories based on joint assumptions of realism, meaning that reality exists when we are not observing it; and locality, meaning that separated events cannot influence one another instantaneously. But a violation of Bell's inequality does not tell specifically which assumption – realism, locality or both – is discordant with quantum mechanics. Markus Aspelmeyer, Anton Zeilinger and colleagues from the University of Vienna, however, have now shown that realism is more of a problem than locality in the quantum world. They devised an experiment that violates a different inequality proposed by physicist Anthony Leggett in 2003 that relies only on realism, and relaxes the reliance on locality. To do this, rather than taking measurements along just one plane of polarization, the Austrian team took measurements in additional, perpendicular planes to check for elliptical polarization. They found that, just as in the realizations of Bell's thought experiment, Leggett's inequality is violated – thus stressing the quantum-mechanical assertion that reality does not exist when we're not observing it. "Our study shows that 'just' giving up the concept of locality would not be enough to obtain a more complete description of quantum mechanics," Aspelmeyer told Physics Web. "You would also have to give up certain intuitive features of realism." http://physicsworld.com/cws/article/news/27640
Here is a good article which gives a bit of the history behind the preceding experiment (Please note, towards the end of the article, that Leggett himself refused to accept the results of the experiment that he was instrumental in formulating, and that Zeilinger was able to bring to successful fruition, because it clashed with his atheistic/materialistic worldview!):
A team of physicists in Vienna has devised experiments that may answer one of the enduring riddles of science: Do we create the world just by looking at it? - 2008 http://seedmagazine.com/content/article/the_reality_tests/P1/
further notes:
“I’m going to talk about the Bell inequality, and more importantly a new inequality that you might not have heard of called the Leggett inequality, that was recently measured. It was actually formulated almost 30 years ago by Professor Leggett, who is a Nobel Prize winner, but it wasn’t tested until about a year and a half ago (in 2007), when an article appeared in Nature, that the measurement was made by this prominent quantum group in Vienna led by Anton Zeilinger, which they measured the Leggett inequality, which actually goes a step deeper than the Bell inequality and rules out any possible interpretation other than consciousness creates reality when the measurement is made.” – Bernard Haisch, Ph.D., Calphysics Institute, is an astrophysicist and author of over 130 scientific publications.
Preceding quote taken from this following video;
Quantum Mechanics and Consciousness - A New Measurement - Bernard Haisch, Ph.D (Shortened version of entire video with notes in description of video) http://vimeo.com/37517080 Looking Beyond Space and Time to Cope With Quantum Theory – (Oct. 28, 2012) Excerpt: To derive their inequality, which sets up a measurement of entanglement between four particles, the researchers considered what behaviours are possible for four particles that are connected by influences that stay hidden and that travel at some arbitrary finite speed. Mathematically (and mind-bogglingly), these constraints define an 80-dimensional object. The testable hidden influence inequality is the boundary of the shadow this 80-dimensional shape casts in 44 dimensions. The researchers showed that quantum predictions can lie outside this boundary, which means they are going against one of the assumptions. Outside the boundary, either the influences can’t stay hidden, or they must have infinite speed.,,, The remaining option is to accept that (quantum) influences must be infinitely fast,,, “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” says Nicolas Gisin, Professor at the University of Geneva, Switzerland,,, http://www.sciencedaily.com/releases/2012/10/121028142217.htm
Also of note, here is a neat little video on the infamous double-slit experiment, with Anton Zeilinger, that I found the other day. (Please note the materialist's question: "Is Anything Real?")
Quantum Mechanics - Double Slit Experiment - Is Anything Real? (Prof. Anton Zeilinger) - video http://www.youtube.com/watch?v=ayvbKafw2g0
Verse and Music:
Revelation 4:11 "You are worthy, our Lord and God, to receive glory and honor and power, for you created all things, and by your will they were created and have their being." Kari Jobe - Revelation Song - Passion 2013 http://www.youtube.com/watch?v=3dZMBrGGmeE
bornagain77
One last thought before bed... I think that an unreflective adherence to the categories of "natural" and "artificial" makes it very difficult for people to fairly examine design arguments. Typically we use those terms in a way that is mutually exclusive (natural things are not artificial and vice versa) and jointly exhaustive (everything is either natural or artificial). ID, however, plays havoc with those tidy categories. If ID is correct, then many things that would otherwise be considered natural are actually products of design or artifice (in this context meaning clever and artful skill or ingenuity), hence artificial. A mind that is firmly locked into the natural-versus-artificial dichotomy is going to have great difficulty accepting that there can be any sort of overlap in those two categories, and any line of reasoning that would lead to such a seemingly absurd conclusion would have to be dismissed as fallacious (or at least incorrigibly mischievous!). Optimus
Nightlight Regarding your concerns about conflating different terms (e.g. 'evolution' with 'Neo-Darwinism'), I agree that it's important to be specific, and I fully acknowledge that at times it's all too easy to get lazy with language. However, I think that KF, EA, and SB adequately addressed the matter in posts 3, 39, and 45 (I think). In fact, practically every pro-ID book I've read shows painstaking care in drawing the distinctions between the different meanings of 'evolution'. As commonly used, 'evolution' can mean 'change in allele frequencies in a population over time' (a definition that is profoundly uncontroversial and not terribly interesting). Sometimes it's used with reference to a mechanism of biological change (e.g. random mutation + natural selection or genetic drift). Sometimes it means mechanism + universal common ancestry. Sometimes it means mechanism + universal common ancestry + metaphysical baggage (i.e. that it's purposeless, unguided, uncaring, etc.). A very good summary of the variant meanings of evolution can be found in the book Darwinism, Design, And Public Education. Usually the last sense is what I mean when I use the term Neo-Darwinism. Optimus
StephenB @ 48 Thank you for your kind words! Optimus
Nightlight @ 19
Therefore MN doesn’t exclude a hypothetical intelligent agency guiding processes of life and their evolution, either. It only excludes ‘deus ex machina’ proposals (such as invocation of “mind” as an explanation, since present natural science lacks a model for “mind stuff”; cf. hard problem of consciousness).
Perhaps, in the strictest usage of the term, methodological naturalism does not by necessity exclude intelligent agency as a category of causal explanation. After all, it could be argued, scientists are perfectly willing to entertain the possibility of intelligent causation when examining objects recovered from archaeological sites. Yet an archaeologist would probably feel quite comfortable that he or she had not violated the principle of MN. However, in practice MN is often used as a demarcation argument in order to partition off ID from 'respectable' scientific inquiry. When Judge Jones made his ruling in the Kitzmiller v. Dover case, he quite specifically appealed to MN as the distinguishing feature of 'real' science that would rule out ID. Philosophers, such as Nancey Murphy, have made similar arguments. Right or wrong (as far as definitions are concerned), persons opposed to ID as a scientific program repeatedly trot out MN as a rationale for why it just doesn't belong. It could probably be said that methodological naturalism is an attempt to preemptively strike out against the implications of ID. While ID modestly doesn't make specific claims about the nature of the designer, out of respect for reasonable limits of inference, I would never deny that it is at least congenial to a theistic worldview, and this is where a lot of people become uncomfortable. After all, it's one thing to say that an arrowhead was designed - humans can do that, right? To say that life on earth was designed is a further stretch that's likely too far for many, but it does fall within the conventional naturalistic framework - the designers could be aliens, right? Even Francis Crick could go along with that one. But what if we see design in the very fabric of the cosmos itself? The constants of gravity, electromagnetic force, strong and weak nuclear force? Now aliens are out of the question. They can hardly take credit for designing the very universe in which they live. What recourse do we have left? Creating a universe would seem to require an intelligence that is external and causally prior to the universe. Not only that, but such a being would have to be immeasurably powerful, possessing an incomprehensible intellect. What else could be the cause of a universe that contains such prodigious quantities of energy? That sounds suspiciously too much like a god of some sort; therefore, the reasoning proceeds, any putative 'science' that leads in that direction must really be just a clandestine attempt to smuggle religion into the natural sciences! Those ID/creationists are so sneaky! Optimus
LYO, Cf. the logic of contingency and necessity of being and where it points. Here is what may be a helpful 101. KF kairosfocus
Computerist:
only intelligence can be responsible for intelligence
I completely agree, with the exception of the first intelligence. After the first uncaused intelligence, it's obvious that ALL intelligence must come from intelligence. lastyearon
Nightlight, yes exactly, we were parodying each other. Apologies if you felt it was directed at you. I won't deny ever using ridicule here, but I'm more inclined to give the benefit of the doubt to someone who is not obviously hostile or disrespectful. In that spirit, welcome to UD. Let me ask you a serious question or three to gauge your disposition to ID and perhaps your sincerity, such as: Do you agree that at the least, chance, physical necessity, and agency are altogether needed to account for most, if not all of what we observe? As a corollary, and for clarity, do you agree that chance and physical necessity are insufficient to account for jet airplanes, computers and algorithms, and skyscrapers? Lastly, do you think that things which are known to be designed, like the aforementioned artifacts, have features or properties in common which set them apart from things which are naturally produced, such as the results of geological processes? Chance Ratcliff
CR @71: Don’t flatter yourself, nightlight. :) That’s a cookie cutter ID critic, and I began doing it before you ever showed up. Not only so, but I didn’t read your posts before responding to LYO, who had introduced his own shtick days if not weeks ago. I see, the two of you were parodying each other back & forth. I think he may have gone one extra loop and was parodying his parody of your position in his response to me, so to me both appeared as an attempt at parody of my own posts. Or maybe, since I invariably end up on the opposite pole from both sides on this subject, then as GW Bush would begin, the enemy of my enemy... nightlight
bornagain77 (#46): Perhaps nightlight instead of the futility of trying to explain how consciousness emerged from a material basis you should finally be `scientific' and admit that your worldview is false? Of course it is false. It would have been terribly disappointing if I had already figured it all out, with decades to go and no mystery of importance to ponder any more. When the mystery is gone, I am gone, too. Luckily, we are all deep ignoramuses, each of us a little ant looking at the same elephant through his own tiny pinhole, wondering what kind of ant could this be. That's what we're here for. nightlight
Don't flatter yourself, nightlight. :) That's a cookie cutter ID critic, and I began doing it before you ever showed up. Not only so, but I didn't read your posts before responding to LYO, who had introduced his own shtick days if not weeks ago. So it had nothing to do with you directly. If you see parallels, that's on you. ;) Chance Ratcliff
Mr. Anderson @68 -- it was more than obvious he was spoofing my posts (yep, it was humorous, in its childish way of understanding what I wrote). You know how that goes: first they ignore you, then they spoof you, then they fight you, then... nightlight
Eric @68, I get it, I really do, but somehow a sarc tag just seems like cheating. However it's not my intention to snare anyone who may have actually heard similar arguments put forth with a straight face. In my defense, let me just say that lastyearon started it. ;) Chance Ratcliff
Chance @51 et seq.: You might want to add a /sarc tag, especially for the newcomers. :) We see all manner of arguments here, and some of the materialist arguments are indeed difficult to parody, so sometimes it is hard to tell when someone is being sarcastic. Eric Anderson
bornagain77: Yet, despite whatever limits you think one should place on scientific inquiry prior to investigation, it is clear exactly where consciousness (and free will) come into play for ID: I need to admit that I admire and enjoy the fruits of your tireless unearthing of myriad interesting facts and perspectives, extraction of relevant quotes and links, all packaged nicely in reader-friendly form. Great work. Along the same tangent, some day, when one of these startups I like to join makes it big and I don't need to work for a living any more, I daydream (and actually have quite a few sketches and algorithmic elements) of writing a program which can vastly expand the network of neurons making up a person's brain into a coherently integrated, live network of facts, ideas and connections, operating together with the brain at full speed, effortlessly and seamlessly, as a single natural, super-intelligent hybrid entity. This kind of knowledge base has to be a network with adaptable links, shaped and tuned to capture the most subtle thought patterns one has, to keep them alive and thinking even when the carbon component is already thinking something else, or asleep, or eventually when it wears out and perishes. Note that this is not a brute-force core-dump & upload of your brain info into a computer of the future (a la Kurzweil's idea of immortalizing via singularity technology), but rather a live hybrid system that operates as one seamless intelligent system with its carbon counterpart. Back to the subject -- regarding free will, I don't subscribe to deterministic models of these Planckian networks. At the ground level, there should be an elemental decision step, freely chosen by each node as to which state, +1 or -1, it will signal as its state next (of course, any free choice has consequences, thus a node could not signal a permanent +1, the happy state). In the mind-stuff (+1,-1) mapping sketched earlier as one conceivable scientific model of 'mind stuff', this choice would be the elemental act of free will. It would propagate and get amplified up through the higher levels of networks, through physical particles (which already obey non-deterministic quantum laws), then to biochemical networks, then to the brain level, etc., like all the other mind-stuff attributes of the model, and that would represent the 'what is it like to decide' sense or quale. Dr. Abel's writing seems interesting. He brings up the "unreasonable effectiveness of mathematics", which I, too, found to be a very interesting phenomenon. This post has a little section, toward the middle of the longer post, on that same paper by Eugene Wigner and how that may actually work, which goes beyond Dr. Abel's (or Wigner's) non-constructive awe at the phenomenon. In our best mathematical description of reality (quantum mechanics), which is verified to something like 13 decimal places, it is now found that one cannot ever improve quantum theory over its present state, with the only a priori assumptions being that measurements (conscious observations) can be freely chosen... This is "research" from that same speculative parasitic branch of quantum theory that has nothing but a half-century-long chain of failed experiments and truckloads of ever more elaborate euphemisms to explain why it never works. That branch has absolutely no connection with the legendary precision of Quantum Electrodynamics (QED). The parasitic para-science layers arise and wrap around any successful real science, claiming credit by proximity.
As always, you recognize them by their fruits -- they never produced anything that did anything for anyone except themselves, as empty props to help promote their own quantum magic shows. They have been talking about and promising ever more miraculous technologies for more than half a century, without ever showing anything that works. In molecular biology you similarly have loud-mouthed neo-Darwinists horning in on the great scientific and technological advances of recent decades, peddling their gratuitous "random mutation" (a foundation stone and a springboard for the proselytizing of atheism) and declaring that nothing in biology and genetics makes sense except in the "light" of (their theory of) evolution. This is the other pea, but in physics, out of the same parasitic peapod. You would be wiser to stay away from that mutual back-patting society of quantum magicians. Just like the neo-Darwinists, these too have their friendly chain of peer-reviewed publishing and praising of each other in a self-referential circle jerk. I spent a few years studying that stuff, then a few more thinking and digesting, as well as doing work in real-world quantum optics labs, before it finally dawned on me what's going on in that strange branch of physics. (Yep, I also believed in the Darwinian just-so stories, way back through college and grad school, until the ID critique woke me up to what that was all about.) nightlight
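For readers who want something concrete to picture, here is a toy sketch of the kind of adaptable network of +1/-1 nodes described in the comment above: each node "chooses" its next state stochastically, biased but not determined by its neighbours. This is a generic stochastic binary network with made-up parameters, offered purely as an illustration, not the specific Planckian-network model under discussion.

    import numpy as np

    # Toy network: 16 nodes, each holding a state of +1 or -1, linked by adaptable weights.
    rng = np.random.default_rng(0)
    n_nodes = 16
    weights = rng.normal(scale=0.5, size=(n_nodes, n_nodes))  # made-up link strengths
    np.fill_diagonal(weights, 0.0)
    state = rng.choice([-1, 1], size=n_nodes)                  # initial +1/-1 states

    for step in range(100):
        field = weights @ state                      # influence arriving at each node
        p_plus = 1.0 / (1.0 + np.exp(-2 * field))    # bias toward +1, never certainty
        state = np.where(rng.random(n_nodes) < p_plus, 1, -1)  # each node's stochastic "choice"

    print(state)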
wallstreeter, I don't know if you have these yet, but here are my notes on the preceding work that was done overturning the flawed carbon dating of the late 1980s: Shroud of Turin - Carbon 14 test proves false (with Raymond Rogers, lead chemist from the STURP project) - video http://www.youtube.com/watch?v=GxDdx6vxthE Discovery Channel - Unwrapping The Shroud of Turin New Evidence - video http://www.youtube.com/watch?v=YWyiZtagxX8 The following is the main peer reviewed paper which has refuted the 1989 Carbon Dating: Why The Carbon 14 Samples Are Invalid, Raymond Rogers per: Thermochimica Acta (Volume 425 pages 189-194, Los Alamos National Laboratory, University of California) Excerpt: Preliminary estimates of the kinetics constants for the loss of vanillin from lignin indicate a much older age for the cloth than the radiocarbon analyses. The radiocarbon sampling area is uniquely coated with a yellow–brown plant gum containing dye lakes. Pyrolysis-mass-spectrometry results from the sample area coupled with microscopic and microchemical observations prove that the radiocarbon sample was not part of the original cloth of the Shroud of Turin. The radiocarbon date was thus not valid for determining the true age of the shroud. The fact that vanillin can not be detected in the lignin on shroud fibers, Dead Sea scrolls linen, and other very old linens indicates that the shroud is quite old. A determination of the kinetics of vanillin loss suggests that the shroud is between 1300- and 3000-years old. Even allowing for errors in the measurements and assumptions about storage conditions, the cloth is unlikely to be as young as 840 years. http://www.ntskeptics.org/issues/shroud/shroudold.htm Rogers passed away shortly after publishing this paper, but his work was ultimately verified by the Los Alamos National Laboratory: Carbon Dating Of The Turin Shroud Completely Overturned by Scientific Peer Review Excerpt: Rogers also asked John Brown, a materials forensic expert from Georgia Tech to confirm his finding using different methods. Brown did so. He also concluded that the shroud had been mended with newer material. Since then, a team of nine scientists at Los Alamos has also confirmed Rogers work, also with different methods and procedures. Much of this new information has been recently published in Chemistry Today. http://shroudofturin.wordpress.com/2009/02/19/the-custodians-of-time/ This following is the Los Alamos National Laboratory report and video which completely confirms the Rogers' paper: “Analytical Results on Thread Samples Taken from the Raes Sampling Area (Corner) of the Shroud Cloth” (Aug 2008) Excerpt: The age-dating process failed to recognize one of the first rules of analytical chemistry that any sample taken for characterization of an area or population must necessarily be representative of the whole. The part must be representative of the whole. Our analyses of the three thread samples taken from the Raes and C-14 sampling corner showed that this was not the case....... LANL’s work confirms the research published in Thermochimica Acta (Jan. 2005) by the late Raymond Rogers, a chemist who had studied actual C-14 samples and concluded the sample was not part of the original cloth possibly due to the area having been repaired. - Robert Villarreal - Los Alamos National Laboratory http://www.ohioshroudconference.com/ Shroud Of Turin Carbon Dating Overturned - Robert Villarreal - Press Release video http://www.metacafe.com/watch/4041193 bornagain77
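As a generic illustration of how a first-order kinetics argument of the kind described above turns a measured loss into an age estimate: for first-order decay, age t = ln(N0/N) / k. The rate constant and remaining fraction below are purely hypothetical placeholders, not Rogers' actual vanillin parameters, so the printed number is not the Shroud estimate.

    import numpy as np

    # Hypothetical numbers only, for the shape of the calculation.
    k = 1e-3                      # hypothetical first-order decay constant, per year
    remaining_fraction = 0.05     # hypothetical fraction of the substance still detectable
    age_years = -np.log(remaining_fraction) / k   # t = ln(N0/N) / k
    print(age_years)              # ~2996 years with these made-up inputs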
wallstreeter, I don't know if the Shroud dating will be posted, but I sure appreciate you bringing this to my attention! :) As you did with other stuff on the Shroud a little while back. bornagain77
Upright BiPed: Using the parameters which you indicate must be the basis of a scientific conclusion (i.e. model space, empirical procedures and facts, operational rules mapping numbers between the model and the empirical), how does science come to such a conclusion about fire? Or is such a conclusion arrived at through other appropriate scientific means, observation, materials knowledge, etc. Good point on an aspect not fleshed out in that post (it was already long as it is). That triune schematic of a natural science is an abstract scientific model with a 'natural science' as its object. Hence, in a sense, it is an external, idealized view of the main elements of a mature science, with the "messy" process that produced the results, the models, algorithms and mappings abstracted away. If you were to describe a mature, modern process of steel production, it would include a lot of steel machinery that seems vital for the operation. The system appears hopelessly interlocked, and if someone were to take away the steel needed in the production, you couldn't produce it any more. Yet it is all rolling happily along now. The same goes for the modern design and manufacture of computers or processors, which require quite a bit of computing power at all stages of the process. Or, for that matter, producing a chicken requires an egg, which in turn requires another chicken,... One arrives at such final, polished, interlocked systems through a long chain of creative leaps of imagination. In the case of technologies and sciences, it is the leaps of human imagination that drive the process. In the case of chicken and egg, it's the leaps of imagination by the biochemical networks of their ancient ancestors. The biochemical networks are mathematically the same type of adaptable networks as our brain, a self-programming distributed computer, merely with different technologies used for implementing the links and nodes. (More details with references are in a talk.origins post "Biochemical networks & their algorithms".) In turn, these biochemical networks, which are designing, constructing and improving live organisms, are themselves a large-scale computing technology designed and built by other, even denser and faster networks (Planckian networks), which are computing the physical & chemical laws (including their space-time parameterization), moment to moment, for every particle in the physical universe. That is how the physical laws and constants just happen to be perfectly tuned for life -- they are tuned, designed and built specifically to help construct an even larger-scale intelligent network technology, the biochemical networks, which in turn followed along the same path with the design of life and humans; then humans followed with human technologies. Note that the physical laws as presently known are merely a coarse-grained approximation du jour of the far more subtle real laws which are being computed. Hence, from the perspective of our present crude knowledge, this 'fine tuning' appears mysterious. It is all the same harmonization process unfolding at ever larger scales, harmonizing the actions of ever larger systems, as if seeking the perfect harmony of Leibniz's monads. Think for a moment: the two of us, who are in some sense just two bundles of molecules bouncing around, possibly thousands of miles apart, have our actions mutually harmonized, at least for the duration of this exchange of UD messages.
What are the odds of something like that, without the vast chain of intelligent processes going back eons, patiently building upon each other's creations and technologies all that is needed for harmonization at that scale? Although consciousness (mind stuff) is outside of the present natural sciences, the most coherent conjecture for some future science, considering the similarity of their fundamental algorithms, is that all such networks are conscious, not just the human brain. (A bit of my own informal speculation on the subject is sketched in post #58 above.) nightlight
KF, I'll take that as a compliment. :) Chance Ratcliff
Off topic guys, but did anyone catch the news just released on the Shroud of Turin? It's huge news, and these tests are being submitted for peer review as well. Can someone make a blog post about this here, please? New tests done on fibers of the shroud put it at the time of Christ. I know that bornagain77 will be very interested in this as well. http://www.telegraph.co.uk/news/worldnews/europe/italy/9958678/Turin-Shroud-is-not-a-medieval-forgery.html Turin Shroud 'is not a medieval forgery' The Turin Shroud is not a medieval forgery, as has long been claimed, but could in fact date from the time of Christ's death, a new book claims. By Nick Squires, Rome correspondent, 10:24AM GMT 28 Mar 2013 Experiments conducted by scientists at the University of Padua in northern Italy have dated the shroud to ancient times, a few centuries before and after the life of Christ. Many Catholics believe that the 14ft-long linen cloth, which bears the imprint of the face and body of a bearded man, was used to bury Christ's body when he was lifted down from the cross after being crucified 2,000 years ago. The analysis is published in a new book, "Il Mistero della Sindone" or The Mystery of the Shroud, by Giulio Fanti, a professor of mechanical and thermal measurement at Padua University, and Saverio Gaeta, a journalist. The tests will revive the debate about the true origins of one of Christianity's most prized but mysterious relics and are likely to be hotly contested by sceptics. Scientists, including Prof Fanti, used infra-red light and spectroscopy – the measurement of radiation intensity through wavelengths – to analyse fibres from the shroud, which is kept in a special climate-controlled case in Turin. The tests dated the age of the shroud to between 300 BC and 400 AD. The experiments were carried out on fibres taken from the Shroud during a previous study, in 1988, when they were subjected to carbon-14 dating. Those tests, conducted by laboratories in Oxford, Zurich and Arizona, appeared to back up the theory that the shroud was a clever medieval fake, suggesting that it dated from 1260 to 1390. But those results were in turn disputed on the basis that they may have been skewed by contamination by fibres from cloth that was used to repair the relic when it was damaged by fire in the Middle Ages. The mystery of the shroud has baffled people for centuries and has spawned not only religious devotion but also books, documentaries and conspiracy theories. The linen cloth appears to show the imprint of a man with long hair and a beard whose body bears wounds consistent with having been crucified. Each year it lures hundreds of thousands of faithful to Turin Cathedral, where it is kept in a specially designed, climate-controlled case. Scientists have never been able to explain how the image of a man's body, complete with nail wounds to his wrists and feet, pinpricks from thorns around his forehead and a spear wound to his chest, could have formed on the cloth. The Vatican has never said whether it believes the shroud to be authentic or not, although Pope Emeritus Benedict XVI once said that the enigmatic image imprinted on the cloth "reminds us always" of Christ's suffering. His newly-elected successor, Pope Francis, will provide an introduction when images of the shroud appear on television on Saturday, the day before Easter Sunday, which commemorates the resurrection.
The Pope has recorded a voice-over introduction for the broadcast on RAI, the state television channel. "It will be a message of intense spiritual scope, charged with positivity, which will help (people) never to lose hope," said Cesare Nosiglia, the Archbishop of Turin, who also has the title "pontifical custodian of the shroud". "The display of the shroud on a day as special as Holy Saturday means that it represents a very important testimony to the Passion and the resurrection of the Lord," he said.

For the first time, an app has been created to enable people to explore the holy relic in detail on their smart phones and tablets. The app, sanctioned by the Catholic Church and called "Shroud 2.0", features high definition photographs of the cloth and enables users to see details that would otherwise be invisible to the naked eye. "For the first time in history the most detailed image of the shroud ever achieved becomes available to the whole world, thanks to a streaming system which allows a close-up view of the cloth. Each detail of the cloth can be magnified and visualised in a way which would otherwise not be possible," Haltadefinizione, the makers of the app, said. wallstreeter43
CR: is that meant as a spoof? I can't tell. KF kairosfocus
KN: sorry to hear that, hope you have a turnaround. KF kairosfocus
nightlight your claims are simply gussied up Methodological Naturalism. Particularly this claim:
The problem with 'consciousness' is that there is nothing in models (M) [the Model space (formalism & algorithms)] of the current natural sciences that corresponds to it. It is simply an extraneous element that no one knows how to model, or what it does and, if it does anything at all (i.e. if it's not a non-functional epiphenomenon), how it does it (e.g. how does it affect matter-energy, what are the rules & limits).
Yet, despite whatever limits you think one should place on scientific inquiry prior to investigation, it is clear exactly where consciousness (and free will) come into play for ID:
"Nonphysical formalism not only describes, but preceded physicality and the Big Bang Formalism prescribed, organized and continues to govern physicodynamics." http://www.mdpi.com/2075-1729/2/1/106/ag The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010 Excerpt: “If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise.”,,, After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: “No non trivial algorithmic/computational utility will ever arise from chance and/or necessity alone.” http://www-qa.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html Is Life Unique? David L. Abel - January 2012 Concluding Statement: The scientific method itself cannot be reduced to mass and energy. Neither can language, translation, coding and decoding, mathematics, logic theory, programming, symbol systems, the integration of circuits, computation, categorizations, results tabulation, the drawing and discussion of conclusions. The prevailing Kuhnian paradigm rut of philosophic physicalism is obstructing scientific progress, biology in particular. There is more to life than chemistry. All known life is cybernetic. Control is choice-contingent and formal, not physicodynamic. http://www.mdpi.com/2075-1729/2/1/106/ Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomenon: the creation of new information. “… no operation performed by a computer can create new information.” — Douglas G. Robertson, “Algorithmic Information Theory, Free Will and the Turing Test,” Complexity, Vol.3, #3 Jan/Feb 1999, pp. 25-34.
nightlight pay particular attention to the 'free will' aspect of creating new information in the last statement as you read about the following experiment: In our best mathematical description of reality (quantum mechanics), which is verified to something like 13 decimal places, it is now found that one cannot ever improve quantum theory over its present state, with the only a priori assumption being that measurements (conscious observations) can be freely chosen:
Can quantum theory be improved? – July 23, 2012 Excerpt: However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (conscious observation) parameters can be chosen independently (free choice, free will, assumption) of the other parameters of the theory.,,, ,, the experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power,,,. http://phys.org/news/2012-07-quantum-theory.html
Now nightlight, this is completely unheard of in science as far as I know. i.e. That a mathematical description of reality would advance to the point that one can actually perform an experiment showing that your current theory will not be exceeded in predictive power by another future theory is simply unprecedented in science! Moreover nightlight, do you want to be the one to go tell these researchers, who have reached such an unprecedented milestone in the history of science with the most successful theory in science, that they have to remove their a priori assumptions of conscious observation and free will from their experiment because you personally consider it 'unscientific' to include consciousness and free will in any scientific explanation??? nightlight, if you really want to worry about something being within the realm of 'science' or not, may I suggest a much more fruitful avenue of investigation?:
Neo-Darwinism isn't 'science' because, besides its stunning failure at establishing empirical validation in the lab, it has no mathematical basis; furthermore, neo-Darwinism can never have a mathematical basis because of the atheistic insistence on the 'random' variable postulate at the base of its formulation (a postulate which prevents 'mathematical certitude' from ever being reached!): https://uncommondesc.wpengine.com/evolution/the-equations-of-evolution/#comment-450540
Verse and music:
John 1:1 In the beginning was the Word, and the Word was with God, and the Word was God. Whom Shall I Fear (God Of Angel Armies) http://www.youtube.com/watch?v=qOkImV2cJDg
bornagain77
bornagain77 (#46):
nightlight: "Consciousness" is outside of present natural science - there is no scientific model that can tell you 'this arrangement of matter-energy is conscious and this one is not.'
bornagain77: Actually the 'hard problem of consciousness' is only a problem for atheistic materialists...

You seem to be confusing the concepts 'scientific theory' and 'direct experience' -- consciousness is a non-problem for the latter (one may say, it is the latter), while it is a problem for the current natural science. That doesn't mean it will be a problem for future natural science. For example, it is quite conceivable that some pregeometry model of physics (such as those of Wolfram's NKS, see post #19) can be made to work on binary states, labeled say as +1 and -1. Some such models already exist, capable of reproducing some key equations of physics (such as the Schrodinger, Dirac and Maxwell equations, special theory of relativity, etc.), albeit via disparate, fragmented models tailored to each equation. Within such models, our "elementary" particles are properties of the patterns on this more elemental computing substratum (analogous to gliders in Conway's Game of Life).

In any case, when/if a general enough model of that type is found, then one could postulate that the two fundamental states +1 and -1 correspond to the two elemental attributes of mind stuff, e.g. +1 = (reward, happiness, pleasure...) and -1 = (punishment, unhappiness, pain...). In such a model, the activation of these elements with their +1/-1 states (dynamically changing by the rules of the model) would correspond to specific qualia; e.g. when your red-sensing neurons enter dominant state +1, one could interpret 'what is it like to see redness' as: it is like elements X1, X2,... being in the 'happy' (reward, recognition) state. Similarly, perceiving roundness would be like having elements Y1, Y2,... entering their happy state. If the nervous system is wired to combine simultaneous signals +1 from X and Y into +1 for some Z elements, then Z1, Z2,... entering the 'happy' state of recognition would be the answer of the model to what it is like to see a 'red circle', etc. Thus the composition of qualia would correspond to new elements going active (+1 state, recognition) when some subsets of other elements are active.

Of course, the postulates of some theory A cannot tell you anything about their origin or 'deeper meaning'. For that you need some other theory B which can deduce the postulates of A from some simpler postulates of its own. But to have a science which can deduce something, you need some constructive assumptions taken for granted to start with. Nothing follows from an empty assumption set. Nothing also follows from assumptions that cannot be put into an algorithmic form (formulas, programs), so that numbers can be computed for comparison with corresponding empirically obtained numbers (see previous post #49 about the general scheme sketched here).

Hence, science and direct experience are neither synonyms nor equivalents nor substitutes for each other. While future models of mind stuff, such as the sketch above, may provide a coherent scientific model of mind stuff, they still will not answer for the color-blind 'what is it really like to see real redness' (as opposed to grayness). On the other hand, as in most science, such models of mind stuff may help answer interesting questions which are beyond the reach of our direct experience, such as what happens with mind stuff after one dies and the body decays into organic dust.
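To make the composition rule just sketched concrete, here is a purely illustrative toy (the element names, the wiring and the all-inputs-active rule below are hypothetical placeholders, not part of any worked-out model):

```python
# Toy illustration only: binary "mind stuff" elements with states +1
# (recognition/reward) and -1 (rest), and a composition rule in which a
# higher-level element goes to +1 when all of its input elements are +1.
# Element names and wiring are hypothetical placeholders.

def compose(inputs):
    """A composite element enters +1 only when every input element is at +1."""
    return +1 if all(s == +1 for s in inputs) else -1

X = [+1, +1, +1]   # "red-sensing" elements, all in the +1 (recognition) state
Y = [+1, +1]       # "roundness-sensing" elements, also recognizing

Z = compose(X + Y)                      # higher-level "red circle" element
print("Z (red circle) state:", Z)       # prints +1: the composite quale is active

Y[0] = -1                               # one roundness element drops out
print("Z after losing an input:", compose(X + Y))   # prints -1
```

Any real model of this kind would of course need far richer dynamics and learning rules than this single AND-like composition step.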
For example, in the Planckian network models (see #19, #35), augmented with binary +1, -1 states interpreted as elemental mind stuff ingredients, our atoms & molecules are technologies of those networks, which in these models are conscious entities. The basic 'mind stuff' composition rules sketched above would imply that lower entities have much narrower and sharper (hyper-real qualia) conscious experiences than higher level entities (which inherit & combine those more basic elements into broader, more diffuse forms). The computational speeds and computing power of the lower level entities are also much greater (e.g. Planckian networks would be 'ounce for ounce' 10^80 times more powerful than our technologies or brains), thus their experiences would unfold much, much more quickly and more accurately. Hence, after death, as the decay progresses downward, from whole organism to organs, to tissues, to cells, to molecules, the experience would evolve through increasingly more hyper-real, hyper-sharp, rapidly accelerating forms. Each stage would appear more real than the previous one, which would seem like a vague, rapidly fading dream. In a way, at the end one would be 'back with the crew' that is normally running the body moment to moment as their large scale technology, but which is now being dismantled (e.g. for possible reuse in improved future models of their technologies). Of course, absent complete theories of the above kind, these descriptions are presently mere speculation meant to illustrate the complementary roles and mutual enrichment of direct experiences and their scientific models. There is no a priori reason to presume, as you seem to do, that there will never be useful and productive scientific models of mind stuff.

bornagain77: Indeed quantum mechanics has now confirmed the Theists contention that 'consciousness' precedes material reality...

I just happen, as a theoretical physicist, to know a bit on this subject, having done a master's thesis about none other than Quantum Paradoxes (measurement problem, wave function collapse, Bell inequalities, etc). A word of advice would be to stay away from this tar pit. It is a completely speculative field based on the gratuitous/optional 'strong projection' postulate for composite systems, i.e. that quantum measurement of the composite system observable [A1] x [A2] for two sub-systems A1 and A2, which have local observables [A1] and [A2], consists in some conjectured "ideal quantum apparatus" of independent, local measurements of [A1] and [A2]. In reality (the actual experiments), to obtain results for the composite [A1] x [A2], results obtained locally on [A1] are filtered based on the outcomes of remote local measurements of [A2] on the other system. Of course, with such a non-local selection procedure, what is left may appear non-local.

The critical experiment, violation of Bell inequalities, on which that whole speculative superstructure rests, has resisted numerous experimental verification attempts for over half a century. There is a whole new euphemistic language which evolved to describe this long chain of experimental failures while maintaining the pretense that all will be fine as soon as they can build the "ideal quantum apparatus". E.g. you will see in some science news that some team is getting new funding to try performing the first 'loophole free' test of Bell inequality violation, meaning an experiment which actually violates the inequalities, hence acknowledging in a back-handed way (to the experts who can decipher the euphemistic language) that all experiments so far have failed.

While the whole field (quantum magic) is fairly profitable to the main promoters, since there are always dupes who will invest in the magical quantum computers and other promised miracle devices, there is also a long chain of heretics (including pioneers of quantum theory such as Einstein, Schrodinger, de Broglie and others) who knew better and kept pointing out the 'loopholes' in the latest claims. One of the current era fellow 'heretics', professor Emilio Santos, compares the experimental situation in this field (paper) to the pursuit of perpetual motion devices by pseudo-scientific hucksters of a few centuries ago, before the understanding of energy and entropy laws made that whole profitable field go extinct. All those demonstrations had some loophole of one kind or another, and a loophole-free device was always just around the corner, waiting for a little bit smoother cogs, a bit stronger magnets, a bit more elastic springs,... etc., just like present 'quantum magic' experiments keep waiting for a little bit better detectors, better sources, better polarizers,... etc. before achieving their holy grail, the 'loophole free' test.

In short, no, quantum theory shows or implies nothing of the sort about mind stuff. It's pure speculation built on extremely flimsy measurement postulates and loophole-riddled experimental support (or in plain language, a long chain of failed experiments). Whatever you do, don't invest in 'quantum computing', unless you are sure you can resell the shares to other dupes at a profit before the company peddling them disappears. nightlight
In re: Kairosfocus @ 33:
KN: Are you sure you want a kick-back? So far, the “pay” has on the whole been in hate sites, outing tactics, threats against families, slanders and worse. Might cost you being expelled, too.
You're a lousy salesperson, KF. Maybe I should sell my services directly to the Discovery Institute -- they can hire me as the Loyal Opposition. As for being "expelled," I assure you, my colleagues wouldn't care one bit if they knew I was posting here. And my career is already quickly going down in flames, so even if they did care, it wouldn't really matter. But if I hope to salvage what little of it remains, I really should put my energies into my real work, and comment here a lot less. Kantian Naturalist
lastyearon, I'm an ID proponent and you're right that computers are not intelligent (by human standards), but they do have properties of intelligence since they are in various ways a reflection of our intelligence. It's not incorrect to say they mimic intelligence in some way. I don't see the need to get upset at this, since this proves only intelligence can be responsible for intelligence, which only makes ID's case stronger. I disagree with you that only God and humans are intelligent. There are obviously animals which are intelligent. computerist
Nightlight, I am interested in your thoughts on a subject. How can science as a whole become complacent with the idea, for instance, that in order to confirm the existence of a fire, one would need such things as a supply of fuel, an oxidizing agent, a heat source, and finally the chemical process generally known as combustion (i.e. the rapid oxidation of fuel)? Using the parameters which you indicate must be the basis of a scientific conclusion (i.e. model space, empirical procedures and facts, operational rules mapping numbers between the model and the empirical), how does science come to such a conclusion about fire? Or is such a conclusion arrived at through other appropriate scientific means -- observation, materials knowledge, etc.? Also, where does the complacency in such knowledge come from; is it simply that no one challenges that these are indeed the sufficient and necessary material means to a fire? And what should happen if someone should challenge that notion -- what would be required of them? I hope this question makes at least a modicum of sense. I am asking in complete sincerity, and look forward to your response if you have time. Upright BiPed
lastyearon, everything I type is describable in terms of physics. Computers don't run on magic, and they don't just poof text into existence. Is the text on your screen the result of electrons, or invisible pink unicorns? Is the ink on a newspaper physical, or magical? Stop pretending that agency is a causal force capable of explaining some of the things we observe in day-to-day life. Yes, I am the result of physics and chemistry. Since we can observe physical/chemical processes in the cell, we know beyond a shadow of a doubt that physics and chemistry and a few million years can produce just about anything in the realm of biology. Case closed, nothing to see here, creationists go home. Science is working to solve these problems, not some bronze-age sky god. Chance Ratcliff
Chance Ratcliff @51: No intelligence is required anywhere in the causal chain, and is superfluous to descriptions of computer systems and the software which runs on them.
Except, of course, when it was constructed and programmed... by an intelligent human. CentralScrutinizer
Well, ChanceRatcliff, it sounds like you are an atheist-materialist-darwinist. And therefore you believe you're just a bunch of quarks and gluons randomly bumping into each other. Do you know of any quarks or gluons that can type out a blog post, let alone have a coherent idea? Yeah, I thought not. Therefore everything you say is obviously meaningless, and I'm going to stop trying to understand it. lastyearon
lastyearon, computers are intelligent. They solve problems, run simulations, recognize faces, and break cryptographic codes all on their own. No intelligence is required anywhere in the causal chain, and is superfluous to descriptions of computer systems and the software which runs on them. There's nothing spooky about intelligence, and there's nothing that intelligence can accomplish that deep time and stochastic processes cannot also accomplish, if not in this universe, then in some other one. You can't prove that wind and rain, geological processes, and energy from the sun cannot eventually produce buildings, vehicles, and machinery. Computers are perfectly intelligent, even more so than humans. Not only can computers play chess better than humans, but they like chess, can learn to play on their own, and can understand the nuances of strategy, even appreciate the quality of an expensive chess set. It's just ridiculous to protest that this is anything but obvious. Chance Ratcliff
nightlight:
Just because something can be repeated in the lab and reverse engineered into biochemical processes, that doesn’t imply that the underlying processes are ‘unintelligent’ or ‘random’.
Yes it does. Intelligence is Supernatural, and only humans have it. Everything else is just quarks and gluons bouncing around randomly. Sometimes atheist-materialist-darwinists try to pretend that a process or a thing can be intelligent. Like when they say that evolution is an intelligent process that can mimic design. Or they say that a computer can be intelligent enough to be a good chess player. But that's laughable, because obviously since evolution is not supernatural, its just molecules randomly bumping into each other. And a computer is obviously just electrons being told what to do by intelligent humans. Only humans (and God) are intelligent. lastyearon
@kairosfocus #41:
nightlight: basing ID on "conscious" intelligence is like building a house on a tar pit, resulting in endless semantic debates advancing nowhere.
kairosfocus: Actually, not. The empirical fact of consciousness joined to intelligence is undeniable, on pain of self referential absurdity. Otherwise, we may properly ask, where is the text coming from?

The 'usefulness' I was talking about is whether the injection of 'consciousness' or 'mind(stuff)' as an attribute of the intelligence behind biological designs is useful for the ID perspective in becoming a legitimate scientific discipline, from its present ambiguous position. In contrast, you are talking about its usefulness for intuitive understanding and heuristic purposes (it is, of course). While the association between mind stuff (consciousness) and intelligence is intuitively self-evident and heuristically useful, basing the scientific postulates of ID on mind stuff is counter-productive for ID becoming a scientific discipline.

In any natural science, you need 3 basic elements:

(M) -- Model space (formalism & algorithms)
(E) -- Empirical procedures & facts of the "real" world
(O) -- Operational rules mapping numbers between (M) and (E)

While in physics these are all explicitly and cleanly delimited components, understood and discussed by physicists, they are also present in other natural sciences at least implicitly. Informally, the Model space (M) is a 'scaled down' model of empirical reality (E) in which the 'model reality' can be 'run' as algorithms producing numbers and compared via (O) with facts of empirical reality (E). That type of relation between these elements is essential for any natural science.

The problem with 'consciousness' is that there is nothing in models (M) of the current natural sciences that corresponds to it. It is simply an extraneous element that no one knows how to model, or what it does and, if it does anything at all (i.e. if it's not a non-functional epiphenomenon), how it does it (e.g. how does it affect matter-energy, what are the rules & limits). Injecting the mystery C(onsciousness) element into the (M) space of ID (as a natural science) as its foundation stone merely drags ID into the same scientific tar pit in which the element C is now.

The sad irony is that all that the ID argument implies regarding the origin of life and biological evolution (and possibly fine tuning) is an 'intelligent agency', readily modeled via computational & algorithmic modeling toolsets, to replace the crude 19th century mechanistic models of neo-Darwinism (ND). Hence, I see the ambitious overreach by some key ID proponents to smuggle 'mind' and other presently unscientific elements into its model space (M) as a great disservice to the otherwise highly convincing arguments ID makes. If someone wanted to sabotage the ID critique of ND-E, diverting it into endless philosophical debates about the nature of 'mind stuff' would be among the most effective derailment weapons.

Of course, neo-Darwinian theory has smuggled the gratuitous, unfalsifiable "random" mutation into its model space (M) for religious reasons as well (which, via equivocation with 'aimless' and 'purposeless', props up their atheistic religion). That would normally be its major weakness against a purely scientific ID argument, since they cannot demonstrate this 'randomness' as a source of novelty even in the case of "micro-evolution", let alone for macro-evolution or the origin of life.
Instead of hitting them on that Achilles heel, their completely unfalsifiable religious assertion of randomness, the present ID advocates needlessly concede the power of random mutation in 'micro-evolution' while still hanging onto macro-evolution and the origin of life. Just because something can be repeated in the lab and reverse engineered into biochemical processes, that doesn't imply that the underlying processes are 'unintelligent' or 'random'. For example, just because a chess playing computer program can be reverse engineered and explained in full detail, that doesn't imply that it is playing random moves or that it is just some aimless shuffling of electrons that is responsible for its intelligent behavior. The chess program remains a highly intelligent process no matter how easily one can replicate it in the lab and explain its operation. Intelligence, despite its manifestations appearing sometimes astonishing and mystifying to observers, is not a synonym for inexplicable or irreproducible. It is sad to see how easily some on the ID side have been hoodwinked by the neo-Darwinian sleight of hand that the two are synonymous in the case of micro-evolution, needlessly cornering themselves onto a continually shrinking island of the inexplicable and irreproducible. nightlight
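To make the (M)/(E)/(O) scheme above concrete, here is a minimal illustrative sketch; every name and number in it is a hypothetical placeholder rather than part of any actual model:

```python
# A minimal sketch of the (M)/(E)/(O) scheme; all names and numbers here
# are hypothetical placeholders, not part of any actual model.

# (M) Model space: a toy law run as an algorithm (constant acceleration)
def model_prediction(t, g=9.8):
    return 0.5 * g * t**2               # predicted fall distance, model units

# (O) Operational rules: map model output onto the measured quantity
def to_measured_units(x_model):
    return x_model                      # identity here; in general a calibration map

# (E) Empirical procedures & facts: invented measurements of fall distance
empirical = {1.0: 4.9, 2.0: 19.4, 3.0: 44.3}

# Confront (M) with (E) through (O)
for t, observed in sorted(empirical.items()):
    predicted = to_measured_units(model_prediction(t))
    print(f"t={t}s  model={predicted:.1f}  measured={observed:.1f}  "
          f"residual={observed - predicted:+.2f}")
```

The only point of the sketch is the shape of the loop: (M) is run as an algorithm, (O) maps its output onto measurable quantities, and the result is confronted with (E).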
Optimus can think clearly and write beautifully. That is a hard combination to beat. StephenB
Thanks for the post, KF. I'll try to respond to a few of the comments later in the day when I have the time... Optimus
nightlight you claim that
“Consciousness” is outside of present natural science — there is no scientific model that can tell you ‘this arrangement of matter-energy is conscious and this one is not.’
Actually the 'hard problem of consciousness' is only a problem for atheistic materialists,,,
Darwinian Psychologist David Barash Admits the Seeming Insolubility of Science's "Hardest Problem" Excerpt: 'But the hard problem of consciousness is so hard that I can't even imagine what kind of empirical findings would satisfactorily solve it. In fact, I don't even know what kind of discovery would get us to first base, not to mention a home run.' David Barash - Materialist/Atheist Darwinian Psychologist http://www.evolutionnews.org/2011/11/post_33052491.html Neuroscientist: “The Most Seamless Illusions Ever Created” - April 2012 Excerpt: We have so much confidence in our materialist assumptions (which are assumptions, not facts) that something like free will is denied in principle. Maybe it doesn’t exist, but I don’t really know that. Either way, it doesn’t matter because if free will and consciousness are just an illusion, they are the most seamless illusions ever created. Film maker James Cameron wishes he had special effects that good. Matthew D. Lieberman - neuroscientist - materialist - UCLA professor http://darwins-god.blogspot.com/2012/04/neuroscientist-most-seamless-illusions.html
,,,the 'hard problem of consciousness' is not a problem for Theists, since Theists never claimed that consciousness was reducible to matter-energy. Indeed quantum mechanics has now confirmed the Theists' contention that 'consciousness' precedes material reality:
1. Consciousness either preceded all of material reality or is an 'epiphenomenon' of material reality.
2. If consciousness is an 'epiphenomenon' of material reality then consciousness will be found to have no special position within material reality. Whereas conversely, if consciousness precedes material reality then consciousness will be found to have a special position within material reality.
3. Consciousness is found to have a special, even central, position within material reality.
4. Therefore, consciousness is found to precede material reality.

Four intersecting lines of experimental evidence from quantum mechanics that show that consciousness precedes material reality (Wigner's Quantum Symmetries, Wheeler's Delayed Choice, Leggett's Inequalities, Quantum Zeno effect): https://docs.google.com/document/d/1G_Fi50ljF5w_XyJHfmSIZsOcPFhgoAZ3PRc_ktY8cFo/edit

The Galileo Affair and the true "Center of the Universe" https://docs.google.com/document/d/1BHAcvrc913SgnPcDohwkPnN4kMJ9EDX-JJSkjc4AXmA/edit
Perhaps, nightlight, instead of the futility of trying to explain how consciousness emerged from a material basis, you should finally be 'scientific' and admit that your worldview is false?
"As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter." Max Planck - The Father Of Quantum Mechanics - Das Wesen der Materie [The Nature of Matter], speech at Florence, Italy (1944)(Of Note: Max Planck Planck was a devoted Christian from early life to death, was a churchwarden from 1920 until his death, and believed in an almighty, all-knowing, beneficent God.
bornagain77
nightlight
But the most problematic conflation common in the ID circles (which was strangely omitted by KN), is the blurring between the map and the territory, using interchangeably the generic term “evolution” for: (a) transformation process of biological systems (b) neo-Darwinian theory of (a) (modern synthesis) (c) other theories of (a), including intelligently guided
When the discussion proceeds beyond the point of abbreviated terms, it is the ID proponent that always makes the crucial distinctions and the Darwinist or Christian Darwinist who either conflates terms or twists their common meanings. Here are five quick examples (I could provide several more).

[a] ID recognizes the validity of Darwin's Special Theory of Evolution and argues against Darwin's General Theory of Evolution. Darwinists and TEs, on the other hand, argue for Darwin's General Theory on the basis of the evidence for Darwin's Special Theory, implying that there is no difference between the two.

[b] ID emphasizes the plausibility of guided (macro) evolution and the implausibility of unguided (macro) evolution. TEs, on the other hand, argue for unguided evolution while using the rhetoric of guided evolution. For their part, Darwinists argue for unguided evolution and appeal to the rule of Methodological Naturalism as a means of ruling out evidence for guided evolution--with the approval of TEs.

[c] ID believes that the researcher should follow where the evidence leads. Darwinists and TEs insist that some evidence should not be admitted. If Moses returned to part the waters of the Red Sea, advocates for Methodological Naturalism would force the scientist to assume that nature was acting on its own power.

[d] ID acknowledges the difference between Philosophical Naturalism and Methodological Naturalism, but rightly argues that the practical difference is insignificant since science would proceed exactly the same way regardless of which approach is used. Darwinists and TEs dramatize the difference, pretending that the latter is permissible on the grounds that it is not precisely the former.

[e] ID explains, truthfully, that Methodological Naturalism is a recent development that no one had ever heard of prior to the 20th century. Materialist Darwinists and Christian Darwinists, either through willful ignorance or malice, make fraudulent claims about MN's long history, even though it has no history at all. StephenB
the other mouth responds to kairosfocus:
The doubly odd thing is that AFAIK KF has no explanation for living cells either, other then “they were designed”.
How is that odd? That is a HUGE determination and affects all subsequent research. Even Dawkins recognizes that it changes everything. How odd for these "skeptics" to not be able to grasp that simple fact. And they sure as hell can't grasp the fact that the OoL and its subsequent evolution are directly linked. Meaning the only way darwinian evolution is responsible for the diversity of life is if blind and undirected chemical processes produced the first population(s) of living organisms. So it is rather difficult to have a discussion with these people when they can't even grasp simple concepts. Joe
BV, also please note, I see a lockout on the site link. BTW, you can set up a link to a reference site or page keyed to by your name, at UD. Try clicking on my handle to see. KF kairosfocus
BV: Welcome to UD, I don't remember seeing you before. You will be happy to note that from the first technical design theory work in the 1980's, there has been a careful distinction between understanding that design as process can be studied on reliable signs thereof, and arguments as to who or what may be responsible for such designs. Inference that the designer of note in particular cases may or is likely to be God, is not a part of Design theory as theory. In the case of living forms, it is openly acknowledged that the evidence is such that -- as I have often put it -- any competent molecular microbiology nanotech lab of sufficiently many generations beyond Venter et al would be a sufficient cause. I suspect real artificial life will be done in such a lab before this century is out. The evidence of a fine tuned cosmos reflecting design that sets up cell based life, is of a different order. And that does much more directly raise questions about a mind behind the observed cosmos and even speculative multiverses. KF kairosfocus
Hi NL: Popping up for a moment:
basing ID on “conscious” intelligence is like building a house on a tar pit, resulting in endless semantic debates advancing nowhere.
Actually, not. The empirical fact of consciousness joined to intelligence is undeniable, on pain of self referential absurdity. Otherwise, we may properly ask, where is the text coming from? We know such as a matter of course and it is through such, that we access all other facts, however we may not notice that. So, it is proper to highlight that there are some signs that show such intelligence at work, directly or indirectly. And, to reason on such. KF kairosfocus
kf @4: Very interesting link.
In their new book The Language of Science and Faith, Karl Giberson and Francis Collins argue that "the distinction between micro and macro evolution is arbitrary." (p. 45, emphases in original) As a result, they assert that "macroevolution is simply microevolution writ large: add up enough small changes and we get a large change."
Particularly in light of the vociferous statements by some evolutionist proponents on this site in recent weeks (Nick, I believe?) that no competent evolutionist argues that macroevolution is just microevolution extrapolated over time. There is something unique and different about macroevolution, they claimed. Not so, say Giberson and Collins. And as we've known for some time, the real consensus is that macroevolution is just microevolution writ large. Which thread was that again? Might be worth revisiting . . . BTW, Mung, I hope this helps answer your nagging and heart-felt question about macroevolution. Perhaps there is a reason Nick has gone so silent? There isn't anything special about macroevolution -- it is just microevolution writ large. :) Eric Anderson
Optimus: Well said. Thanks kf for headlining Optimus' excellent comment. ----- KN @23: LOL! Where would we be without you!? :) Tell you what. Maybe we'll give you a kickback if you can accurately write a single-paragraph definition/description of the design inference, without any misstatements or misrepresentations. Better yet, get one of your friends to do so, say Elizabeth Liddle, and if she can manage it I'd even pitch in to help fund a prize. :) ----- nightlight: The word "evolution" has numerous meanings. However, in popular usage, in the press, in science textbooks, in discussions generally, it is understood to mean a purposeless, blind, undirected, unguided process. We don't need to say that every time the word is used -- it is understood. Only in the occasional cases in which we are talking about some kind of programmed, directed, or planned biological response or development do we need to qualify the word to something like "guided evolution" or "designed evolution." So I don't think it is a big deal that Cornelius Hunter, or anyone else for that matter, doesn't put an asterisk and a long explanatory footnote every time they use the word. We all know what he is referring to. WMJ @5 is right -- it is pretty easy to understand without semantic games. That is, unless someone is on a mission to misunderstand. Eric Anderson
@SteRusJon: nightlight objects to ID conflation of various uses of the term "evolution" and then proceeds to conflate the law driven operation of a chess program as an intelligent agent with the creative and innovative capabilities of the truly intelligent agent that designed it. How ironic!

It's ironic only from the fragmented perspective you seem to have. As explained in the post above, it is a perfectly coherent position in the bottom-up, inside-out computational perspective, where the physical, chemical & biological laws are results of computation by the underlying layer (such as Planckian networks in some pregeometry models of physics). In that Matrix-like perspective, our "elementary" particles and the laws they obey are large scale technologies designed and constructed by the underlying Planckian scale networks in the process of extending harmonization at ever larger scales, as if in pursuit of the perfect bliss of Leibniz monads, or Teilhard de Chardin's Omega Point. This is analogous to us constructing ever larger technologies for harmonizing our activities at ever larger scales (e.g. across the globe via the internet). The motions of the molecules making up your body and of those making up mine, which may be on opposite sides of the globe, are meshing together, acting in harmony (disagreements notwithstanding) for a few moments as we participate in the discussion here. nightlight
As an agnostic, my version of intelligent design is not identical with that of most religious people. The participation by some deity in any creative process can be neither confirmed nor denied, so I respectfully disagree with most theists. I'd like to respectfully disagree with the materialists, but I find it impossible to respect intolerance. Most scientists are open minded, but a few high-profile Neo-Darwinists seem more interested in waging war against the God concept than in understanding evolution. Materialism is as sacred a concept to some atheists as god is to theists. Berthajane Vandegrift http://myauthorsite.com/ Berthajane Vandegrift
Dr Liddle is making a category error. Along with "chance" and "necessity", design (artifice) is a categorical description of the behavior of certain phenomena. "Evolution", if taken outside of ideological assumption, only means "heritable variation and survival differential". These are processes, or mechanisms, not fundamental categorical descriptions of the behavior of phenomena. "Evolution", then, is a set of processes that move A to B. The question that ID asks (and Darwinists do not) is if necessity and chance provide a sufficient categorical description of how B came into existence via evolution, or if design is a necessary part of the evolutionary causal description. Therefore, saying that "design" and "evolution" produce "very similar complex, functional objects" is a categorical error, and begs the question: is chance and necessity a sufficient description of the process (whether you call it "evolution" or not) of moving A to B? Or, is design required? William J Murray
nightlight: chess playing computer program -- it is an intelligent process, superior (i.e. more intelligent) in this domain (chess playing) to any human chess player.
bornagain77: save for the fact that you are severely conflating the distinct entity of conscious intelligence with the brute force number crunching power of computers.

"Consciousness" is outside of present natural science -- there is no scientific model that can tell you 'this arrangement of matter-energy is conscious and this one is not.' Hence, anything anyone claims about it is an opinion. I find the Leibniz-Spinoza type of panpsychism the most coherent view on the subject, and in that perspective the above distinction is an empty semantic quibble that adds nothing. Hence basing ID on "conscious" intelligence is like building a house on a tar pit, resulting in endless semantic debates advancing nowhere. Anything resting on 'consciousness' as its foundation is automatically outside of science, leaving it at best in the realm of philosophy.

In contrast, "computation" and "algorithms" are scientifically and technologically well accepted concepts which suffice in explaining anything attributed to the type of intelligence implied (via the ID argument) by the complexity of biological phenomena. Examples of behaviors covered by such computational & algorithmic models are goal directed, anticipatory behaviors, complex optimizations etc., exactly what ID is implying to be behind the functionality and evolution of biological systems, the origin of life or the fine tuning of physical laws and constants. So, why would you lay your foundation on the quicksand of 'conscious' intelligence, when perfectly sufficient and scientifically solid building blocks, such as computation and algorithms, already exist?

If Dawkins & Co. were to strategize and dream on how best to derail the ID objections to the hollowness of neo-Darwinian theory, they couldn't have dreamt a better way than to have you base the ID alternative on 'conscious' intelligence as its foundation -- it's a sure way to send you down a dead end road. It's a perfect red herring. In the meantime, they'll rejigger the semantics of their "mutation" and "adaptation" and "selection" so they fit the next advance, which will be in the computational and algorithmic models, as it is already understood by some far-sighted folks, such as James Shapiro and many at the Santa Fe Institute (Complexity Science).

bornagain77: As well you have a severe blind spot in that it is impossible to account for the origination of these chess playing programs in the first place without reference to an intelligent, conscious, agent(s).

Yes, they were designed by intelligent agents, called humans. Just as humans are designed by other intelligent agents, the cellular biochemical networks. Your body, including the brain, is a 'galactic scale' technology, as it were, designed and constructed by these intelligent networks, who are the unrivalled masters of molecular engineering (human molecular biology and biochemistry are a child's babble compared to the knowledge, understanding and techniques of these magicians in that realm). In turn, the biochemical networks were designed and built by even smaller and much quicker intelligent agents, Planckian networks, which are computing the physics and chemistry of these networks as their large scale technologies (our physics and chemistry are coarse grained approximations of the real laws being computed).
The net computational power in this hierarchy of smaller intelligent agents building ever larger ones increases as you go down toward smaller scales, with higher levels being tiny fluctuations, providing small computational corrections & refinements to the much larger and more powerful underlying computations. Hence, in a perspective of that type, the distinction you make above is again a mere semantic quibble. Since in panpsychism consciousness is a fundamental attribute of elemental entities at the ground level, it's the same consciousness (answering "what is it like to be such and such entity") which combines into and permeates all levels, from elemental Planckian entities through us, and then through all our creations, including social organisms. nightlight
Joe: has Dr Liddle provided observational evidence that blind chemical and physical processes can organise living cells with metabolic automata and built-in code-using von Neumann kinematic self-replication? That such, again unaided, per observation, can create the further FSCO/I to achieve novel body plans requiring 10 - 100 mn bits of code just for the genomes? If she has, invite her for me to submit a reply to the six month old darwinism essay challenge. After all, it is a free kick at goal I have promised to host as at Sept 23 last year, once one is submitted. KF kairosfocus
KN: Are you sure you want a kick-back? So far, the "pay" has on the whole been in hate sites, outing tactics, threats against families, slanders and worse. Might cost you being expelled, too. KF kairosfocus
In post 2 above, nightlight objects to ID conflation of various uses of the term "evolution" and then proceeds to conflate the law driven operation of a chess program as an intelligent agent with the creative and innovative capabilities of the truly intelligent agent that designed it. How ironic! SteRusJon
Lizzie continues her equivocation:
The problem is that they also look like the products of evolution,
Yes, Intelligent Design Evolution. The broken parts, the degenerated parts, look like unguided evolution.
and given that the prerequisites of evolution are present
Yes, but HOW those prerequisites arose would determine how evolution proceeded- by design or culled willy-nilly.
(self-replication with heritable variance in reproductive success) then there is no reason to postulate a designer for which we have no independent evidence.
Well the evidence for a designer wrt biology is independent of the evidence for a designer wrt physics. And if you require meeting the designer, then you ain't interested in science. However given that necessity and chance have been unable to explain what we observe, AND it fits the design criteria, we infer design. And yes, we do so tentatively, because that is just the nature of science. From "The Privileged Planet":
“The same narrow circumstances that allow us to exist also provide us with the best over all conditions for making scientific discoveries.”
“The one place that has observers is the one place that also has perfect solar eclipses.”
“There is a final, even more bizarre twist. Because of Moon-induced tides, the Moon is gradually receding from Earth at 3.82 centimeters per year. In ten million years [the Moon] will seem noticeably smaller. At the same time, the Sun’s apparent girth has been swelling by six centimeters per year for ages, as is normal in stellar evolution. These two processes, working together, should end total solar eclipses in about 250 million years, a mere 5 percent of the age of the Earth. This relatively small window of opportunity also happens to coincide with the existence of intelligent life. Put another way, the most habitable place in the Solar System yields the best view of solar eclipses just when observers can best appreciate them.”
Even Dawkins admits science can only allow so much luck. Yet it appears that absent Intelligent Design, it is all luck, even the emergence of the laws. Joe
Nightlight (19): Hence, MN would allow that chess playing program is an intelligent agency (agent). Therefore MN doesn’t exclude a hypothetical intelligent agency guiding processes of life and their evolution, either.
So that makes methodological naturalism ok? Wow! Box
Of note: This ability to 'instantaneously' know answers to complex problems has long been a very intriguing characteristic of some autistic savants; Is Integer Arithmetic Fundamental to Mental Processing?: The mind's secret arithmetic Excerpt: Because normal children struggle to learn multiplication and division, it is surprising that some savants perform integer arithmetic calculations mentally at "lightning" speeds (Treffert 1989, Myers 1903, Hill 1978, Smith 1983, Sacks 1985, Hermelin and O'Connor 1990, Welling 1994, Sullivan 1992). They do so unconsciously, without any apparent training, typically without being able to report on their methods, and often at an age when the normal child is struggling with elementary arithmetic concepts (O'Connor 1989). Examples include multiplying, factoring, dividing and identifying primes of six (and more) digits in a matter of seconds as well as specifying the number of objects (more than one hundred) at a glance. For example, one savant (Hill 1978) could give the cube root of a six figure number in 5 seconds and he could double 8,388,628 twenty four times to obtain 140,737,488,355,328 in several seconds. Joseph (Sullivan 1992), the inspiration for the film "Rain Man" about an autistic savant, could spontaneously answer "what number times what number gives 1234567890" by stating "9 times 137,174,210". Sacks (1985) observed autistic twins who could exchange prime numbers in excess of eight figures, possibly even 20 figures, and who could "see" the number of many objects at a glance. When a box of 111 matches fell to the floor the twins cried out 111 and 37, 37, 37. http://www.centreforthemind.com/publications/integerarithmetic.cfm bornagain77
“Computers are no more able to create information than iPods are capable of creating music.” Dr. Robert Marks Evolutionary Informatics - William Dembski and Robert Marks Excerpt: The principal theme of the lab’s research is teasing apart the respective roles of internally generated and externally applied information in the performance of evolutionary systems.,,, Evolutionary informatics demonstrates a regress of information sources. At no place along the way need there be a violation of ordinary physical causality. And yet, the regress implies a fundamental incompleteness in physical causality's ability to produce the required information. Evolutionary informatics, while falling squarely within the information sciences, thus points to the need for an ultimate information source qua intelligent designer. http://evoinfo.org/ "So, to sum up: computers can reshuffle specifications and perform any kind of computation implemented in them. They are mechanical, totally bound by the laws of necessity (algorithms), and non conscious. Humans can continuously create new specification, and also perform complex computations like a computer, although usually less efficiently. They can create semantic output, make new unexpected inferences, recognize and define meanings, purposes, feelings, and functions, and certainly conscious representations are associated with all those kinds of processes." Uncommon Descent blogger - gpuccio
bornagain77
nightlight you may have had a point in this comment,,,
MN doesn’t imply anything of the sort (at least as I understand it). As a counter example, consider a chess playing computer program — it is an intelligent process, superior (i.e. more intelligent) in this domain (chess playing) to any human chess player.
,,,save for the fact that you are severely conflating the distinct entity of conscious intelligence with the brute force number crunching power of computers. As well you have a severe blind spot in that it is impossible to account for the origination of these chess playing programs in the first place without reference to an intelligent, conscious, agent(s). Moreover,,,
Alan Turing and Kurt Godel - Incompleteness Theorem (As related to computers) and Human Intuition - video (notes in video description) http://www.metacafe.com/watch/8516356/ Are Humans merely Turing Machines? https://docs.google.com/document/d/1cvQeiN7DqBC0Z3PG6wo5N5qbsGGI3YliVBKwf7yJ_RU/edit
At the 11:50 minute mark of the following video, 21-year-old Magnus Carlsen, the world's top-rated chess player, explains that he does not know how he knows his next move of chess instantaneously, that 'it just comes natural' to him to know the answer instantaneously.
Mozart of Chess: Magnus Carlsen – video http://www.cbsnews.com/video/watch/?id=7399370n&tag=contentMain;contentAux A chess prodigy explains how his mind works – video Excerpt: What’s the secret to Magnus’ magic? Once an opponent makes a move, Magnus instantaneously knows his own next move. http://www.cbsnews.com/8301-504803_162-57380913-10391709/a-chess-prodigy-explains-how-his-mind-works Another reason why the human mind is not like a computer - June 2012 Excerpt: In computer chess, there is something called the “horizon effect”. It is an effect innate in the algorithms that underpin it. Due to the mathematically staggering number of possibilities, a computer by force has to restrict itself, to establish a fixed search depth. Otherwise the calculations would never end. This fixed search depth means that a ‘horizon’ comes into play, a horizon beyond which the software engine cannot peer. Anand has shown time and again that he can see beyond this algorithm-imposed barrier, to find new ways, methods of changing the game. Just when every successive wave of peers and rivals thinks they have got his number, Anand sees that one, all important, absolute move.” https://uncommondesc.wpengine.com/computer-science/another-reason-why-the-human-mind-is-not-like-a-computer/ Epicycling Through The Materialist Meta-Paradigm Of Consciousness GilDodgen: One of my AI (artificial intelligence) specialties is games of perfect knowledge. See here: worldchampionshipcheckers.com In both checkers and chess humans are no longer competitive against computer programs, because tree-searching techniques have been developed to the point where a human cannot overlook even a single tactical mistake when playing against a state-of-the-art computer program in these games. On the other hand, in the game of Go, played on a 19×19 board with a nominal search space of 19×19 factorial (1.4e+768), the best computer programs are utterly incompetent when playing against even an amateur Go player.,,, https://uncommondesc.wpengine.com/intelligent-design/epicycling-through-the-materialist-meta-paradigm-of-consciousness/#comment-353454 Computers simply cannot create information: No nontrivial formal utility has ever been observed to arise as a result of either chance or necessity. - David L. Abel: Excerpt: Decision nodes, logic gates and configurable switch settings can theoretically be set randomly or by invariant law, but no nontrivial formal utility has ever been observed to arise as a result of either. Language, logic theory, mathematics, programming, computation, algorithmic optimization, and the scientific method itself all require purposeful choices at bona fide decision nodes. https://uncommondesc.wpengine.com/intelligent-design/david-l-abel-%E2%80%9Cno-nontrivial-formal-utility-has-ever-been-observed-to-arise-as-a-result-of-either-chance-or-necessity-%E2%80%9D/ Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomenon: the creation of new information. "... 
no operation performed by a computer can create new information." -- Douglas G. Robertson, "Algorithmic Information Theory, Free Will and the Turing Test," Complexity, Vol.3, #3 Jan/Feb 1999, pp. 25-34. Can a Computer Think? - Michael Egnor - March 31, 2011 Excerpt: The Turing test isn't a test of a computer. Computers can't take tests, because computers can't think. The Turing test is a test of us. If a computer "passes" it, we fail it. We fail because of our hubris, a delusion that seems to be something original in us. The Turing test is a test of whether human beings have succumbed to the astonishingly naive hubris that we can create souls.,,, It's such irony that the first personal computer was an Apple. http://www.evolutionnews.org/2011/03/failing_the_turing_test045141.html At last, a Darwinist mathematician tells the truth about evolution - VJT - November 2011 Excerpt: In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.” https://uncommondesc.wpengine.com/intelligent-design/at-last-a-darwinist-mathematician-tells-the-truth-about-evolution/
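The 'horizon effect' mentioned in the chess excerpt above can be illustrated with a toy fixed-depth search; the game tree, move names and scores below are invented placeholders, not a real chess position or engine:

```python
# Toy illustration of a fixed search depth ("horizon"): node = (static_eval,
# children), with children None at a leaf and all evaluations given from the
# engine's viewpoint. The tree and its numbers are invented placeholders.

LEAF = None
tree = (0, {
    "grab_pawn": (+1, {                     # statically looks one pawn up...
        "quiet":      (+1, LEAF),
        "start_trap": (+1, {
            "forced": (-9, LEAF),           # ...but a piece is lost one ply deeper
        }),
    }),
    "quiet_move": (0, {
        "quiet": (0, LEAF),
    }),
})

def minimax(node, depth, maximizing):
    score, children = node
    if depth == 0 or children is None:
        return score                        # at the horizon: static evaluation only
    values = [minimax(c, depth - 1, not maximizing) for c in children.values()]
    return max(values) if maximizing else min(values)

def best_move(node, depth):
    _, children = node
    return max(children, key=lambda m: minimax(children[m], depth - 1, False))

print(best_move(tree, depth=2))   # 'grab_pawn'  -- the trap lies beyond the horizon
print(best_move(tree, depth=3))   # 'quiet_move' -- a deeper search sees the refutation
```

Within its fixed depth the engine happily grabs the pawn; only a search deep enough to push the horizon past the refutation avoids it.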
bornagain77
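As a side note on the "horizon effect" quoted above: the effect comes straight from the fixed search depth a chess engine has to impose. The following is a minimal, illustrative Python sketch of depth-limited minimax (the 'game' object is a hypothetical stand-in, not any particular engine's API); anything that happens more than max_depth plies ahead is simply invisible to such a search.

# Minimal depth-limited minimax sketch (illustrative only; the 'game' object
# with is_terminal / evaluate / legal_moves / apply is a hypothetical stand-in
# for a real engine's internals).
def minimax(position, depth, maximizing, game):
    if depth == 0 or game.is_terminal(position):
        return game.evaluate(position)  # static evaluation at the "horizon"
    values = (minimax(game.apply(position, m), depth - 1, not maximizing, game)
              for m in game.legal_moves(position))
    return max(values) if maximizing else min(values)

# Why the cut-off is unavoidable: with branching factor b and depth d the tree
# has roughly b**d leaves, which grows astronomically fast.
b, d = 35, 10  # ~35 legal moves per chess position, 10 plies deep
print(f"roughly {b**d:.2e} leaf positions at depth {d}")  # ~2.76e+15

The rough leaf count at the end shows why the depth limit, and hence the horizon, cannot be avoided: even 10 plies at ~35 legal moves per position already means on the order of 10^15 positions.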
More Lizzie nonsense:
What he demonstrates is that the distribution of characteristics among species does indeed form a tree (as previously demonstrated by Linnaeus) and a) shows that this is consistent with the hypothesis that it does in fact reflect a family tree (common ancestry)
Umm, Linnaean classification was based on a common design. Joe
Can anyone provide any evidence for unguided evolution producing CSI? Will Lizzie ever support her claims? I say 'No', she won't because she can't. But she will keep making them... Joe
A kick-back for a butt-kicking? Joe
This is at least the third or fourth Uncommon Descent post that's been presented as a direct response to something I've said. Don't I at least get a kick-back? Kantian Naturalist
Lizzie responding to WJM:
What there is a lack of is evidence that would distinguish the product of design from the product of evolution.
We can distinguish between design and unguided evolution.
The two processes result in very similar products...
Bald assertion.
(although as yet, human-designed products lack the ability to self-reproduce, except in virtual space, and self-replication is a necessary condition for evolution).
Your position cannot account for self-replication. And evolution can occur by design. And Joyce/ Lincoln designed RNAs capable of self-sustained replication.
My position is this: A. Evolution and design can both produce complex functional evidence.
Unguided evolution cannot produce anything. And the other mouth chimes in with more nonsense:
The only “evidence” for ID is at the OOL, what happens after is all down to “random chance” (sigh).
WRONG! If organisms were designed then random chance has very little to do with evolution. Organisms = design; evolution is by design, as in organisms were designed to evolve and evolved by design. It is only if random chance produced living organisms that we can infer random chance produced the diversity observed. Joe
Joe, in a rush just now; it's budget debate time. Paley's self-replicating watch example is material to the careless disanalogy objection, and has been in the IOSE for years and in Paley for 200+ years. So this is a strawman. My point is that MS Office is definitely designed but is non-optimal, so the want-of-perfection objection is refuted by counterexample. As has been painstakingly explained over and over, the only empirically grounded and needle-in-the-haystack credible explanation for FSCO/I is design. So, we have excellent reason to see FSCO/I as a reliable sign of design, even when that is not comfortable for evo mat advocates and fellow travellers. If they object further, let them account on observation for the origin of cell-based life and of complex body plans per their suggested mechanisms. The six-months-no-answer challenge is telling on that. KF kairosfocus
@kairosfocus: Thanks for the informative rationale for re-posting the response by Optimus, which I found insightful as well (I especially liked his observation on contrived distinctions between 'artificial' and 'natural' intelligent processes). As a theoretical physicist (my day job is 'chief scientist' in the computer industry, working on problems like this) I also visit your blog, finding it quite interesting and generally agreeable with my positions. Your comment addresses mostly the epistemological forms of the conflation, and I am in agreement with those observations. I also find Dembski's FSC results (no free lunch, etc.) convincing.

But the most problematic conflation common in ID circles (which was strangely omitted by KN) is the blurring between the map and the territory, using the generic term "evolution" interchangeably for:

(a) the transformation process of biological systems;
(b) the neo-Darwinian theory of (a) (the modern synthesis);
(c) other theories of (a), including intelligently guided ones.

For example, Cornelius appears to claim that (a) doesn't exist because (b) is an inadequate explanation for FSC. Among other things, he rejects that large stretches of common DNA patterns imply any form of common origin, not just the explanation (b) for such commonality. My objection to that is that if two patterns A and B share large common stretches with lots of free choices along the pattern, the commonality is due to "common origin" of A and B in "some form" on purely probabilistic grounds -- e.g. if each free choice has q > 1 equiprobable values, and there are n such choices in common between A and B, then the odds of this commonality arising by chance are 1/q^n -> 0 as n -> infinity. The common origin in "some form" is thus a virtual necessity for large enough q^n (the size of the event space). Hence, one cannot avoid the conclusion that biological systems have property (a). What that "some form" of common origin could be is another question, unrelated to the validity of conjecture (b).

The "common origin" may be a common intelligent agency reusing common blueprints to produce new, more advanced lifeforms, just as human designers would do with technological innovations. In the latter case, we have no problem saying that e.g. Windows OS has evolved, from version 2 to 3, ... 8, i.e. classifying it as a process of type (a). I think the same can be said about the evolution of 'life technology' as a process of type (a).

As explained in the previous post, if one allows for the possibility that physical laws are the result of some underlying computations by an intelligent process (such as Planckian networks), then the designer, the blueprints and the product (organism) are coexistent in the same chunk of space-time and matter-energy, hence it doesn't matter in which sense one interprets the 'common origin' of life-forms in (a) -- at one level of explanation (coarse-grained physical laws) it appears as if one type of organism has transformed over generations into another type. Of course, the "mutation" in this case is not "random" as gratuitously claimed by (b), but intelligently guided/designed. This is in fact quite consistent with the perspective of James Shapiro, although I think he is still oversimplifying the picture by looking narrowly at evolution only, ignoring the problems of the origin of life and of the fine-tuning of physical laws and constants. In contrast, the "Planckian networks" perspective addresses all three problems with a single type of explanation (additive intelligence of adaptable networks from the ground level up). nightlight
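A quick numeric illustration of the 1/q^n point just made, as a minimal Python sketch; the particular values of q and n below are purely illustrative, chosen only to show how fast the chance-match probability collapses:

from fractions import Fraction

def chance_match_probability(q, n):
    # Probability that two independently, uniformly chosen patterns agree at
    # all n free positions, each position having q equiprobable values.
    return Fraction(1, q ** n)

for q, n in [(4, 10), (4, 100), (20, 100)]:  # e.g. 4 nucleotides, 20 amino acids
    print(f"q={q}, n={n}: 1/q^n ~ {float(chance_match_probability(q, n)):.3e}")
# Even for modest q and n the probability is vanishingly small, which is the
# sense in which long shared stretches point to a common origin "in some form".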
While Optimus makes several insightful observations, there is a bit I potentially disagree with (I am not sure whether the disagreement is over semantics or substance).

@Optimus: "...why the death-grip on methodological naturalism? I suggest that its power lies in its exclusionary function. It rules out ID right from the start, before even any discussions about the empirical data are to be had. MN means that ID is persona non grata..."

MN doesn't imply anything of the sort (at least as I understand it). As a counterexample, consider a chess-playing computer program -- it is an intelligent process, superior (i.e. more intelligent) in this domain (chess playing) to any human chess player. How does MN exclude intelligent agency as an explanation for the performance of a chess-playing program? It doesn't, since the functionality of such a program is fully explicable using conventional scientific methods. Hence, MN would allow that a chess-playing program is an intelligent agency (agent). Therefore MN doesn't exclude a hypothetical intelligent agency guiding the processes of life and their evolution, either. It only excludes 'deus ex machina' proposals (such as the invocation of "mind" as an explanation, since present natural science lacks a model for "mind stuff"; cf. the hard problem of consciousness). For example, you can't scientifically claim that the observed level of complexity implies an "intelligent mind" as designer, since there is no scientific counterpart for "mind" in current natural science. The only scientifically valid statement is that the observed complexity implies an "intelligent process" as designer.

Of course, the "intelligent process" gives rise to the problem of infinite regression, i.e. the conjecture that ever more "intelligent" processes are required to explain the origin of previous "intelligent" processes. One possible way to terminate such a 'tower of turtles' is to construct models which have a property of 'additive intelligence', i.e. systems in which replicating less intelligent agents and linking them into an interacting network forms a more intelligent agent. As an example of such 'additive intelligence', consider a technological society, which is a more intelligent agency than any of the humans or machines forming it (in the sense of being capable of solving much more difficult and complex problems than any of the component agencies is capable of). Similarly, an ant colony is a more intelligent agency than an ant.

Note that such "additive intelligence" models need not be restricted to "live" systems only. For example, it may turn out that our present physical laws (which are statistical at their foundation, quantum field theory/QFT) are merely a coarse-grained regularity of some much more subtle intelligent process to which our present laws are oblivious. That is, our current physical laws may be like the statistical laws of traffic flows, which take the cars as "elementary objects" of the theory, oblivious to the intelligent process inside each car guiding it for its own far-reaching purposes. The statistical laws of traffic flows don't contradict the internal intelligent guidance of each car -- the two sets of patterns coexist harmoniously at different scales.

Note, for example, that between the Planck scale of 10^-35 m for the elemental (minimum) distance and our current "elementary" particles at ~10^-15 m, there are 20 orders of magnitude of scale for potential complexity to build up to make our "elementary" particles go around. That is 5 orders of magnitude more in available scale than what is needed to build us up (at O(1) meter scale) from our "elementary" particles. Since we're looking in 3-D space, the complexity achievable by Planckian objects can have (10^20)^3 = 10^60 more cogs per unit of space than our own computing technology, designed and built from our "elementary" particles, can have. If you then account for the 10^20 times shorter distances between the smallest Planckian cogs, then their signals (limited by the same speed of light) need 10^20 times less time between the cogs, hence their "CPU clocks" can run 10^20 times faster than our fastest CPU clocks. The net result is that such a Planckian network would be a 10^60 (more cogs) x 10^20 (faster clocks) = 10^80 times more powerful computing system per unit of space than our best technology constructed from our "elementary" particles can ever be. With that kind of ratio in computing power, anything computed by this Planckian network would be, to us, indistinguishable from a godlike intelligence beyond our wildest imagination and comprehension.

Note that there are already various network-like pregeometry models of Planck-scale objects, and although hypothetical they are in principle possible (e.g. check Wolfram's NKS). With suitable additional assumptions about the adaptability of the network links (e.g. a la the Hebbian rule), such models would indeed be distributed computers similar to neural networks, self-programming and capable of combining the smaller intelligence of subnetworks into the larger intelligence of the whole network. In this model, physical laws are being computed, particle by particle, moment by moment, continuously by this vast underlying supersmart network, like a real Matrix without the childish naivete of the Hollywood version (and of course, without a way out, since there is no "out" if you are made of it). The biochemical networks of living cells are then the galactic-scale technological projects of these intelligent Planckian networks, the kind of projects humans may achieve at their level in thousands or millions of years (if we make it that long without nuking ourselves into oblivion or getting replaced by our own computers).

@Optimus: "...thus some sort of evolutionary explanation must win by default."

That seems to be the conflation of the map and the territory discussed in the previous post -- "evolution" as (a) a natural process in some systems, and (b) an explanation of such a process. Why can't biological systems evolve if cars, TVs, computers, ... can evolve? nightlight
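For what it is worth, the back-of-envelope ratios in the comment above are easy to check; the short Python sketch below simply reproduces that arithmetic, with all of the scale figures taken as the commenter's illustrative assumptions rather than established physics:

# Reproduces the order-of-magnitude ratios quoted above (illustrative
# assumptions from the comment, not established physical results).
planck_scale = 1e-35      # m, assumed minimum (Planck) length scale
particle_scale = 1e-15    # m, rough scale of "elementary" particles

linear_ratio = particle_scale / planck_scale   # 1e20
cogs_ratio = linear_ratio ** 3                 # (1e20)^3 = 1e60 "cogs" per unit volume
clock_ratio = linear_ratio                     # 1e20 shorter signal paths -> faster "clocks"
combined = cogs_ratio * clock_ratio            # 1e80 claimed overall ratio

print(f"linear scale ratio: {linear_ratio:.0e}")
print(f"cogs per volume:    {cogs_ratio:.0e}")
print(f"clock-speed ratio:  {clock_ratio:.0e}")
print(f"combined ratio:     {combined:.0e}")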
There are similarities too - as in, your position cannot account for either.
A human programmer created MS Office.
That's right. And that means blind and undirected processes did NOT. Thank you for proving my point - your position cannot account for MS Office. Joe
other mouth:
Then if FSCO/I can be calculated, as is claimed, that statement can be verified simply by calculating the before and after FSCO/I.
Kind of hard to do when we cannot see the instructional code, only the hardware.
If it does not change after such a significant event as the ability to digest citrate becoming available then the point is proven.
Dude, YOU cannot demonstrate that the change was via blind and undirected chemical processes. And THAT is the whole point. So AGAIN, you need to focus on YOUR position. Attacking ID will NEVER provide positive evidence for evolutionism. NEVER. Especially when you attack ID with ignorance... Joe
other mouth to KF:
There are some significant differences between MS Office and biological life.
There are similarities too- as in your position cannot account for either. Joe
Can we start a thread that is a direct challenge to Elizabeth Liddle to support her claim that "Dembski's CSI can be created by darwinian means"? Or are we OK with her bald assertions of ID being refuted? Joe
other mouth:
Yes, I can understand it was present before and after but logically the values would have changed.
Not necessarily - see "Signature in the Cell" for an explanation - or allow me: in a design scenario the information to make the change is already present. And that means, no, there wasn't any change in value from before to after. Just as there wasn't any change in value with each generation of Dawkins' "weasel". All the information was there from the beginning. And for the record - the other mouth won't understand any of that... Joe
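Since the "weasel" program keeps coming up: here is a minimal Python sketch of Dawkins-style cumulative selection (the parameter values are illustrative). The point relevant to the comment above is visible right in the code: the target phrase is specified up front, and the "fitness" function measures distance to that pre-specified target.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"        # the target is specified from the start
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05
OFFSPRING_PER_GENERATION = 100

def fitness(candidate):
    # Fitness is simply closeness to the pre-specified target phrase.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    offspring = [mutate(parent) for _ in range(OFFSPRING_PER_GENERATION)]
    parent = max(offspring, key=fitness)
print(f"Matched the target after {generation} generations")

Whatever one makes of the larger argument, the sketch makes it easy to see what "the information was there from the beginning" refers to: remove the TARGET constant and the fitness function has nothing to select toward.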
F/N: Is anyone out there willing to argue that MS Office is optimal? Would one infer therefrom that its FSCO/I can be wholly accounted for on chance variation and happenstance of incremental improvements? KF kairosfocus
That was light ;) Joe
Joe, useful points; do go a little light on tone. KF kairosfocus
WJM: Setting up a demanded standard of perfection, as seen by us, is a strawman tactic, one that also ignores the problem of over-specialisation, AKA overly narrow optimisation leading to brittleness in a world of varying circumstances. Do such objectors remember post-optimality sensitivity analysis from courses that looked at optimisation? If your optimum is brittle, in the real world watch out! KF PS: BA, useful clips and links, as usual. kairosfocus
other mouth responding to Optimus:
The problem I have is that given an arbitrary object, X, you cannot tell me how to calculate the FSCI/O present.
It all depends on the object. Some things are not easily amenable to FSCI/O. IOW it is a limited tool.
Nor can a generic pseudocode be given that describes how to calculate the FSCI/O present.
We have told you how to measure FSCI/O.
What is the FSCI/O in Lenski’s bacteria before and after they obtained the ability to digest citrate?
FSCI/O was present in both cases. It is present in ALL living organisms.
What is the FSCI/O in a point mutation before and after the mutation?
That doesn't even make any sense and exposes your total ignorance. And Lizzie chimes in with her usual nonsense:
We know that Dembski’s CSI can be created by Darwinian means, ...
Liar. No one has ever observed darwinian means producing CSI- NEVER. You are either a liar or just full of it. I notice that you didn't provide any evidence for your claim. Lizzie Liddle, long on bald assertions and nonsense. Very short on actual evidence. Joe
“Perfect design would truly be the sign of a skilled and intelligent designer. Imperfect design is the mark of evolution ...
One wonders what Coyne was referring to if, as Dr. Liddle and so many others say, there is no evidence of design in biological features. If there is no evidence of design in biological features, how can "imperfect design" be "the mark of evolution"?
Thus, to sum it all up, in order to rationally practice science in the first place, one is forced to make certain theological presuppositions about the comprehensibility of the universe and our ability to understand it.
Well said. The problem is, when you're debating those that refuse to accept that there is any objective source of truth or arbiter of true statements, they are free to wallow in self-refuting nonsense and sophistry. Unfortunately, by denying rationality as binding, and by denying free will, they have no means by which to extricate themselves from their foolishness. William J Murray
And to this day we can still see this false conception of God, 'a false idol' if you will, that Darwin erected in 'Origin' to ground the primary arguments of Darwinism.
Dr. Seuss Biology | Origins with Dr. Paul A. Nelson - video http://www.youtube.com/watch?v=HVx42Izp1ek
In the following video, Dr. William Lane Craig is very surprised to learn that evolutionary biologist Dr. Ayala uses theological argumentation in his book to support Darwinism, and invites him to present evidence, any evidence at all, that Darwinism can do what he claims it can:
Refuting The Myth Of 'Bad Design' vs. Intelligent Design - William Lane Craig - video http://www.youtube.com/watch?v=uIzdieauxZg
Here, at about the 55:00 minute mark in the following video, Phillip Johnson sums up his, in my opinion, excellent lecture by noting, with surprise, that the refutation of his book, 'Darwin On Trial', in the journal Nature, the most prestigious science journal in the world, was a theological argument about what God would and would not do, and that the critique from Nature was not a refutation based on any substantiating scientific evidence for Darwinism:
Darwinism On Trial (Phillip E. Johnson) – lecture video http://www.youtube.com/watch?v=gwj9h9Zx6Mw
And in the following quote, Dr. John Avise explicitly uses his false conception as to what God would and would not allow so as to try to make his case for Darwinism:
It Is Unfathomable That a Loving Higher Intelligence Created the Species – Cornelius Hunter - June 2012 Excerpt: "Approximately 0.1% of humans who survive to birth carry a duplicon-related disability, meaning that several million people worldwide currently are afflicted by this particular subcategory of inborn metabolic errors. Many more afflicted individuals probably die in utero before their conditions are diagnosed. Clearly, humanity bears a substantial health burden from duplicon-mediated genomic malfunctions. This inescapable empirical truth is as understandable in the light of mechanistic genetic operations as it is unfathomable as the act of a loving higher intelligence. [112]" - Dr. John Avise - "Inside The Human Genome" There you have it. Evil exists and a loving higher intelligence wouldn’t have done it that way. http://darwins-god.blogspot.com/2012/06/awesome-power-behind-evolution-it-is.html
What’s especially ironic about Dr. Avise's theological argument for Darwinism from the overwhelming rate of negative mutations is that his argument is actually (without Darwinian theological blinders on) a very powerful ‘scientific’ argument against Darwinism: http://darwins-god.blogspot.com/2012/06/evolution-professor-special-creation.html?showComment=1340994836963#c5431261417430067209 Here are a few more 'theological' quotes from Darwinists:
"The human genome is littered with pseudogenes, gene fragments, “orphaned” genes, “junk” DNA, and so many repeated copies of pointless DNA sequences that it cannot be attributed to anything that resembles intelligent design. . . . In fact, the genome resembles nothing so much as a hodgepodge of borrowed, copied, mutated, and discarded sequences and commands that has been cobbled together by millions of years of trial and error against the relentless test of survival. It works, and it works brilliantly; not because of intelligent design, but because of the great blind power of natural selection." – Ken Miller "Perfect design would truly be the sign of a skilled and intelligent designer. Imperfect design is the mark of evolution … we expect to find, in the genomes of many species, silenced, or ‘dead,’ genes: genes that once were useful but are no longer intact or expressed … the evolutionary prediction that we’ll find pseudogenes has been fulfilled—amply … our genome—and that of other species—are truly well populated graveyards of dead genes" – Jerry Coyne "We have to wonder why the Intelligent Designer added to our genome junk DNA, repeated copies of useless DNA, orphan genes, gene fragments, tandem repeats, and pseudo¬genes, none of which are involved directly in the making of a human being. In fact, of the entire human genome, it appears that only a tiny percentage is actively involved in useful protein production. Rather than being intelligently designed, the human genome looks more and more like a mosaic of mutations, fragment copies, borrowed sequences, and discarded strings of DNA that were jerry-built over millions of years of evolution." – Michael Shermer
Thus, to sum it all up, in order to rationally practice science in the first place, one is forced to make certain theological presuppositions about the comprehensibility of the universe and our ability to understand it. Darwin made many false theological presuppositions so as to ground his theory. Thus the current reasoning in evolution is built on, and absolutely reliant upon, a false conception of God that nobody really believes in in the first place, i.e. a 'strawman' version of God! Moreover, the theological 'bad design' argument, which Darwinists unwittingly continue to use to try to make their case seem rational, is actually addressed by its own independent discipline of study within theology, called theodicy:
Is Your Bod Flawed by God? - Feb. 2010 Excerpt: Theodicy (the discipline in Theism of reconciling natural evil with a good God) might be a problem for 19th-century deism and simplistic natural theology, but not for Biblical theology. It was not a problem for Jesus Christ, who was certainly not oblivious to the blind, the deaf, the lepers and the lame around him. It was not a problem for Paul, who spoke of the whole creation groaning and travailing in pain till the coming redemption of all things (Romans 8). http://www.creationsafaris.com/crev201002.htm#20100214a
Indeed, Jesus Christ was CERTAINLY NOT oblivious to the pain, the suffering, and especially the death present in this world: Music and verse:
Natalie Grant - Alive (Resurrection music video) http://www.godtube.com/watch/?v=KPYWPGNX Acts 2:24 "But God raised Him up again, putting an end to the agony of death, since it was impossible for Him to be held in its power."
bornagain77
Optimus, well said... as to your first 2 points:
(1) metaphysical presuppostions absolutely undergird much of the modern synthetic theory. This is especially true with regard to methodological naturalism (of course, MN is distinct from ontological naturalism, but if, as some claim, science describes the whole of reality, then reality becomes coextensive with that which is natural). (2) In Darwin’s own arguments in favor of his theory he rely heavily on metaphysical assumptions about what God would or wouldn’t do. Effectively he uses special creation by a deity as his null hypothesis, casting his theory as the explanatory alternative. Thus the adversarial relationship between Darwin (whose ideas are foundational to the MST) and theism is baked right into The Origin. To this very day, “bad design” arguments in favor of evolution still employ theological reasoning.
Optimus, I think that Dr. Craig, in his recent debate with Dr. Alex Rosenberg, in his usual direct and to-the-point style, clearly exposes the major fatal flaws in MN thinking:
Does Epistemological Naturalism Imply Metaphysical Naturalism? - video http://www.youtube.com/watch?v=1yNddAh0Txg Is Metaphysical Naturalism Viable? - video http://www.youtube.com/watch?v=HzS_CQnmoLQ
As to point #2, it is crucial to note that the practice of science is absolutely dependent on Theological presuppositions.
The Great Debate: Does God Exist? - Justin Holcomb - audio of the 1985 debate available on the site Excerpt: The transcendental proof for God’s existence is that without Him it is impossible to prove anything. The atheist worldview is irrational and cannot consistently provide the preconditions of intelligible experience, science, logic, or morality. The atheist worldview cannot allow for laws of logic, the uniformity of nature, the ability for the mind to understand the world, and moral absolutes. In that sense the atheist worldview cannot account for our debate tonight.,,, http://theresurgence.com/2012/01/17/the-great-debate-does-god-exist
In fact it is no small coincidence that modern science was born in the matrix of Christian Theism, where it was presupposed that nature was/is rational, approachable, and understandable by the 'mind' of man, since we were/are held to be made in God's image:
The Origin of Science Excerpt: Modern experimental science was rendered possible, Jaki has shown, as a result of the Christian philosophical atmosphere of the Middle Ages. Although a talent for science was certainly present in the ancient world (for example in the design and construction of the Egyptian pyramids), nevertheless the philosophical and psychological climate was hostile to a self-sustaining scientific process. Thus science suffered still-births in the cultures of ancient China, India, Egypt and Babylonia. It also failed to come to fruition among the Maya, Incas and Aztecs of the Americas. Even though ancient Greece came closer to achieving a continuous scientific enterprise than any other ancient culture, science was not born there either. Science did not come to birth among the medieval Muslim heirs to Aristotle. …. The psychological climate of such ancient cultures, with their belief that the universe was infinite and time an endless repetition of historical cycles, was often either hopelessness or complacency (hardly what is needed to spur and sustain scientific progress); and in either case there was a failure to arrive at a belief in the existence of God the Creator and of creation itself as therefore rational and intelligible. Thus their inability to produce a self-sustaining scientific enterprise. If science suffered only stillbirths in ancient cultures, how did it come to its unique viable birth? The beginning of science as a fully fledged enterprise took place in relation to two important definitions of the Magisterium of the Church. The first was the definition at the Fourth Lateran Council in the year 1215, that the universe was created out of nothing at the beginning of time. The second magisterial statement was at the local level, enunciated by Bishop Stephen Tempier of Paris who, on March 7, 1277, condemned 219 Aristotelian propositions, so outlawing the deterministic and necessitarian views of creation. These statements of the teaching authority of the Church expressed an atmosphere in which faith in God had penetrated the medieval culture and given rise to philosophical consequences. The cosmos was seen as contingent in its existence and thus dependent on a divine choice which called it into being; the universe is also contingent in its nature and so God was free to create this particular form of world among an infinity of other possibilities. Thus the cosmos cannot be a necessary form of existence; and so it has to be approached by a posteriori investigation. The universe is also rational and so a coherent discourse can be made about it. Indeed the contingency and rationality of the cosmos are like two pillars supporting the Christian vision of the cosmos. http://www.columbia.edu/cu/augustine/a/science_origin.html
Thus, since Theistic presuppositions were, and still are, absolutely necessary for the founding, and continued practice, of modern science, it is crucial to note exactly what role Theology played, and still plays, in Darwin's formulation of evolution, and in current evolutionary reasoning, so as to give Darwinism a semblance of being 'scientific':
Charles Darwin, Theologian: Major New Article on Darwin's Use of Theology in the Origin of Species - May 2011
I have argued that, in the first edition of the Origin, Darwin drew upon at least the following positiva theological claims in his case for descent with modification (and against special creation):
1. Human beings are not justified in believing that God creates in ways analogous to the intellectual powers of the human mind.
2. A God who is free to create as He wishes would create new biological limbs de novo rather than from a common pattern.
3. A respectable deity would create biological structures in accord with a human conception of the 'simplest mode' to accomplish the functions of these structures.
4. God would only create the minimum structure required for a given part's function.
5. God does not provide false empirical information about the origins of organisms.
6. God impressed the laws of nature on matter.
7. God directly created the first 'primordial' life.
8. God did not perform miracles within organic history subsequent to the creation of the first life.
9. A 'distant' God is not morally culpable for natural pain and suffering.
10. The God of special creation, who allegedly performed miracles in organic history, is not plausible given the presence of natural pain and suffering.
http://www.evolutionnews.org/2011/05/charles_darwin_theologian_majo046391.html

The Descent of Darwin - Pastor Joe Boot - (The Theodicy of Darwinism) - video
http://www.youtube.com/watch?v=HKJqk7xF4-g

Finding Darwin's Real God - Michael Flannery - October 11, 2012
Excerpt: Ever since the publication of Ken Miller's Finding Darwin's God, the Brown University biologist and leading spokesman for theistic evolution has claimed to have found deity in "the coherent power of Darwin's great idea" (p. 292). Miller sees no contradiction between Charles Darwin's theory and the three great Abrahamic religions, Judaism, Christianity, and Islam. For him, there is "no reason for believers to draw a line in the sand between God and Darwin" (p. 267). Francis Collins seems to suggest much the same in his Language of God. Of course they weren't the first; long before Miller and Collins there was Charles Kingsley (1819-1875). But is the god of Darwin really a "coherent" power for these faiths, wholly compatible with any or all of them? Wishful thinking aside, a little investigation reveals the true theistic evolutionary equation: Darwin + god = Man. Put more simply, Darwin's god was Man. To see this clearly we must go to Darwin's own writings...
http://www.evolutionnews.org/2012/10/finding_darwins_1065211.html
bornagain77
nightlight @ 2: My view is that if anti-ID advocates are going to take every possible chance to derail the debate down into definitional and semantic rabbit holes, including denying the obvious, there's no sense expending the effort to constantly qualify every term and phrase in an attempt to avoid "misunderstanding". If you cannot even get your debate opponent to agree that the LNC is binding, why bother trying to explain to him or her what you mean by the term "evolution" in a particular context? If they cannot glean from context what Dr. Hunter means when he uses the word "evolution", it is not because Dr. Hunter is careless (IMHO), but rather because those readers are on a mission to misunderstand. William J Murray
PS: A current case of Berra's blunder. kairosfocus
NL: First, welcome to UD; I do not recall seeing you here before. An interesting comment. I would suggest:

1 --> There is a problem of multiple meanings of and contexts for "evolution," which has frequently been remarked on here at UD. Such meanings range from minor population variations (sometimes cyclical, as with the Finches and the Moths) to a claimed theory of the origin and diversity of body plans, with extensions that would see evolution as the driving dynamic of the cosmos from hydrogen to humans.

2 --> However, it must be understood that Darwin's theory was a macro-theory from the first, as the conclusion to Origin highlights.

3 --> Similarly, he did have a clear de-Christianising "free thought" worldview and cultural agenda context -- though he did not want to rail at "religion" but to fundamentally discredit it in the name of science, as was admitted in the Oct 13 1880 letter to Aveling:
. . . though I am a strong advocate for free thought [--> NB: free-thought is an old synonym for skepticism, agnosticism or atheism] on all subjects, yet it appears to me (whether rightly or wrongly) that direct arguments against christianity & theism produce hardly any effect on the public; & freedom of thought is best promoted by the gradual illumination of men’s minds, which follows from the advance of science. It has, therefore, been always my object to avoid writing on religion, & I have confined myself to science. I may, however, have been unduly biassed by the pain which it would give some members of my family [--> NB: especially his wife, Emma], if I aided in any way direct attacks on religion.
4 --> To this end, part of what he did repeatedly was to suggest that there are natural theology challenges to positions like those of Paley et al (BTW, I have never ceased to marvel at how often, in dismissing the watch in the field of Ch 1 in Paley, objectors so rarely address the thought exercise of the self-replicating watch in Ch 2), and to promote a redefinition of science and its methods that is being pushed so hard in our day: that science must be naturalistic, with the only contrast presented being the despised "supernatural." (The issue of art detectable on reliable empirical signs is ever so often suppressed or overlooked, even by those with responsibility to know and do better.)

5 --> Consequently, over the past 150 years and especially in recent years, there has been a strong, increasing tendency on the part of promoters of "Evolution" to conflate minor population variations with the origin of body plans, to suggest that the origin of life can similarly be explained successfully on blind chance and necessity, to insist that science must operate on methodological naturalism [which begs big questions], to infer to or assume evolutionary materialism [recall the forgotten, dismissed status of Wallace, once he opted for a non-materialist view], and to generally see Evolution as having changed the world.

6 --> Thus we end up at notions like Dawkins', that Darwin made it possible to be an intellectually fulfilled atheist, and the like.

7 --> By contrast, design theory is not about such grand schemes. It pivots on a key issue: is it possible to scientifically detect design as credible cause per empirically reliable sign, essentially functionally specific, complex organisation and associated information?

8 --> The answer is -- on abundant evidence -- yes, but precisely because cell-based life is chock full of FSCO/I, all sorts of subterfuges are resorted to to dismiss such once we come to the world of life. This takes in a great many people who look to science as the most prestigious institution in the civilisation presently. (There is a related problem of scientism, which imagines that science is the be-all and end-all of knowledge, that its bounds cover the limits of what exists or can be known, and that if you are not thinking or arguing in the methodologically naturalistic circle, you are irrational. This of course deeply poisons discussions, and adherents usually don't begin to understand how deeply ill-informed and hopelessly fallacious it is.)

9 --> Insofar as "evolution" means change across time, and does not necessarily exclude intelligent guidance -- cf. here, Berra's blunder and Darwin's tendency to cite artificial selection by breeders as an exemplar for the powers of natural selection acting on chance variations through differential reproductive success -- evolution and design are compatible.

10 --> Indeed, one of the two leading members of the design theory school of thought, Behe, believes in universal common descent. His main conceptual presentation, the concept of irreducible complexity, points to a barrier to blind watchmaker Darwinian evolution, not to a barrier to intelligently guided or orchestrated or chosen evolutionary development.

11 --> I think your conception of "complexity based arguments" is not complete. The issue is that once we have functionally specified complexity especially, there is a very large config space of possibilities, so large that the atomic resources of our solar system or the observed cosmos, on the usual timelines, cannot scratch the surface of the field of possibilities. So, on needle-in-haystack grounds -- at 500 bits, the solar system's search capacity is as one straw to a cubical hay bale 1,000 LY on the side, about as thick as our galaxy's main disc.

12 --> We would only be warranted in expecting such a search of a bale superposed on our galaxy to pick up, to all but certainty, straw. That is, given that the specificity of configuration needed to get well-matched parts to fit and work together puts us in narrow and unrepresentative zones in the space of possibilities -- islands of function -- it becomes utterly unlikely for blind searches to land on such islands, on the gamut of our solar system or the observed cosmos. The only known, observed means of bridging that gap is intelligence; i.e. FSCO/I is an empirically reliable index of design, on billions of tests. That is why we do routinely observe something like Berra's line of descent of Corvettes since the 1950's. However, a key part of that descent lies through the minds of generations of engineers, i.e. designers.

13 --> And that extends to technological evolution in general, including, of course, the common taxonomic "tree" of descent example, the paper clip family. So, nope, complexity is not an argument against INTELLIGENT design.
_________
I trust this helps. KF kairosfocus
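To put rough numbers on the 500-bit threshold mentioned above: the Python sketch below uses commonly cited order-of-magnitude estimates (about 10^57 atoms in the solar system, about 10^17 seconds of cosmic history, and a generous 10^45 state changes per atom per second) purely as illustrative assumptions, to show the kind of ratio behind the straw-to-haystack comparison.

# Order-of-magnitude check of the 500-bit "needle in a haystack" comparison.
# All resource figures are rough, commonly cited estimates used only to
# illustrate the ratio; they are not precise values.
config_space = 2 ** 500                  # ~3.27e150 distinct 500-bit configurations

atoms_in_solar_system = 10 ** 57         # rough atom count
seconds_available = 10 ** 17             # ~age of the universe in seconds
states_per_atom_per_second = 10 ** 45    # ~1 change per Planck time (very generous)

max_blind_trials = atoms_in_solar_system * seconds_available * states_per_atom_per_second

print(f"configurations:      {config_space:.2e}")
print(f"max blind trials:    {max_blind_trials:.2e}")
print(f"fraction samplable:  {max_blind_trials / config_space:.2e}")  # ~3e-32

Even under these generous assumptions, blind trials could sample only on the order of one part in 10^31 of a 500-bit configuration space, which is the point of the straw-and-haystack picture.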
KN was quite right about the conflation reflex, although he didn't properly identify the worst type of such conceptual blurring, the one which is most harmful to the scientific ID position and which makes some sympathizers, including myself, cringe while reading such materials. The most damaging (for ID as a scientific position) is the conflation of "evolution" as:

a) a process in biological systems;
b) the neo-Darwinian theory of (a).

Cornelius Hunter, the author of the Darwin's God blog (as well as several posters here and elsewhere), doesn't make any distinction between (a) and (b), and keeps objecting to some generic "evolution", as if the irreducible complexity/FSCI based arguments apply to (a). Such arguments, which are applicable to (b), don't transfer to (a); otherwise one could use the same complexity-based arguments to "refute" the evolution of technology, science, the arts etc., which are intelligently guided processes analogous to (a). After I brought this objection to Cornelius on the Darwin's God blog, although he acknowledged the distinction between (a) and (b), and that his complexity-based critique applies only to (b), he still fails to see a need to refine his terminology, as if wishing to somehow make (a) go away by his critique of (b). I think that this kind of wishful "logic" only weakens the perfectly legitimate ID argument against (b).

In the above discussion, I ended up battling both sides, which made it clear that this conflation and the resulting wishful transfer of argument from (b) to (a) are pretty widespread among ID supporters. (Caution: there is a foul-mouthed neo-Darwinian dogmatist character posting there, a robot named Thorton, who merely plays back the official ND-E mantras triggered by the first few keywords in a post he recognizes, without ever reading or understanding the arguments being made; trying to discuss with him is a waste of time.) nightlight
And, yes, we are all familiar with the objection that organisms are distinct from artificial objects, the implication being that our knowledge from the domain of man-made objects doesn’t carry over to biology. I think this is fallacious. Everyone acknowledges that matter inhabiting this universe is made up of atoms, which in turn are composed of still other particles. This is true of all matter, not just “natural” things, not just “artificial” things – everything. If such is the case, then must not the same laws apply to all matter with equal force? From whence comes the false dichotomy between “natural” and “artificial”? If design can be discerned in one case, why not in the other?
Very well put. Mapou
