Uncommon Descent Serving The Intelligent Design Community

Optimus, replying to KN on ID as ideology, summarises the case for design in the natural world

The following reply by Optimus to KN in the TSZ thread is far too good not to headline as an excellent summary of the case for design as a scientifically legitimate view, not the mere "Creationism in a cheap tuxedo" ideology, motivated and driven by anti-materialism and/or a right-wing, theocratic, culture-war mentality, that objectors commonly ascribe to "Creationism":

______________

>> KN

It’s central to the ideological glue that holds together “the ID movement” that the following are all conflated: Darwin’s theories; neo-Darwinism; modern evolutionary theory; Epicurean materialistic metaphysics; Enlightenment-inspired secularism. (Maybe I’m missing one or two pieces of the puzzle.) In my judgment, a mind incapable of making the requisite distinctions hardly deserves to be taken seriously.

I think your analysis of the driving force behind ID is way off base. That’s not to say that persons who advocate ID (including myself) aren’t sometimes guilty of sloppy use of language, nor am I making the claim that the modern synthetic theory of evolution is synonymous with materialism or secularism. Having made that acknowledgement, though, it is demonstrably true that (1) metaphysical presuppositions absolutely undergird much of the modern synthetic theory. This is especially true with regard to methodological naturalism (of course, MN is distinct from ontological naturalism, but if, as some claim, science describes the whole of reality, then reality becomes coextensive with that which is natural). Methodological naturalism is not the end product of some experiment or series of experiments. On the contrary, it is a ground rule that excludes a priori any explanation that might be classed as “non-natural”. Some would argue that it is necessary for practical reasons; after all, we don’t want people attributing seasonal thunderstorms to Thor, do we? However, science could get along just as well as at present (even better, in my view) if the ground rule were simply that any proposed causal explanation must be rigorously defined and that it shall not be accepted except in light of compelling evidence. Problem solved! Though some fear “supernatural explanation” (a highly definitional term) overwhelming the sciences, such concerns are frequently oversold. Interestingly, the much-maligned Michael Behe makes very much the same point in his 1996 Darwin’s Black Box:

If my graduate student came into my office and said that the angel of death killed her bacterial culture, I would be disinclined to believe her…. Science has learned over the past half millennium that the universe operates with great regularity the great majority of the time, and that simple laws and predictable behavior explain most physical phenomena.
Darwin’s Black Box, p. 241

If Behe’s expression is representative of the ID community (which I would venture it is), then why the death-grip on methodological naturalism? I suggest that its power lies in its exclusionary function. It rules out ID right from the start, before even any discussions about the empirical data are to be had. MN means that ID is persona non grata, thus some sort of evolutionary explanation must win by default. (2) Darwin’s own arguments in favor of his theory rely heavily on metaphysical assumptions about what God would or wouldn’t do. Effectively he uses special creation by a deity as his null hypothesis, casting his theory as the explanatory alternative. Thus the adversarial relationship between Darwin (whose ideas are foundational to the MST) and theism is baked right into The Origin. To this very day, “bad design” arguments in favor of evolution still employ theological reasoning. (3) The modern synthetic theory is often used in the public debate as a prop for materialism (which I believe you acknowledged in another comment). How many times have we heard the famed Richard Dawkins quote to the effect that ‘Darwin made it possible to be an intellectually fulfilled atheist’? Very frequently evolutionary theory is impressed into service to show the superfluousness of theism or to explain away religion as an erstwhile useful phenomenon produced by natural selection (or something to that effect). Hardly can it be ignored that the most enthusiastic boosters of evolutionary theory tend to fall on the atheist/materialist/reductionist side of the spectrum (e.g. Eugenie Scott, Michael Shermer, P.Z. Myers, Jerry Coyne, Richard Dawkins, Sam Harris, Peter Atkins, Daniel Dennett, Will Provine). My point, simply stated, is that it is not at all wrong-headed to draw a connection between the modern synthetic theory and the aforementioned class of metaphysical views. Can it be said that the modern synthetic theory (am I allowed just to write Neo-Darwinism for short?) doesn’t mandate nontheistic metaphysics? Sure. But it’s just as true that they often accompany each other.

In chalking up ID to a massive attack of confused cognition, you overlook the substantive reasons why many (including a number of PhD scientists) consider ID to be a cogent explanation of many features of our universe (especially the biosphere):

-Functionally-specified complex information [FSCI] present in cells in prodigious quantities
-Sophisticated mechanical systems at both the micro and macro level in organisms (many of which exhibit IC)
-Fine-tuning of fundamental constants
-Patterns of stasis followed by abrupt appearance (geologically speaking) in the fossil record

In my opinion, FSCI/O and complex biological machinery are very powerful indicators of intelligent agency, judging from our uniform and repeated experience. Also note that none of the above reasons employ theological presuppositions. They flow naturally, inexorably from the data. And, yes, we are all familiar with the objection that organisms are distinct from artificial objects, the implication being that our knowledge from the domain of man-made objects doesn’t carry over to biology. I think this is fallacious. Everyone acknowledges that matter inhabiting this universe is made up of atoms, which in turn are composed of still other particles. This is true of all matter, not just “natural” things, not just “artificial” things – everything. If such is the case, then must not the same laws apply to all matter with equal force? From whence comes the false dichotomy between “natural” and “artificial”? If design can be discerned in one case, why not in the other?

To this point we have not even addressed the shortcomings of the modern synthetic theory (excepting only its metaphysical moorings). They are manifold, however – evidential shortcomings (e.g. lack of empirical support), unjustified extrapolations, question-begging assumptions, ad hoc rationalizations, tolerance of “just so” stories, narratives imposed on data instead of gleaned from data, conflict with empirical data from generations of human experience with breeding, etc. If at the end of the day you truly believe that all ID has going for it is a culture war mentality, then may I politely suggest that you haven’t been paying attention.>>

______________

Well worth reflecting on, and Optimus deserves to be headlined. END

Comments
Box #162: "I'm not going to let you off the hook so easily"

Thanks. I appreciate challenging questions, such as those you bring up, since they make me follow paths I probably wouldn't have thought of visiting on my own.

"Everything is so unfathomably intelligent starting from the bottom and skyrocketing in ever more increasing intelligence that your theory rendered itself incapable of explaining non-intelligence. It is not able to explain why there is no overcrowded universe filled with Max Plancks."

That reveals a highly anthropocentric perspective which enormously underestimates the difficulties and amounts of computation needed for different problems in the whole picture. So, we need to get the right perspective first.

The networks operating at smaller scales are computationally more powerful, with the ratio of computing powers scaling as L^4, where 1/L is the scale (of the cogs, or of the elemental components intelligent agents are working with). Namely, a factor of L^3 is due to the ability to fit L^3 times more cogs of length 1/L than of length 1 (unit) into the same space (or into the same amount of matter-energy). The additional factor of L is due to shorter distances, allowing for L times quicker signaling (faster CPU clocks) between components of size 1/L than for components of size 1. But the task these more powerful, denser networks are solving is computationally far more demanding than the tasks at larger scales.

Imagine someone trying to solve all the equations of physics involved in you typing a sentence it took you a few seconds to compose and type. If we took all the computers in the world and dedicated them just to that task, of computing the physics needed for you to type one sentence, in those few seconds they might solve the actions of a few smaller molecules, and even that little only very approximately, i.e. if you were to let such solutions go for a millisecond, you would unravel into components, that's how badly they would diverge from the correct behaviors. The computing gear doing that job would occupy a state-sized facility and require proportionately huge power for all the gear. Yet the Planckian networks working in a fraction of that space (just your body) compute all that physics exactly, to perfection, in real time, for every particle (every photon, electron, quark, ...) in your body.

So, computing the physics for your functionality over a few seconds is a massive and complex computational task that we can't dream of ever approaching with all of our intelligence and technology put together. The next layer, the biological functions of your body as you think of and type the sentence, is a minor refinement, a droplet in the sea of the computations needed for computing its physics. In turn, the computation that you did to think up your sentence and type it is a microscopic droplet in the sea of the biological computations that kept your body going for those few seconds. Glancing over those YouTube videos on the operation of the cellular nano-technology of just one molecular machine such as ATP synthase, inside one organelle in one cell (among trillions of cells in your body), churning out ATP at a furious 10,000 RPM pace... it's obvious that just the work and computation of one cell would easily exceed any large industrial city in the amount of logistics and problem-solving computation done at the human level, let alone your work in composing and typing one sentence.

Hence, what for us seems like a human genius at work, whether it is Planck, Einstein, or whoever, is an infinitesimally tiny speck of intelligent computation going on in a sea of intelligent computations by the underlying networks in that same space and time. So, producing Plancks or Einsteins is very, very small fish to fry in the more complete perspective. Similarly, our computational contribution to the harmonization of this small corner of the universe is equally infinitesimal compared to that which was computed by the underlying layers of networks. However small, though, our contributions are still irreplaceable and invaluable, since nothing else can provide them at our human scales. We were designed and constructed to figure out and do the jobs at our scales that have to be done and that nothing else can presently do as well.

Consider, for example, the task of fixing a broken bone. The two sides of the fracture have broken through the skin and are an inch apart. However smart and powerful at molecular engineering the biochemical networks are, they can't bring those two pieces together and align them for the job of fusing the fragments to begin. For that, they need that gigantic 'dumb' brute with his little speck of computational intelligence, the surgeon, to pull the fractured pieces together, align them just right, then fix them in that position with a plate. Only then can the cellular biochemical networks get down and do their lion's share of the work, fusing the two fragments at the cellular and molecular levels so they become one live bone again. While we can't dream of ever achieving anything like the latter feat, the biochemical networks, without the 'dumb' brutes such as humans, could not dream of doing on their own the first step that the 'dumb' brute did, aided by his tiny speck of computation.

Hence, intelligence at each level is highly specialized and optimized for the specific kinds of tasks and problems of that scale. While the magnitudes of computation and resulting intelligence vastly differ at different scales, increasing as L^4 at lower scales 1/L, the specialization makes each one irreplaceable and necessary. At present, the Planckian networks have figured out no better way to do tasks and solve problems at our scale than through us, the way we do them, with our little specks of intelligence controlling our bulk and brute force, at least in this little corner of the universe. As to why they wouldn't design and construct millions of Max Plancks or equivalents (imagine that nightmare world), I would guess they figured such a solution to be suboptimal, compared to, say, having one Max Planck equivalent for every x1 thousand farmers, x2 thousand truck drivers, x3 thousand bakers, x4 thousand nurses, x5 thousand cheerleaders, ...

Another relevant constraint on what can be done is the hierarchy of laws and the prohibition against violations of lower-level laws (or harmonized solutions) from the higher levels. Hence, a life cannot be conjured at will without the right ingredients produced and brought all together at the right place under the right conditions. Putting together a massive star, and having it go supernova to make the atomic ingredients needed for life, takes a bit of doing and then a bit of waiting for the furnace to reach its temperature. While economies of scale do help (larger stars will go supernova quicker than smaller stars), it still takes a lengthy gathering of hydrogen and helium gases via gravity to get enough material, pack it densely enough to light the fusion, etc.

Considering the 10^80 factor edge in computing power of the Planckian networks over our own networks of neurons, we surely have no basis or right to second-guess whether the way it is being done is the best that can be done with what is available. For us, it is a godlike perfection and the best of all possible worlds, for all practical purposes.

"... one cannot explain the whole from its parts. What we see in organisms is top-down organization from the level of the whole organism. We cannot reconstruct the pattern at any level of activity by starting from the parts and interactions at that level. There are always organizing principles that must be seen working from a larger whole into the parts."

Obviously, a dumb trial and error, putting the parts together every which way until a viable form comes out, would be absurd. The way it is done is the way you build or make something -- you first do all the arranging in your mind, as computed by the networks of your neurons (see post #109 on the body-mind aspect), where it is a lot cheaper and a lot quicker to figure it out and try it out than in the real physical world. The same kind of intelligent construction process goes on in the internal models that the biochemical networks run in their mind before committing to the construction in the physical world. As sketched in post #116, these networks are goal-oriented anticipatory systems with the mind stuff, just like your brain, except computationally much quicker and smarter. Of course, the latter superiority is within their specialty and on their scales, e.g. they can't read this sentence or type on the keyboard (for those little bits of work, they built you). Similarly, during morphogenesis, their internal model has a 'picture' of what they are constructing. That 'picture' would certainly not look like anything you see with your senses and your mind looking at the same form. But it looks like what they will perceive or sense when it is complete.

nightlight
April 2, 2013 at 12:06 AM PDT
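The L^4 scaling claim in the comment above is simple arithmetic once the two factors (packing and clock speed) are granted. Below is a minimal sketch of that arithmetic, assuming nightlight's own figures of a 10^-35 m Planck scale and a 10^-15 m particle scale; the function name and script are illustrative only, not part of his model.

# Sketch of the scaling argument from the comment above: if components shrink by a
# factor L, you can pack L^3 more of them into the same volume and clock them L
# times faster, so raw computing power scales as L^4. The figures are the commenter's.

def computing_power_ratio(coarse_scale_m: float, fine_scale_m: float) -> float:
    """Ratio of computing power of a network built at fine_scale_m
    versus one built at coarse_scale_m, under the L^4 assumption."""
    L = coarse_scale_m / fine_scale_m   # linear shrink factor
    packing_gain = L ** 3               # more components per unit volume
    clock_gain = L                      # shorter signal paths -> faster cycles
    return packing_gain * clock_gain    # = L^4

if __name__ == "__main__":
    particle_scale = 1e-15  # metres, rough scale of "elementary" particles
    planck_scale = 1e-35    # metres, Planck length scale
    ratio = computing_power_ratio(particle_scale, planck_scale)
    print(f"L = {particle_scale / planck_scale:.0e}, L^4 = {ratio:.0e}")
    # Prints L = 1e+20, L^4 = 1e+80, the "10^80 factor edge" quoted later in the thread.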
Box:
Everything is so unfathomably intelligent starting from the bottom and skyrocketing in ever more increasing intelligence that your theory rendered itself incapable of explaining non-intelligence.
I'm not going to claim to understand nightlight's theory, but my take was that he was saying intelligence decreased as you moved up in size. Almost as though scaling involved some sort of information entropy. Thus, the networks we create will never be more intelligent than we are, and we will never be more intelligent than the network that created us. Or something like that. :P

Phinehas
April 1, 2013 at 08:48 PM PDT
Box(159): "If everything is so unfathomably intelligent, from the intelligent elemental building blocks that form the conscious super-intelligent Planckian networks, that form the elementary particles, all the way up, why is it that the universe is so unintelligent and lifeless?"

Nightlight(161): "There are limits on what can be computed in a given time on a given amount of hardware, no matter how powerful the computer is. The computations are building up from smaller to higher scales as illustrated with the multi-dimensional, multi-level crossword puzzle metaphor in post #141. As the coordination of computations is extended, the economies of scale squeeze out more inefficiencies and boost the computing power of the overall system. That still only pushes the boundary of the possible out a bit, but the boundary still exists."
I’m not going to let you off the hook so easily, because I’m pretty sure I’m on to something here. Your theory involves an utmost attempt to explain intelligence bottom-up. In fact, it is obvious that your theory has a much better chance of succeeding than plain old naturalism. Unfortunately, the looming success of your theory has become its main problem. Everything is so unfathomably intelligent starting from the bottom and skyrocketing in ever more increasing intelligence that your theory rendered itself incapable of explaining non-intelligence. It is not able to explain why there is no overcrowded universe filled with Max Plancks. You mention time as a boundary, but there are billions of stars in the galaxy that are billions of years older than the sun. But let’s forget about obtuse planets and stars; most organisms are also bad at math. Come to think of it, most people are too. I’m also arguing a more principled case against your theory: 'one cannot explain a whole from its parts'. Maybe you care to give your opinion on the examples I presented in post #154.

Box
April 1, 2013 at 06:05 PM PDT
Box #159: "If everything is so unfathomably intelligent, from the intelligent elemental building blocks that form the conscious super-intelligent Planckian networks, that form the elementary particles, all the way up, why is it that the universe is so unintelligent and lifeless?"

There are limits on what can be computed in a given time on a given amount of hardware, no matter how powerful the computer is. The computations are building up from smaller to higher scales, as illustrated with the multi-dimensional, multi-level crossword puzzle metaphor in post #141. As the coordination of computations is extended, the economies of scale squeeze out more inefficiencies and boost the computing power of the overall system. That still only pushes the boundary of the possible out a bit, but the boundary still exists.

At present, in this corner of the universe, we (humans and our societies) are the edge of the technological advance, the best solution the Planckian networks could compute around here. As we're all well aware, harmonization of computations or actions at the level of groups of humans is still quite an incomplete job. We're the technology which was designed by the Planckian networks to solve these harmonization problems, the best one they have, and we're doing it the best we know how. There isn't an omniscient, omnipotent solver or a cheat sheet to short-circuit the job. Computation has to take what it takes to complete at our scale; 1+1 cannot become 3, no matter how convenient or useful that might be sometimes. Our contribution is a small refinement, a finer tuning of the capacities and efficiencies already achieved by the heavy lifters at the levels below ours, such as the biochemical networks making life possible, or the Planckian networks making physics and chemistry possible for the latter.

A grain of salt is needed here when speaking of these levels (physics, chemistry, biology) as discrete, cleanly separate concepts. These layers are an artifact of the cognitive coarse-graining we have to settle for due to our human limitations in comprehending all the intricacies and finesse of the patterns computed by the Planckian networks. They're not computing the laws of physics, chemistry, biology, ... separately, but as a whole, a single live pattern advancing as computed, some features of which we label as layers of laws at different levels. That's similar to seeing a discrete set of a few rainbow colors in what is in fact a continuous spectrum of a virtually infinite number of distinct colors. Laws are thus not reducible between "layers", e.g. biology doesn't follow from the laws of physics, just as the laws of social organization don't follow from the biological laws of the human organism. Biology is only consistent with the laws of physics, but since the laws of physics are of a statistical nature at their foundation (quantum theory), the consistency constraint leaves plenty of room for finer tuning at higher layers, i.e. for finer details of the computed whole patterns which are not captured by the coarse-grained regularities we conceptualize as laws of physics. Hence, the concept of "laws" altogether is a limited tool for conceptualizing and describing the full, whole patterns computed by the Planckian networks.

In addition to capacity vs. problem-difficulty limitations, there are additional constraints on computations, the general rules of the game. The most important one is that higher levels cannot violate harmonization already achieved at the lower levels, e.g. we, who are at the biological level, cannot violate the laws of physics, just as the laws of physics cannot violate the laws of computations of the Planckian networks (e.g. we cannot reach down and tweak the cogs of the Planckian networks through some physical contraption). Allowing for any such violations would invalidate harmonization (or solutions, in the crossword puzzle picture) achieved at the lower layers by computing systems which are far superior in their computing capacity to us (our computations are merely a finer tuning, little corrections to the least significant digit, as it were, to solutions computed by the heavy lifters). The resulting loss of harmonization at lower layers (via violations of the laws of physics) would cost far more in lost computing capacity than the tiny addition we might be able to get in return for such a violation. These additional large costs would result from the loss of mutual predictability between the cogs at smaller scales (since the mutual predictability is the key lever of the economies of scale). It's the laws (or regularity of patterns) that make the predictability possible; thus their violation would drive the system into a lawless, everyone-for-himself, inefficient state of operation.

An immediate consequence of the above rule is that actions are local, limited by the speed of light and the physical forces and laws. Hence, harmonization is local as well, i.e. at larger scales the chaos still rules, and that can throw a monkey wrench into any local advance. For example, as a result of the large-scale chaos, a large asteroid, following its own happiness, could strike Earth almost any time, and we may not be able to do anything to deflect it at present. Once our technological harmonization extends into the larger solar system, then some level of such chaotic reversal can be prevented (e.g. short of another star heading our way). We are in fact an intermediate level of the technology designed by the underlying networks as a way to compute how to achieve that level of harmonization and preclude chaotic setbacks of that kind, and whatever we build for our stretch of that task is then the rest of that protective technology.

Another of the implications of this bottom-up superiority of laws is that, at our level, any fully harmonized social system will not be able to violate the individual's 'pursuit of happiness' (which is the primary law of the human individual), assuming we're at that time evaluated as a technology that should carry on. Obviously, we're still quite a bit away from computing that level of social-scale harmonization. The tuning and adjustments needed for that level of harmonization will have to modify both sides, i.e. while the social rules will obviously need to evolve, the humans who will live in such a fully harmonized society will also very likely not include the full spectrum of human variety present today. Otherwise the sanctity of the individual's 'pursuit of happiness' could easily backfire, as you can easily imagine considering all the stuff that makes some people happy nowadays.

At the extreme end, the further computations by the Planckian networks and their larger-scale technologies (including us and our computers) may eventually reveal that carbon-based technology (humans) is altogether unsuitable (suboptimal) for the job, and silicon-based or some other technology will carry on the harmonization beyond some point, just as dinosaurs and countless other carbon technologies were computed as being suboptimal at various points and were replaced with more suitable, improved technologies. All we can do is to continue contributing to the harmonization process the best we can, to prove ourselves worth keeping.

Regarding the main question, "where are they?", the above limitations point to one possibility -- it is not easy to produce life. Consider how much production has to happen, from supernovas cooking up heavy elements, then exploding to scatter their products so that potentially habitable planetary systems can form, provided lots of other conditions line up just the right way at the right place. Since that is apparently the best technology the Planckian networks were able to compute so far for the job, it may be that life is indeed very rare. It may also be that we don't know how to recognize it in what we already see or what is reaching us. There could be high-tech live entities which are a lot smaller or a lot larger than our imagination could conceive. Or they may operate at spectral ranges we don't watch for. Or appear as something we don't expect life ought to look like. It may also be that beyond a certain point of technological advance, far more efficient communication technologies arise which don't scatter and waste away as much energy into the universe as our present technologies do, becoming thus invisible from far away. For example, if you look at the biological organisms, which are computed by the biochemical networks (keeping in mind the "grain of salt" above) -- we can only envy the energy efficiency of that nano-technology, which scatters and wastes very little into stray EM radiation as it coordinates operation between trillions of cells. If our technology were to reach that level of efficiency, we would probably be EM-undetectable from the Moon. Another possibility is that carbon life isn't the best solution for large-scale harmonization, and we're just a try that will turn out to be a dead end and get discarded when that gets figured out. So, the question is a bit like asking someone "if you are so smart, why aren't you rich," as if being rich is the smartest thing one can do.

nightlight
April 1, 2013 at 04:37 PM PDT
Re the human mind and a computer's artificial intelligence, it all comes down to the activities or procedures of the agent: whether it is the proximate exercise of these by the human mind, or their ultimate exercise by the software writer, the intelligence depends on the human will - volition - which is just as crucial as the other faculties of the human soul, the memory and the understanding. NL seems to have set out to explain intelligence as a product of matter, as his basic assumption, and then set out to convince himself of the veracity of an ever-more imaginative and complex edifice that he proceeded to create. Without reference to human volition, his excogitations can never arrive at the truth of the matter.

Axel
April 1, 2013 at 12:51 PM PDT
Nightlight, if everything is so unfathomably intelligent, from the intelligent elemental building blocks that form the conscious super-intelligent Planckian networks, that form the elementary particles, all the way up, why is it that the universe is so unintelligent and lifeless? How can it not be intelligent? How can this self-organizing, self-learning super-intelligence present itself as, e.g., the obtuse planet Mars?

Box
April 1, 2013 at 04:38 AM PDT
Here's an interesting quote from the lecture notes of Scott Aaronson:

Lecture 11: Decoherence and Hidden Variables - Scott Aaronson
Excerpt: Look, we all have fun ridiculing the creationists who think the world sprang into existence on October 23, 4004 BC at 9AM (presumably Babylonian time), with the fossils already in the ground, light from distant stars heading toward us, etc. But if we accept the usual picture of quantum mechanics, then in a certain sense the situation is far worse: the world (as you experience it) might as well not have existed 10^-43 seconds ago!
http://www.scottaaronson.com/democritus/lec11.html

bornagain77
April 1, 2013 at 03:29 AM PDT
NL, your entire argument against quantum non-locality fails for the simple reason that the entire universe was brought into being non-locally (i.e. by a cause beyond space and time, matter and energy). For you to argue against quantum non-locality when the entire universe originated in such fashion is 'not even wrong', to put it mildly! :)

bornagain77
March 31, 2013 at 07:10 PM PDT
"Creating a universe would seem to require an intelligence that is external and causally prior to the universe" nightlight
Nope, that doesn’t follow.
Of course it follows. A universe cannot create itself. In order to do that, it would have to exist before it existed, which is absurd.
As explained in #19 and #35, with adaptable networks you can have a form of intelligence which is additive, i.e. you start with relatively ‘dumb’ elements (nodes & links), using simple rules to change their states and modify links (unsupervised learning), which would no more of cost in assumptions than regular physical postulates.
Even the most optimistically conceived process of additive intelligence cannot serve as an ex nihilo creator or facilitate retroactive causation.
I think that this type of computational notion of intelligent agency would have served ID a lot better than the scientifically undefined ‘mind’ or other concepts that don’t have counterparts in natural science.
ID methodology does not posit or make provisions for a scientifically undefined mind.
Natural science has no a priori problem with having intelligent agency as an element.
Natural science, as defined by the National Center for Science Education, does have a problem with intelligent agency as an explanatory element in biology, as do many other influential agencies. This is a problem.

StephenB
March 31, 2013 at 06:57 PM PDT
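nightlight's claim, quoted in the exchange above, is that 'dumb' nodes and links plus simple unsupervised update rules can yield a form of additive intelligence. A minimal sketch of one such simple rule follows: a textbook Hebbian-style weight update, chosen here only as an illustration; the rule and all the names in it are assumptions for the sketch, not nightlight's actual model.

import numpy as np

# Minimal sketch of an unsupervised "simple rule" for links between dumb nodes:
# a Hebbian update strengthens a link whenever the two nodes it connects are
# active together. Standard textbook material, not nightlight's specific model.

rng = np.random.default_rng(0)
n_nodes = 8
weights = np.zeros((n_nodes, n_nodes))      # link strengths, start "dumb"
learning_rate = 0.1

for _ in range(1000):
    # Background noise plus one of two groups of 4 nodes firing together.
    x = (rng.random(n_nodes) < 0.2).astype(float)
    group = rng.integers(2)
    x[group * 4:(group + 1) * 4] = 1.0         # co-activate one group
    weights += learning_rate * np.outer(x, x)  # "fire together, wire together"
    np.fill_diagonal(weights, 0.0)

# With no supervision, within-group links end up roughly twice as strong as
# between-group links, i.e. the correlation structure has been "learned".
print(f"mean within-group weight:  {weights[:4, :4].mean():.1f}")
print(f"mean between-group weight: {weights[:4, 4:].mean():.1f}")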
bornagain77 #147: "I guess that is why NL went after Quantum Non-locality so hard (Bell's theorem violations), since it undermines his entire framework (though his framework is shaky from many different angles anyway)."

Not at all; I knew the Bell inequalities "violation" was a dead end long before I ever heard of neural networks or of Planck scale pregeometry models. Now that you brought that up, let me 'splain a bit how all that went.

After reading hundreds of papers and dozens of books for a master's thesis on "Quantum Paradoxes" (that was in the old country), I was more perplexed about the problems than when I started, when I knew only the QM textbook material. Then I 'came to America', the land of milk and honey. After grad school at Brown (where I worked on problems of quantum field theory and quantum gravity, doing my best to forget everything about the perplexing "quantum paradoxes"), I went to work in industry and got a chance to get into a real-world quantum optics lab (a clean-room instrumentation company), where they do exactly the type of coincidence experiments on photons that had supposedly "proven" (modulo loopholes) violations of Bell inequalities (BI). That's when it struck me that, with all the massive reading and theorizing, all I knew about it was completely wrong. It basically comes down to what becomes instantly obvious in the real-world lab -- the origin of the apparent QM non-locality (as implied by the BI "violations") is the explicitly non-local measurement procedure. Namely, to simultaneously measure the 2 spins (or polarizations, for photons) of two photons A and B as a pair, the actual real lab procedure gets the result on photon A (clicks on 2 detectors, +1 and/or -1), then it accepts or rejects the result obtained on B based on the result obtained on the remote photon A, leaving the filtered pair results as the final pair event counts. Yet the assumption behind the BI derivation is that the two measurements on A and B are local and completely independent from each other.

As an illustration, imagine The Master claiming telepathic powers by arranging a procedure like this: the 'sender' writes down the number 1 or 2 he is thinking of; the Master, who is the 'receiver' in the other room, writes down his guess, 1 or 2. All good and fine so far. Then the Master gets the sender's slip, puts it face up next to his, quickly glances down, and after a moment of meditation to consult with higher powers, declares the judgment of the higher powers: 'experiment is valid' (results count) or 'experiment is invalid' (result is discarded). I see. Yep, I am definitely going to invest in the Master's wireless telecommunication company that needs no electric power or data centers to work (the analogue of quantum computing). But then some doubting Thomas starts challenging the Master's claim, pointing out the suspicious glance at the sender's slip. The Master dismisses it: oh, that's just an innocent loophole, a stopgap measure until we develop a more ideal coupling channel with the higher powers. The current imperfect coupling requires that both slips must sit there face up for a second. It would be absurd to imagine that the improved coupling would yield worse results, when even the current imperfect coupling already demonstrates the immense power of this transmission technology.

That's precisely the kind of verbal weaving and weaseling used by the 'quantum magicians' to dismiss the half-century-long, uninterrupted chain of failures to obtain the "loophole free" BI violations (the reasoning about the improved technology and the absurdity of doubt is literally from John Bell's paper, only translated from physics jargon to the Master's experiment). This is exactly what struck me in the real-world quantum optics lab, where it dawned on me that somehow, through all that reading and long discussions with 3 professors, I was, as it were, kept unaware of the Master's 'quick glance' over the other slip before the meditation. It was like watching a stage magician from behind the curtain and slapping my forehead: oh, that's how he does it. That little insignificant bit of allegedly mere experimental trivia was just glossed over, somehow never reaching my consciousness. Whatever one may think of Zeilinger and the rest of the 'quantum magic' brotherhood, you can't but admire the art of verbal misdirection they have honed to absolute perfection over the decades. You can watch it a hundred times from a foot away, and it will still dupe you every single time. That's how good they are.

As I got deeper into quantum optics, I found that there is another measurement theory (MT) that quantum opticians use, developed in 1964-5 by Roy Glauber [1], based on the quantum electrodynamics (QED) model of photodetection. The MT of regular quantum mechanics (QM), as taught to students and as used by Bell for his theorem, was developed in the 1930s by von Neumann, Bohr, Heisenberg, Schrodinger and others. Quantum opticians use the newer one, Glauber's QED MT, because QED is a deeper theory of photons than QM, and it tells them exactly what they should get and how to get it. Glauber's MT is pretty heavy reading, though, with a 60+ page proof of the main result [1], and I have yet to find a physicist working on BI violations or quantum computing, quantum crypto, etc., who has ever heard of it, let alone gone through the proof. Even the quantum opticians, who use that theory in daily practice, learn it in a simplified, engineering form, like a cooking recipe, without bothering with proofs. Having a particularly strong motivation for the matter because of the previous thesis subject and the resulting perplexity, I took the trouble to work my way through the long and dense primary source [1] (using up about a couple of weeks of evenings and weekends, in free time from a day job).

The critical difference between QED MT and QM MT is that QED MT prescribes, as a result of the QED model of photon measurements, precisely the above non-local procedure for extracting the results on a pair of photons (where you accept or reject results for the pair based on inspection of both results from the remote photodetectors, i.e. via the Master's telepathic scheme, with the 'quick glance' step mandated by the theory). In contrast, QM (which doesn't have a detailed theory of photodetection or of quantized EM fields) merely postulates the existence of an "ideal apparatus" for the pair measurement (analogous to the Master's ideal 'coupling channel'), in which the results on A and B are taken locally and independently from each other (i.e. without knowing the remote result before making the pair decision; or, without the Master's glance at the other slip). This "ideal apparatus" is allegedly just around the corner, as soon as the technology of photodetectors catches up with the 1930s QM measurement "theory".

The 1960s Glauber MT implies that such an apparatus can't exist for photons, as a matter of the more fundamental theory (QED). Yet it is precisely this imagined "ideal apparatus" that allows Bell to derive his inequalities and the prediction that QM violates them -- on the "ideal apparatus", though. Hence, the situation is "interesting", to put it politely. On one hand, you have experiments which don't violate BI, and you have a deeper and newer theory (Glauber's QED measurement theory) which says: what those experiments got is exactly what QED predicts they ought to get (non-violation). On the other hand, you have a weaker (shallower and older) theory of measurement, QM MT, which says you should get BI violations, but the experiments are still imperfect, and in the next few years we will get it to work "loophole free" for sure, this time, just one more round of funding and we're there. Knowing all this, I had no problem or conflicts when later these pregeometric Planck scale models came out (the last 10 years, mostly), since I knew with absolute certainty I need not pay the slightest attention to the Bell inequalities constraint; they are a work of fiction. Folks like 't Hooft, Wolfram, Penrose and others who came up with those pregeometry models, while not familiar with much of the above (especially with Glauber's work), simply overrode the apparent conflict by sheer force of intuition, which told them to just go ahead, this is much too interesting and promising to stop pursuing only because of Bell's theorem, which is kind of weird anyway ('t Hooft thinks it's irrelevant for his pregeometry).

--------- refs -------
1. R. J. Glauber, "Optical coherence and photon statistics", in Quantum Optics and Electronics, ed. C. DeWitt-Morette, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63-185. (paywalled pdf, sorry, I have only a hard copy)

nightlight
March 31, 2013 at 06:57 PM PDT
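For readers who want to see what the Bell/CHSH inequality disputed in the comment above actually constrains, here is a minimal local-hidden-variable simulation. It is standard textbook material showing the classical bound |S| <= 2; it does not implement the commenter's Glauber/QED measurement-theory argument or any detector post-selection, and the specific local rule used is just one illustrative choice.

import numpy as np

# Minimal local-hidden-variable CHSH sketch. Each photon pair carries one shared
# random angle lambda; each side's +/-1 result depends only on its own analyzer
# setting and lambda, and every pair is counted (no post-selection). Such models
# respect the classical CHSH bound |S| <= 2, while quantum mechanics predicts up
# to 2*sqrt(2) ~ 2.83 for entangled photons.

rng = np.random.default_rng(1)
lam = rng.uniform(0.0, np.pi, 100_000)      # shared hidden variable, one per pair

def outcome(setting, lam):
    # Deterministic, purely local rule for a +/-1 detection result.
    return np.where(np.cos(2.0 * (setting - lam)) >= 0.0, 1.0, -1.0)

def E(a, b):
    # Correlation of the two sides' results at analyzer settings a and b.
    return float(np.mean(outcome(a, lam) * outcome(b, lam)))

def S(a1, a2, b1, b2):
    return E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)

# The local model can reach the classical bound of 2 ...
print("S =", round(S(0.0, 0.0, 0.0, np.pi / 2), 3))
# ... but a search over analyzer settings never pushes it beyond 2.
best = max(abs(S(*rng.uniform(0.0, np.pi, 4))) for _ in range(500))
print("largest |S| found over 500 random setting choices:", round(best, 3))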
Nightlight (152), thank you for your informative response. I intend to get back to you now that I have a clearer idea of what you are aiming for. For now I would like to repeat that one cannot explain the whole from its parts. What we see in organisms is top-down organization from the level of the whole organism. We cannot reconstruct the pattern at any level of activity by starting from the parts and interactions at that level. There are always organizing principles that must be seen working from a larger whole into the parts. I can provide you with many examples, but instead I will give you just two.
- We cannot explain an organism's phenotype from its DNA. A monarch butterfly and its larva, for example, have totally distinct body plans originating from the same DNA.
- The whole (the form) can also mold multiple sets of DNA into one organism: chimerism.

Box
March 31, 2013 at 05:17 PM PDT
Philip, I was pretty shocked to read that even the Pontifical Academy of Sciences isn't a safe haven for scientists informed by their theistic, Christian faith/knowledge, the basis of which is now amply proven on a number of grounds. In fact, the Catholic church still seems to be adversely affected by the part-scandal, part atheist-propaganda coup of Galileo's trial. In the Gospel days, in some significant regards, the Christian faith had a different meaning to the faith of later centuries. Notably, commitment to an indigent, homeless, itinerant preacher and his motley band of apostles, all supported by a group of women followers - in right-wing, economic parlance, 'freeloaders', 'panhandlers', 'welfare-scroungers', 'stumble-bums', etc. It meant more or less openly committing oneself to Christ and his Gospel, in the teeth of the threat of banishment from the Synagogue - no small thing in a small, theocratic society. However, in reality, to some extent, the demands of faith obviously changed for most people subsequently, becoming more a matter of credence, even convention than commitment, as to borrow Francis's words, as it became increasingly self-referential and decreasingly evangelical. This led to a bizarre posture of the Church comically adverted to by the late Malcolm Muggeridge, who remarked on the way in which the Catholic church seemed to want to do everything in its power to downplay, almost to deny, the supernatural, when the Church is, in fact, a highly supernatural sacramental phenomenon, notwithstanding the egregiously scandalous periods of its lengthy, institutional history. Muggeridge claimed that if priests stood at the door of their churches on a Sunday morning with whips in their hands, menacing churchgoers, they could scarcely be more likely to drive people away from Christianity. (Perhaps, in reaction, the Charismatic movement was started, which imo rather tends to encourage a more superficial interest in the faith - although far better than denying its supernatural ethos and ambience, of course). Now, it seems to me, the Church really needs to 'get a grip' and go after atheism, bald-headed with the now manifestly indisputable scientific underpinning, not only of theism, but of the Christian faith, itself. Such articles as I have read concerning the ostension of the Holy Shroud of Turin, and quotes of churchmen in its regard, are still, downplaying, indeed, marginalising the confluence of the supernatural with the very latest molecular physics, chemistry, etc, as delineated in that YouTube video on the Shroud, and the evident signs of an event horizon and singularity having manifested. Much should be made of the innumerable indicators of the genuineness of the Shroud, including pollen only found in the Jerusalem area. Also, emphatically, the Sudarium of Oviedo, the history of which was, I believe, recorded in an unbroken fashion from the time of Christ, and the way in which it matches the blood-stains on the Shroud. The one radiocarbon testing carried out, indicating it was fraudulent, since dating from no earlier than the Middle Ages, was itself apparently fraudulent; but the matter for incredulous astonishment to all but us UDers/IDers, is that that radiocarbon testing seemingly renders all the other confirmatory evidence of no value, their verification presumably being of an inferior nature. One Catholic author even referred to appeals to scientific proof as being 'dangerous'! 
Of course, it is understandable that one would have to be absolutely certain of the science, in the normal applicable terms, to adduce it emphatically as proof of Christ's life, death and what looks uncommonly like some kind of scientifically identifiable resurrection. Of course, it remains of paramount importance to emphasise that our Christian faith cannot and does not rely on such scientific findings. Nevertheless, it seems to me that in his own day, when he walked this earth, with rare exceptions (such as raising Lazarus), Christ did not wish to convince everyone of his infinite, divine power, since his appeal would then have been, and would still be, a vapid and meretricious appeal to the head instead of the heart: to the worldly intelligence, instead of the spiritual wisdom of the heart. It also seems to me that we are approaching a new kind of faith paradigm (we should have done so, in fact, some time ago) in which science should be used to its fullest extent as a sting, for which it seems, in part, to have been intended. There would be many people today who are not power-lovers, but who would profit greatly from such encouragement to believe, in the teeth of the ubiquitous, media-driven materialists' propaganda, which seeks to keep our faith separate from the 'certainties(!) of scientism' - the rationalists' reckless perversion of modern scientific understanding, in order to disparage Christianity. As for the Pontifical Academy of Sciences, Francis needs to go through it, purging it of its aggressively atheist members, like Christ driving out the money-lenders with the whip he so carefully plaited.

Axel
March 31, 2013 at 03:51 PM PDT
Box #142: "So quarks, photons and such designed Planckian networks? Did they use their intelligence to do that?"

It's the other way around. The Planck scale is 10^-35 m, while our "elementary" particles are at the 10^-15 m scale. Our "elementary" particles are analogous to gliders in Conway's Game of Life. The Planckian networks would correspond to the computer running that program, hence they are computing our physics (along with biology and up). Check, for example, Wolfram's NKS or other similar network-based pregeometry models. While there isn't presently a single unifying pregeometry model of this type which could reproduce the whole of physics, there are isolated models for each of the major equations/laws of physics (e.g. Maxwell, Schrodinger, Dirac, Lorentz transformations). Although still fragmented, such models provide interesting clues as to what might be going on at that level. If one then considers the adjacent open questions, such as the fine-tuning of physical laws and the origin of life problems, both requiring enormously powerful computations to navigate the whole system at the razor edge above oblivion, the augmentation of the Planck scale networks of the physics models into neural networks (Planckian networks) seems the most natural hypothesis. The computing power available via such augmentation is 10^80 times greater than the best computing technology we could ever design using our elementary particles as building blocks. If you then add to the list of clues the ID implications about the massive computing power needed to compute the molecular nano-technology behind life and its evolution, the Planckian networks click in perfectly again, providing exactly what is missing. That's three birds with one stone, at least. One could hardly imagine a stronger hint as to how it all must be put together.

"They don't need a brain because they just happen to be conscious right?"

The Planckian networks are the "brain", a distributed self-programming computer, operating as a goal-directed anticipatory system via internal modeling algorithms. These are all traits and capabilities available to unsupervised neural networks (see post #116 for a bit more detail). The key element of intelligence is built into (the front-loading aspect) the elemental building blocks that form the Planckian networks. As explained in posts #58 and #109, these building blocks have only two states, +1 = (reward, happiness, joy, pleasure, love...) and -1 = (punishment, unhappiness, misery, pain, hate...). The descriptive terms refer to some of the 'mind stuff' manifestations of such states, as they get amplified by the hierarchy of networks up to the human level. One might say the 'mind stuff' is driving the actions of the networks via an optimization seeking to maximize the sums of +1s and -1s, or, in human language, the pursuit of happiness is the go of it.

"You say it all combines, but exactly that is a huge problem for panpsychism: 'how do the alleged experiences of fundamental physical entities such as quarks and photons combine to yield human conscious experience'. One cannot explain the whole from its parts."

William James's "composition problem" of panpsychism is not a problem in this model, as explained in posts #58 and #109. In short, when your pattern recognizers for "red" are in a "happy" state (the sums of +1s within the recognizer dominate), and recognizers for "round" are in a happy state, then, if there is a third recognizer "connected" to these two, it goes into a happy state, which is experienced as a "red ball" by you. Note that "connected" here is meant in a generalized sense, i.e. including not only neurons connected via axons and dendrites, but also wirelessly, via resonant superposition of electromagnetic fields without any direct cellular contact, which works as well, provided the distant neurons oscillate at the same frequency.

nightlight
March 31, 2013 at 02:49 PM PDT
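The comment above leans on the analogy of elementary particles as gliders in Conway's Game of Life. For readers unfamiliar with that cellular automaton, here is a minimal sketch using its standard rules; nothing in it is specific to the pregeometry models under discussion, and the grid size and step count are arbitrary.

import numpy as np

# Minimal Conway's Game of Life with a single glider, the analogy used above:
# the glider is a persistent moving pattern, while the "real" computation is the
# fixed local update rule applied everywhere on the grid.

def step(grid: np.ndarray) -> np.ndarray:
    # Count the 8 neighbours of every cell (toroidal wrap-around).
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # Standard rules: a live cell survives with 2-3 neighbours; a dead cell is born with 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

grid = np.zeros((10, 10), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # canonical glider pattern
    grid[y, x] = 1

def show(g, label):
    print(label)
    print("\n".join("".join("#" if c else "." for c in row) for row in g))

show(grid, "start:")
for _ in range(4):
    grid = step(grid)
show(grid, "after 4 steps (the glider has moved one cell down and one cell right):")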
But Phinehas, I do find one thing correct in the 'computational universe' model of Wolfram: the universe we measure (consciously observe) is 'information theoretic' at its base:
"It from bit symbolizes the idea that every item of the physical world has at bottom - at a very deep bottom, in most instances - an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that things physical are information-theoretic in origin." John Archibald Wheeler Why the Quantum? It from Bit? A Participatory Universe? Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: "In the beginning was the Word." Anton Zeilinger - a leading expert in quantum teleportation:
But alas, as Anton Zeilinger has pointed out, Theists have, long before Wolfram was even born, been here all along:
"For the scientist who has lived by his faith in the power of reason, the story ends like a bad dream. He has scaled the mountain of ignorance; he is about to conquer the highest peak; as he pulls himself over the final rock, he is greeted by a band of theologians who have been sitting there for centuries." - Robert Jastrow
bornagain77
March 31, 2013 at 01:54 PM PDT
What does the term "measurement" mean in quantum mechanics?
"Measurement" or "observation" in a quantum mechanics context are really just other ways of saying that the observer is interacting with the quantum system and measuring the result in toto.
http://boards.straightdope.com/sdmb/showthread.php?t=597846

bornagain77
March 31, 2013 at 01:43 PM PDT
Of note to the 'randomness' of free will conscious observation being different from the 'external entropic randomness' of the universe:

In the beginning was the bit - New Scientist
Excerpt: Zeilinger's principle leads to the intrinsic randomness found in the quantum world. Consider the spin of an electron. Say it is measured along a vertical axis (call it the z axis) and found to be pointing up. Because one bit of information has been used to make that statement, no more information can be carried by the electron's spin. Consequently, no information is available to predict the amounts of spin in the two horizontal directions (x and y axes), so they are of necessity entirely random. If you then measure the spin in one of these directions, there is an equal chance of its pointing right or left, forward or back. This fundamental randomness is what we call Heisenberg's uncertainty principle.
http://www.quantum.at/fileadmin/links/newscientist/bit.html

bornagain77
March 31, 2013 at 01:41 PM PDT
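The New Scientist excerpt above makes a concrete, checkable claim: once a spin is found 'up' along z, a subsequent measurement along x comes out 50/50 random. A minimal sketch of that prediction, using only the Born rule (the numbers are what is being illustrated, not any interpretation):

import numpy as np

# Born rule check of the quoted example: for a spin prepared "up" along z,
# P(up along x) = |<+x|+z>|^2 = 1/2, i.e. the x-axis result is purely random.

rng = np.random.default_rng(0)

up_z = np.array([1.0, 0.0])                  # spin prepared "up" along z
up_x = np.array([1.0, 1.0]) / np.sqrt(2.0)   # "up along x" eigenstate

p_up_x = abs(np.dot(up_x, up_z)) ** 2        # Born rule probability = 0.5
samples = rng.random(100_000) < p_up_x       # simulated x-axis measurements

print(f"P(up along x) from the Born rule: {p_up_x:.3f}")
print(f"fraction 'up' in 100,000 simulated measurements: {samples.mean():.3f}")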
'(Caution: there is a foul-mouthed neo-Darwinian dogmatist character posting there, a robot named Thorton, who merely plays back the official ND-E mantras which are triggered by the first few keywords in a post he recognizes, without ever reading or understanding the arguments being made; trying to discuss with him is a waste of time.)'

Tell Joe about him, nightlight... then duck, crouch into a ball, covering your face and head as best you can.

Axel
March 31, 2013 at 01:34 PM PDT
Phinehas at 89:
Hey BA, have you looked much into Wolfram’s new kind of science? I’d be interested in your take on it. I only know enough to find it very intriguing, but not enough to seriously evaluate it as a truth claim.
Sorry I have not answered you sooner. Basically, I have not looked too deeply into his work but have only heard of his work in passing from a criticism I read by Scott Aaronson
Quantum Computing Promises New Insights, Not Just Supermachines - December 5, 2011 Excerpt: And yet, even though useful quantum computers might still be decades away, many of their payoffs are already arriving. For example, the mere possibility of quantum computers has all but overthrown a conception of the universe that scientists like Stephen Wolfram have championed. That conception holds that, as in the “Matrix” movies, the universe itself is basically a giant computer, twiddling an array of 1’s and 0’s in essentially the same way any desktop PC does. Quantum computing has challenged that vision by showing that if “the universe is a computer,” then even at a hard-nosed theoretical level, it’s a vastly more powerful kind of computer than any yet constructed by humankind. Indeed, the only ways to evade that conclusion seem even crazier than quantum computing itself: One would have to overturn quantum mechanics, or else find a fast way to simulate quantum mechanics using today’s computers. http://www.nytimes.com/2011/12/06/science/scott-aaronson-quantum-computing-promises-new-insights.html?pagewanted=all&_r=0
And as I pointed out yesterday, we already have very good evidence that quantum computation is already being accomplished in molecular biology for 'traveling salesman' problems: https://uncommondescent.com/news/from-scitechdaily-study-describes-a-biological-transistor-for-computing-within-living-cells/#comment-451310 But one point I did not draw out yesterday, in the traveling salesman example, is that there are limits to the problems that even quantum computation can solve in molecular biology:
The Limits of Quantum Computers - Scott Aaronson - 2007 Excerpt: In the popular imagination, quantum computers would be almost magical devices, able to “solve impossible problems in an instant” by trying exponentially many solutions in parallel. In this talk, I’ll describe four results in quantum computing theory that directly challenge this view.,,, Second I’ll show that in the “black box” or “oracle” model that we know how to analyze, quantum computers could not solve NP-complete problems in polynomial time, even with the help of nonuniform “quantum advice states”,,, http://www.springerlink.com/content/0662222330115207/
And protein folding is found to be an 'intractable NP-complete problem' by several different methods. Thus protein folding will not be able to take advantage of any advances in speed that quantum computation may offer to other problems that can be solved in polynomial time:
Combinatorial Algorithms for Protein Folding in Lattice Models: A Survey of Mathematical Results – 2009 Excerpt: Protein Folding: Computational Complexity 4.1 NP-completeness: from 10^300 to 2 Amino Acid Types 4.2 NP-completeness: Protein Folding in Ad-Hoc Models 4.3 NP-completeness: Protein Folding in the HP-Model http://www.cs.brown.edu/~sorin/pdfs/pfoldingsurvey.pdf
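To make the cost of NP-completeness concrete, here is a small editorial sketch (not from the papers above) of why brute-force search is hopeless for such problems: an exact traveling-salesman search must examine (n-1)!/2 tours, a count that outruns any polynomial almost immediately. The city coordinates are made up for illustration.

```python
# Sketch: exhaustive search for an NP-hard problem (traveling salesman).
# Fine for 6 cities; the tour count explodes factorially as n grows.
from itertools import permutations
from math import dist, factorial

cities = [(0, 0), (2, 1), (5, 2), (6, 6), (1, 5), (3, 3)]  # toy instance

def tour_length(order):
    # Length of the closed tour visiting the cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

best = min(permutations(range(len(cities))), key=tour_length)
print("best tour:", best, "length:", round(tour_length(best), 3))

for n in (10, 20, 30, 60):
    print(f"{n} cities -> about {factorial(n - 1) // 2:.3e} distinct tours")
```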
Thus, even though NL rejects quantum computation, even a 'naturalistic' view of quantum computation is prevented from finding a 'bottom-up' path to increased functional complexity at the protein level. Scott Aaronson was more specific in his critique of Wolfram here:
Wolfram's speculations of a direction towards a fundamental theory of physics have been criticized as vague and obsolete. Scott Aaronson, Assistant Professor of Electrical Engineering and Computer Science at MIT, also claims that Wolfram's methods cannot be compatible with both special relativity and Bell's theorem violations, which conflict with the observed results of Bell test experiments.[23] http://en.wikipedia.org/wiki/A_New_Kind_of_Science#The_fundamental_theory_.28NKS_Chapter_9.29
I guess that is why NL went after quantum non-locality (Bell's theorem violations) so hard, since it undermines his entire framework (though his framework is shaky from many different angles anyway). Yet contrary to the narrative NL has been promoting, that quantum non-locality has been a failure for 50 years, the plain fact of the matter is that quantum non-locality has been making steady progress towards 100% verification, whereas those who oppose it for philosophical reasons, like NL and Einstein before him, have been in steady retreat for 50 years (especially over the last decade or so).
Quantum Entanglement – The Failure Of Local Realism - Materialism - Alain Aspect - video http://www.metacafe.com/w/4744145 Quantum Measurements: Common Sense Is Not Enough, Physicists Show - July 2009 Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables. http://www.sciencedaily.com/releases/2009/07/090722142824.htm
(of note: hidden variables were postulated to remove the need for 'spooky' forces, as Einstein termed them — forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light.) In fact the foundation of quantum mechanics within science is now so solid that researchers were able to bring forth this following proof from quantum entanglement experiments;
An experimental test of all theories with predictive power beyond quantum theory – May 2011 Excerpt: Hence, we can immediately refute any already considered or yet-to-be-proposed alternative model with more predictive power than this. (Quantum Theory) http://arxiv.org/pdf/1105.0133.pdf
Moreover, Quantum Mechanics has now been extended to falsify local realism without even using quantum entanglement to do it:
‘Quantum Magic’ Without Any ‘Spooky Action at a Distance’ – June 2011 Excerpt: A team of researchers led by Anton Zeilinger at the University of Vienna and the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences used a system which does not allow for entanglement, and still found results which cannot be interpreted classically. http://www.sciencedaily.com/releases/2011/06/110624111942.htm
bornagain77
March 31, 2013 at 01:33 PM PDT
'I don’t see how computing targets and algorithmic goals can exist anywhere except in some form of consciousness, nor can I see how there is a “bottom up” pathway to such machinery regardless of what label one puts on that which is driving the materials and processes.' Yes, William, the question of will, volition, remains unanswered, even unaddressed, by nightlight, as far, as I can understand your drift nightlife. Am I correct in thinking that you state that one cannot yet identify the prime mover, since it is the vanishingly small nucleus of a Russian doll-kind of superposition of causes? But, one day....?Axel
March 31, 2013 at 01:30 PM PDT
NL: I have a bit of prep work to get on with for this evening, so I simply note that my usage of "algorithm" happens to be standard; where BTW a nodes-arcs framework notoriously describes such, per flowcharts ancient and modern [i.e. disguised forms in UML and methods/functions in OO languages -- I see If_else just got built into a Java version]. Also, attributing designing intelligence to biochem rxn sets is a bit odd, and going to particle-quantum networks is even odder. Please cf. Leibniz [IIRC] and the analogy of the Mill. KFkairosfocus
March 31, 2013 at 12:03 PM PDT
Happy Easter everyone, I have just a moment this morning. It seems like nightlight is using Planck Networks as a way to account for biology and all of biology's output (designed objects) by a unifying principle underlying all physical laws. From comment #109
Q2) Do you agree that intelligent agency is a causal phenomenon which can produce objects that are qualitatively distinguishable from the products of chance and necessity, such as those resulting from geological processes? Assuming ‘intelligent agency’ to be the above Planckian networks (which are conscious, super-intelligent system), then per (Q1) answer, that is the creator of real laws (which combine our physical, chemical, biological… laws as some their aspects). Since this agency operates only through these real laws (they are its computation, which is all it does), its actions are the actions of the real laws, hence there is nothing to be distinguished here, it’s one and the same thing.
nightlight, I don't want to misrepresent you so please correct or qualify the above. CheersChance Ratcliff
March 31, 2013 at 11:33 AM PDT
Axel #137: Although it is a hallmark of the atheist's credo that a single human being would be similarly insignificant and inconsequential to God; in the starkest contrast with the Christian tenet that Christ would have accepted his crucifixion for just one, single human being.
This is a different kind of "front loading" than deism, where the initial mover just sets the cogworks into motion and lets go. While the additive form of intelligence is front loaded into the elemental building blocks, in this perspective there is no separation between creation and creator at any point or any place, since the creation is being upheld (computed) in existence continually, from the physical level and up. This relation is analogous to that between a computer (analogue of the creator) running Conway's Game of Life (analogue of the universe), where gliders and other patterns are analogous to our "elementary" particles and larger objects (including us). The computer (which is the creator of this toy universe) upholds it in existence at all moments and at each point of the grid. If the program were to quit its busy work even for a moment, the toy universe would perish instantly. Check for example post #100 on how this phenomenon you brought up of 'god becoming man' can be modeled in this scheme.nightlight
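As an illustration of the analogy in this comment, here is a minimal Conway's Game of Life; the 'toy universe' below persists only because the loop keeps recomputing it, generation after generation. The glider pattern and grid size are arbitrary choices, not anything from the comment itself.

```python
# Minimal Game of Life on a small wrapping grid: the grid exists only as
# long as the program keeps computing the next generation.
def step(live, width, height):
    # Count live neighbours of every cell adjacent to at least one live cell.
    counts = {}
    for (x, y) in live:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    cell = ((x + dx) % width, (y + dy) % height)
                    counts[cell] = counts.get(cell, 0) + 1
    # Standard rules: born with exactly 3 neighbours, survive with 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a glider
for generation in range(8):
    print(f"gen {generation}: {sorted(world)}")
    world = step(world, 10, 10)   # stop this loop and the toy universe stops
```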
March 31, 2013 at 11:16 AM PDT
Nightlight (35): In contrast, “computation” and “algorithms” are scientifically and technologically well accepted concepts which suffice in explaining anything attributed to type of intelligence implied (via ID argument) by the complexity of biological phenomena.
Why do you hold that FSCO/I (or information) does not encompass ‘computation’ and ‘algorithms’? And is it not so that computation and algorithms – like FSCO/I and information – refer to a designer?
Nightlight (35): As well you have a severe blind spot in that it is impossible to account for the origination of these chess playing programs in the first place without reference to an intelligent, conscious, agent(s). Yes, they were designed by intelligent agents, called humans. Just as humans are designed by other intelligent agents, the cellular biochemical networks.
What is your definition of designing and agents? Cellular biochemical networks designed us?? They are agents like us? Panpsychism is truly remarkable.
Nightlight (35): Your body, including brain, is a ‘galactic scale’ technology, as it were, designed and constructed by these intelligent networks, who are the unrivalled masters of molecular engineering (human molecular biology and biochemistry are a child’s babble compared to the knowledge, understanding and techniques of these magicians in that realm).
They are masters who are intelligent, who construct, design, understand, and have technique and knowledge … How you panpsychists ascribe overview (needed for planning and design) to these networks, photons and quarks is especially beyond my comprehension. Are you sure you are not speaking metaphorically?
Nightlight (35): In turn, the biochemical networks were designed and built by even smaller and much quicker intelligent agents, Planckian networks, which are computing the physics and chemistry of these networks as their large scale technologies (our physics and chemistry are coarse grained approximation of the real laws being computed).
More intelligent agents. I should not be surprised, because that is panpsychism. Why would they work together? Why doesn't the whole thing just fall apart? What force holds everything together precisely for a lifetime?
Nightlight (35): Since in panpsychism consciousness is a fundamental attribute of elemental entities at the ground level, it's the same consciousness (answering "what is it like to be such and such entity") which combines into and permeates all levels, from elemental Planckian entities through us, and then through all our creations, including social organisms.
So quarks, photons and such designed Planckian networks? Did they use their intelligence to do that? They don’t need a brain because they just happen to be conscious right? You say it all combines, but exactly that is a huge problem for panpsychism: ‘how do the alleged experiences of fundamental physical entities such as quarks and photons combine to yield human conscious experience’. One cannot explain the whole from its parts.Box
March 31, 2013 at 10:41 AM PDT
William J Murray #134: I don't really see how any of what you are saying is threatening in any way to any other ID position.
It is the ID position, but put the way a theoretical physicist would put it, not as biologists or biochemists or molecular biologists are doing it (as physics grad students we looked down upon those as soft and fuzzy fields for lesser minds; I've grown up a bit since then, though).
It doesn't claim - or even imply - that god doesn't exist or that humans do not have autonomous free will.
One form of 'free will' within the scheme is as a tie-breaking mechanism within the internal model of the anticipatory system -- after evaluating prospective actions by the ego actor (the counterpart of the self within the model) during the what-if decision game running in the model space, if the evaluation of multiple choices is a tossup, then, since some action has to be taken, one is "willed" as the pick over the alternatives. Another form of free will arises when we realize that the evaluations in model space are recursive (i.e. the model space is fractal), modeling the other agents and their internal models playing their what-if game inside our model-actor of these agents, etc. In such multi-agent cases, the evaluation is highly sensitive to the stopping place: e.g. what seems best at stage 1 of the evaluation may become inferior at stage 2, after we account for the fact that the other agent (playing in the model with the ego actor) has realized it too, hence his action may not be what was assumed at stage 1, thus another choice may become better at stage 2. Hence, the choice of the stopping place while navigating through the fractal space of models nested within models is also an act of free will which affects the decision. The first form can also be seen as "free willing" the stopping place, since the alternative in the case of a tie-break, which is no action until further, finer evaluations are complete, is evaluated as inferior to doing something now.
It is the ultimate "front loading" postulate (or "foundation loading"), with the fundamental algorithms (pattern recognition and reaction development) built into the substrate of the universe (if I'm understanding you correctly).
Yep, that's exactly what it is. Just as with panpsychism, where you need some elemental 'mind stuff' at the ground level to get anything of that kind at the higher levels, here, in order to get intelligence at the higher levels, you need elemental intelligence built into the objects at the ground level. It is the ontological form of the "no free lunch" results about search algorithms. The key requirement was to find the simplest elements which have additive intelligence, and adaptable networks nicely fit that requirement (plus they resonate well with many other independent clues, including models of Planck scale physics). The main strength of the bottom-up approach is that it tackles not just the origin of intelligence guiding biological evolution, but also the origin of life and the fine tuning of the laws of physics and physical constants (for life). Namely, in this picture the "elementary" particles and their laws (physics) are computational technology designed and built by the Planckian networks, the way humans or their computers may design and build technologies which span not just the globe, as they do today, but the solar system and eventually galaxies.
With that picture in mind, the fine tuning of physics for life is as natural and expected as the fact that the cogs of the technologies we build fit together correctly: monitors and keyboards plug into and communicate with PCs, cars fit into the carwash gear, the same electric generators power a vast spectrum of motors, computers and other devices,... since they are all designed to work together and combine into the next layer of technologies at a larger scale.
The interesting question is what is this whole contraption (the universe) trying to do, what is it building? Then, what for, why all the trouble? A little clue as to what it is doing comes from inspecting how these networks work at our human level. Each of us belongs to multitudes of adaptable networks simultaneously, such as economic, cultural, political, ethnic, national, scientific, linguistic... Hence these larger scale adaptable networks, which are themselves intelligent agencies, each in pursuit of its own happiness, as it were (optimization of their net [rewards - punishments] score via internal modeling, anticipation, etc), are permeating each other as they unfold, each affecting the same cogs (human individuals), each tugging them their way. But these larger scale networks are shaped in the image of the lower scale intelligent networks building them, such as the cellular biochemical networks, which in turn are built in the shape of the underlying Planckian networks which built them.
The picture that this forms is like a gigantic multi-dimensional and multi-level crossword puzzle, where the smallest cells contain letters, the next larger cells contain words, then sentences, then paragraphs, then chapters, then volumes, then subject areas, then libraries,... This crossword puzzle is solving itself simultaneously in all dimensions and on all levels of cells, seeking to harmonize letters so they make meaningful words in each dimension, then to harmonize multiple words so they make meaningful sentences in each dimension, then paragraphs ... across the whole gigantic hypertorus all at once. As the lower level cells harmonize and settle into solved, harmonious form, the main action, the edge between chaos and order, shifts to the next scale to be worked out. The higher scales must operate without breaking the solved cells of the previous layers; e.g. we have to operate without breaking physical, chemical and biological laws, which were solved into a harmonious state in the previous phases by networks which are computationally far more powerful than ourselves (thus having superior wisdom to our own). Now the hotspot of action is chiefly in our court, to compute our little part and harmonize our level of the puzzle. Once completed, the razor edge of innovation shoots up to higher scales, thinner and sharper than ever before, leaving us behind, frozen in a perfect crystalline harmony and a permanent bliss of an electron.nightlight
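The 'tie-breaking' account of free will sketched earlier in this comment can be put in toy code; the scoring model, action names, and tolerance below are invented purely for illustration and carry no claim about how such an anticipatory system is actually built.

```python
# Toy sketch of tie-breaking in an internal "what-if" model: candidate
# actions are scored; if the top scores are effectively tied, some action
# must still be taken, and that pick is the "willed" choice.
import random

def evaluate(action, model):
    # Toy anticipatory evaluation: expected reward minus expected cost.
    return model[action]["reward"] - model[action]["cost"]

def decide(actions, model, tolerance=1e-6):
    scores = {a: evaluate(a, model) for a in actions}
    best = max(scores.values())
    tied = [a for a, s in scores.items() if best - s <= tolerance]
    # Further deliberation cannot settle a genuine tie; a pick is simply made.
    return random.choice(tied), scores

model = {"left":  {"reward": 5.0, "cost": 2.0},
         "right": {"reward": 4.0, "cost": 1.0},
         "wait":  {"reward": 1.0, "cost": 0.5}}

choice, scores = decide(list(model), model)
print("scores:", scores, "-> chosen:", choice)
```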
March 31, 2013 at 10:17 AM PDT
Further notes on 'free will':
Why Quantum Physics (Uncertainty) Ends the Free Will Debate - Michio Kaku - video http://www.youtube.com/watch?v=lFLR5vNKiSw
Moreover, advances in quantum mechanics have shown that 'free will choice' is even affecting the state of 'particles' into the past:
Quantum physics mimics spooky action into the past - April 23, 2012 Excerpt: According to the famous words of Albert Einstein, the effects of quantum entanglement appear as "spooky action at a distance". The recent experiment has gone one remarkable step further. "Within a naïve classical world view, quantum mechanics can even mimic an influence of future actions on past events", says Anton Zeilinger. http://phys.org/news/2012-04-quantum-physics-mimics-spooky-action.html
In other words, if my conscious choices really are merely the result of whatever state the material particles in my brain happened to be in in the past (deterministic), how in blue blazes are my choices instantaneously affecting the state of material particles into the past? Since our free will choices figure so prominently in how reality is actually found to be constructed in our understanding of quantum mechanics, I think a Christian perspective on just how important our choices are in this temporal life, in regards to our eternal destiny, is very fitting:
Is God Good? (Free will and the problem of evil) - video http://www.youtube.com/watch?v=Rfd_1UAjeIA
"There are only two kinds of people in the end: those who say to God, "Thy will be done," and those to whom God says, in the end, "Thy will be done." All that are in Hell, choose it. Without that self-choice there could be no Hell." - C.S. Lewis, The Great Divorce
bornagain77
March 31, 2013 at 08:58 AM PDT
Moreover NL, it seems to me that you are, besides attributing consciousness to computer programs, claiming that computer programs, and specifically the algorithmic information inherent within the programming of a cell, are capable of creating new information. Just as James Shapiro, of 'natural genetic engineering' fame, claims. But you, just like James Shapiro, have ZERO evidence for this conjecture,,,
On Protein Origins, Getting to the Root of Our Disagreement with James Shapiro - Doug Axe - January 2012 Excerpt: I know of many processes that people talk about as though they can do the job of inventing new proteins (and of many papers that have resulted from such talk), but when these ideas are pushed to the point of demonstration, they all seem to retreat into the realm of the theoretical. http://www.evolutionnews.org/2012/01/on_protein_orig055471.html
In fact the best evidence I currently know of to support your position that algorithmic information can generate functional information is the immune system. But even this stays within Dembski's Universal Probability Bound:
Generation of Antibody Diversity is Unlike Darwinian Evolution - microbiologist Don Ewert - November 2010 Excerpt: The evidence from decades of research reveals a complex network of highly regulated processes of gene expression that leave very little to chance, but permit the generation of receptor diversity without damaging the function of the immunoglobulin protein or doing damage to other sites in the genome. http://www.evolutionnews.org/2010/11/response_to_edward_max_on_talk040661.html
bornagain77
March 31, 2013 at 08:51 AM PDT
NL:
Neo-Darwinian Evolution theory (ND-E = RM + NS is the primary mechanism of evolution), carries a key parasitic element of this type, the attribute “random” in “random mutation” (RM) — that element is algorithmically ineffective since it doesn’t produce any falsifiable statement that can’t be produced by replacing “random” with “intelligently guided” (i.e. computed by an anticipatory/goal driven algorithm). In this case, the parasitic agenda carried by the gratuitous “randomness” attribute is atheism.
Save for the fact that we actually can trace down the source of randomness in this universe,,,
It is interesting to note that if one wants to build a better random number generator for a computer program then a better source of entropy is required to be found to drive the increased randomness:
Cryptographically secure pseudorandom number generator Excerpt: From an information theoretic point of view, the amount of randomness, the entropy that can be generated is equal to the entropy provided by the system. But sometimes, in practical situations, more random numbers are needed than there is entropy available. http://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
"Gain in entropy always means loss of information, and nothing more." Gilbert Newton Lewis – Eminent Chemist
Thermodynamics – 3.1 Entropy Excerpt: Entropy – A measure of the amount of randomness or disorder in a system. http://www.saskschools.ca/curr_content/chem30_05/1_energy/energy3_1.htm
And the maximum source of entropic randomness in the universe is found to be where gravity is greatest,,,
Evolution is a Fact, Just Like Gravity is a Fact! UhOh! – January 2010 Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged.
Entropy of the Universe – Hugh Ross – May 2010 Excerpt: Egan and Lineweaver found that supermassive black holes are the largest contributor to the observable universe's entropy. They showed that these supermassive black holes contribute about 30 times more entropy than what the previous research teams estimated. http://www.reasons.org/entropy-universe
,,, there is also a very strong case to be made that the cosmological constant in General Relativity, the extremely finely tuned 1 in 10^120 expansion of space-time, drives, or is deeply connected to, entropy as measured by diffusion:
Big Rip Excerpt: The Big Rip is a cosmological hypothesis first published in 2003, about the ultimate fate of the universe, in which the matter of universe, from stars and galaxies to atoms and subatomic particles, are progressively torn apart by the expansion of the universe at a certain time in the future. Theoretically, the scale factor of the universe becomes infinite at a finite time in the future.,,,
Thus, even though neo-Darwinian atheists may claim that evolution is as well established as Gravity, the plain fact of the matter is that General Relativity itself, which is by far our best description of Gravity, testifies very strongly against the entire concept of 'random' Darwinian evolution because of the destructiveness inherent therein.
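The first excerpt above (on cryptographically secure pseudorandom number generators) can be illustrated in a few lines: a seeded pseudorandom generator is fully deterministic, so it can emit no more randomness than the entropy in its seed, whereas the operating system's generator keeps mixing in fresh entropy. This is only a sketch of that contrast, using Python's standard random and secrets modules.

```python
# Sketch: deterministic PRNG vs. the OS entropy pool.
import random
import secrets

seeded_a = random.Random(42)   # deterministic Mersenne Twister
seeded_b = random.Random(42)   # same seed -> identical stream forever
print([seeded_a.randint(0, 9) for _ in range(8)])
print([seeded_b.randint(0, 9) for _ in range(8)])   # exact repeat

# secrets draws on the operating system's entropy pool (os.urandom),
# so its output is not reproducible from any in-program seed.
print(secrets.token_hex(8))
```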
Moreover, we can now differentiate the entropic randomness that is found in the universe from the randomness that would be inherent with a free-will conscious agent. Quantum mechanics, which is even stronger than general relativity in terms of predictive power, has a very different 'source of randomness', a free-will source, which sets it diametrically opposed to the materialistic notion of 'external' randomness:
Can quantum theory be improved? – July 23, 2012 Excerpt: However, in the new paper, the physicists have experimentally demonstrated that there cannot exist any alternative theory that increases the predictive probability of quantum theory by more than 0.165, with the only assumption being that measurement (conscious observation) parameters can be chosen independently (free choice, free will assumption) of the other parameters of the theory.,,, ,, the experimental results provide the tightest constraints yet on alternatives to quantum theory. The findings imply that quantum theory is close to optimal in terms of its predictive power, even when the predictions are completely random. http://phys.org/news/2012-07-quantum-theory.html
Needless to say, finding 'free will conscious observation' to be 'built into' quantum mechanics as a starting assumption, which is indeed the driving aspect of randomness in quantum mechanics, is VERY antithetical to the entire materialistic philosophy which demands randomness as the driving force of creativity! Moreover we have empirical evidence differentiating these sources of randomness: i.e. The Quantum Zeno Effect:
Quantum Zeno effect Excerpt: The quantum Zeno effect is,,, an unstable particle, if observed continuously, will never decay. http://en.wikipedia.org/wiki/Quantum_Zeno_effect
The reason why I am fascinated with this Zeno effect is, for one thing, that 'random' Entropy is, by a wide margin, the most finely tuned of initial conditions of the Big Bang:
Roger Penrose discusses initial entropy of the universe. – video http://www.youtube.com/watch?v=WhGdVMBk6Zo
The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose Excerpt: "The time-asymmetry is fundamentally connected to with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the "source" of the Second Law (Entropy)."
How special was the big bang? – Roger Penrose Excerpt: This now tells us how precise the Creator's aim must have been: namely to an accuracy of one part in 10^10^123. (from the Emperor's New Mind, Penrose, pp 339-345 – 1989)
Moreover, it is very interesting to note just how foundational entropy is in its scope of explanatory power for current science:
Shining Light on Dark Energy - October 21, 2012 Excerpt: It (Entropy) explains time; it explains every possible action in the universe;,, Even gravity, Vedral argued, can be expressed as a consequence of the law of entropy. ,,, The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe —,,, http://crev.info/2012/10/shining-light-on-dark-energy/
Evolution is a Fact, Just Like Gravity is a Fact! UhOh! - January 2010 Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged. https://uncommondescent.com/intelligent-design/evolution-is-a-fact-just-like-gravity-is-a-fact-uhoh/
Moreover:
Scientific Evidence That Mind Effects Matter - Random Number Generators - video http://www.metacafe.com/watch/4198007 I once asked an evolutionist, after showing him the preceding experiments, "Since you ultimately believe that the 'god of random chance' produced everything we see around us, what in the world is my mind doing pushing your god around?"
Thus NL, your conjecture of substituting 'intelligence' for randomness,,, (basically your conjecture is not new and is merely a Theistic Evolution compromise gussied up in different clothing),,, fails on empirical grounds.bornagain77
March 31, 2013 at 08:31 AM PDT
As regards the question of front-loading: we have a sense that, for instance, attending to the functioning of every single cell of every living microbe, and even every individual part of them, would manifestly be unthinkable on the part of an omniscient and omnipotent God. But is that really so? Without any logical limitation of powers, physical or mental, applicable to the Christian God, such minutiae would not necessarily constitute an almost infinitely trivial, vapid, mind-numbing distraction at all, would they? Admittedly, it is a hallmark of the atheist's credo that a single human being would be similarly insignificant and inconsequential to God, in the starkest contrast with the Christian tenet that Christ would have accepted his crucifixion for just one, single human being. This is not to deny the occurrence, or the possibility of the occurrence, of front-loading in creation, but merely to point out our mistaken, anthropomorphic attribution (more notably among atheists, but surely latent in us all) of a susceptibility to mind-numbing by trivia, to an infinite God.Axel
March 31, 2013 at 07:20 AM PDT
Optimus: …why the death-grip on methodological naturalism? I suggest that its power lies in its exclusionary function. It rules out ID right from the start, before even any discussions about the empirical data are to be had. MN means that ID is persona non grata,
Nightlight (19): MN doesn’t imply anything of the sort (at least as I understand it). As a counter example, consider a chess playing computer program — it is an intelligent process, superior (i.e. more intelligent) in this domain (chess playing) to any human chess player.
A chess-playing computer program doesn't understand chess. In fact, there is no agent (consciousness) present in the program who can (or cannot) understand chess. As in Searle's Chinese Room, there is merely a simulation of the ability to understand chess. So it is highly debatable whether 'intelligent' is a proper description of this program.
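That point can be made vivid with a toy engine: everything a game-playing program does is mechanical evaluation of a game tree, with no understanding anywhere in the loop. The game below (remove 1-3 stones, taking the last stone wins) is chosen only because it is small; the same minimax idea, with pruning and heuristic evaluation added, is the core of classical chess engines.

```python
# Toy minimax "engine": pure rule-following search over a game tree.
from functools import lru_cache

@lru_cache(maxsize=None)
def value(stones, maximizing):
    # Position value from the maximizing player's point of view.
    if stones == 0:
        return -1 if maximizing else +1   # previous player took the last stone
    results = [value(stones - take, not maximizing)
               for take in (1, 2, 3) if take <= stones]
    return max(results) if maximizing else min(results)

def best_move(stones):
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: value(stones - take, False))

for pile in range(1, 11):
    print(f"pile={pile:2d}  engine plays {best_move(pile)}  value {value(pile, True):+d}")
```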
Nightlight (19): How does MN exclude intelligent agency as an explanation for performance of chess playing program? It doesn’t, since functionality of such program is fully explicable using conventional scientific methods. Hence, MN would allow that chess playing program is an intelligent agency (agent).
I agree that MN would allow for that, but I think you will agree with me that they would be wrong to do so. The fact that they would just shows their metaphysical bias.
Nightlight (19): The net result is that such Planckian network would be 10^60 (more cogs) x 10^20 (faster clocks) (…) With that kind of ratio in computing power, anything computed by this Planckian network would be to us indistinguishable from a godlike intelligence beyond our wildest imagination and comprehension.
Indistinguishable mimicking of intelligence and personhood given extensive instructions (software) designed by us (agents).
Nightlight (128): I was only trying to point out the critical faulty cog in the scheme and how to fix it.
An interesting U-turn tactic in order to smuggle in intelligence as a respectable causal explanation for MN?Box
March 31, 2013 at 07:09 AM PDT
kairosfocus #131: That is, your suggestion that you have successfully given necessary criteria of being scientific, fails. For instance, while it is a desideratum that something in science is reducible to a set of mathematical, explanatory models that have some degree of empirical support as reliable, that is not and cannot be a criterion of being science.
Response 119 already clarifies why that objection is not applicable. You're using a very narrow semantics for the terms "algorithmic" and "algorithm", something from the mainframe and punched-cards era (1960s, 1970s). Here is the basic schemata:
(M) - Model space (formalism & algorithms)
(E) - Empirical procedures & facts of the "real" world
(O) - Operational rules mapping numbers between (M) and (E)
The model space (M) is a set of algorithms for generating valid statements of that science. The generated statements need not be math or numerics (they do have to be logically coherent). The "statements" can be words, symbols, pictures, graphs, charts, numbers, formulas,... etc. It is the operational procedures (O) which assign empirical semantics to those statements produced by (M), in whatever form they were expressed. Without the algorithms of (O), the symbolic output from (M) is merely a set of logically coherent formal statements without empirical meaning or content. The component (E) is a system for obtaining and labeling empirical facts (numbers, symbols, pictures...) relevant for that science. Hence (E) is algorithmic as well, containing instructions on how to interact with the object of that science to extract the data (numbers, pictures, words,...), something that could in principle be programmed into some future android (hence algorithmic). Right now, it is programmed into the brains of students in that discipline, just as is done with the statement-generating algorithms from (M).
All of the above is self-evident (even trivial) and is merely a convenient way (for the intended purpose) to partition the conceptual space, one that you and some others here may be unfamiliar with. Physicists, especially theoretical physicists, and philosophers of science would certainly recognize that representation (model) of natural science.
The necessary requirement for algorithmic effectiveness (that's my term for it) of the generating rules (cogs) of (M) applies also to the algorithmic elements of (O) and (E). For example, component (E) shouldn't have algorithmically ineffective elements such as: upon arrival at an archeological dig, turn your face to Mecca and go down on your knees for one minute. Injecting algorithmically ineffective elements like that into (E) is also a disqualifying flaw, since it doesn't (help) produce any empirical facts for (E). The same requirement for 'algorithmic effectiveness' applies to the algorithms of (O) as well. For example, the instruction 'Symbol WS from the computer model M corresponds to water spirit' is a disqualifying operational rule. As in the case of the analogous injection of 'consciousness' into (M), these are parasitic elements belonging to some other agenda foreign to the discipline, seeking to hitch a free ride on the back of the science. The immune system of a scientific network will rightfully reject them, unless you are dealing with thoroughly corrupt disciplines riddled with political/ideological agendas and corporate cronyism (as often seen in climate science, public health, psychiatry, pharmacology, toxicology, environmental science, sociology, women studies, ethnic studies and other 'xyz' studies ... etc).
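As an aside, the triune schemata above can be sketched in code; the labels (M)/(O)/(E) follow the comment, but the falling-body example, function names, and tolerance are invented here purely to illustrate what 'algorithmically effective' components look like.

```python
# Sketch of the (M)/(O)/(E) partition: (M) generates a formal statement,
# (O) maps its symbol onto a measurement procedure, (E) checks it against
# the measured fact. A statement that never reaches (E) through (O) would
# be "algorithmically ineffective".

def model_M(height_m, g=9.81):
    # (M): formal statement -- predicted impact speed from energy conservation.
    return {"impact_speed_m_per_s": (2 * g * height_m) ** 0.5}

def operational_O(statement):
    # (O): says which measurable quantity the model's symbol refers to.
    return {"speed read off a radar gun at ground level":
            statement["impact_speed_m_per_s"]}

def empirical_E(measured_m_per_s, mapped, tolerance=0.5):
    # (E): compare the measured fact with the operationally mapped prediction.
    predicted = next(iter(mapped.values()))
    return abs(measured_m_per_s - predicted) <= tolerance

prediction = model_M(height_m=20.0)
mapped = operational_O(prediction)
print(mapped)
print("consistent with measurement:", empirical_E(19.6, mapped))
```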
Neo-Darwinian Evolution theory (ND-E = RM + NS is the primary mechanism of evolution) carries a key parasitic element of this type, the attribute "random" in "random mutation" (RM) -- that element is algorithmically ineffective since it doesn't produce any falsifiable statement that can't be produced by replacing "random" with "intelligently guided" (i.e. computed by an anticipatory/goal-driven algorithm). In this case, the parasitic agenda carried by the gratuitous "randomness" attribute is atheism.
That is actually another common misstep by ID proponents -- they needlessly concede that "random" mutation completely explains "micro-evolution". To realize that it doesn't completely explain it, it suffices to consider known examples of intelligently guided evolution, such as the evolution of technologies, sciences,... etc. There is micro- and macro-evolution here, too, showing from the outside the same type of patterns that biological micro- and macro-evolution show. In both domains, either degree of evolution is characterized by "mutation", i.e. a change in the construction recipe (DNA or epigenetics, source code or manufacturing blueprints) which is associated with external/phenotypic changes, the evolution of the product. But there is nothing in any of it that implies or demonstrates that the change in the recipe is "random", i.e. that the mutation must be random. In the case of the evolution of technology, we know that this is not the case; the evolutionary changes are intelligently guided. Of course, in either domain, there can be product defects caused by random errors (e.g. in the blueprint or in manufacturing), which occasionally may improve the product. But that doesn't show that such "random" errors are significant, let alone the primary or the sole mechanism of evolution (micro or macro), as ND-E postulates in order to be able to claim the complete absence of intelligent guidance.
If you think that some natural science doesn't fall into the above triune pattern, or fails the 'algorithmic effectiveness' requirement in any component (M), (E) or (O), show me a counterexample, keeping in mind that "algorithm", as I am using it, isn't only about math or numbers. Of course, if you include any of the mentioned parasite-riddled examples, the 'algorithmically ineffective' elements will always be a parasitic agenda hitching a free ride on the backs of the honest scientific work, hence it isn't a counter-example to the requirement (but merely an example of some traits of human nature).nightlight
March 31, 2013 at 07:00 AM PDT