Uncommon Descent Serving The Intelligent Design Community

Optimus, replying to KN on ID as ideology, summarises the case for design in the natural world


The following reply by Optimus to KN in the TSZ thread is far too good not to headline as an excellent summary of the case for design as a scientifically legitimate view, not the mere “Creationism in a cheap tuxedo” ideology, motivated and driven by anti-materialism and/or a right-wing, theocratic, culture-war mentality, commonly ascribed to “Creationism” by its objectors:

______________

>> KN

It’s central to the ideological glue that holds together “the ID movement” that the following are all conflated: Darwin’s theories; neo-Darwinism; modern evolutionary theory; Epicurean materialistic metaphysics; Enlightenment-inspired secularism. (Maybe I’m missing one or two pieces of the puzzle.) In my judgment, a mind incapable of making the requisite distinctions hardly deserves to be taken seriously.

I think your analysis of the driving force behind ID is way off base. That’s not to say that persons who advocate ID (including myself) aren’t sometimes guilty of sloppy use of language, nor am I making the claim that the modern synthetic theory of evolution is synonymous with materialism or secularism. Having made that acknowledgement, though, it is demonstrably true that (1) metaphysical presuppositions absolutely undergird much of the modern synthetic theory. This is especially true with regard to methodological naturalism (of course, MN is distinct from ontological naturalism, but if, as some claim, science describes the whole of reality, then reality becomes coextensive with that which is natural). Methodological naturalism is not the end product of some experiment or series of experiments. On the contrary, it is a ground rule that excludes a priori any explanation that might be classed as “non-natural”. Some would argue that it is necessary for practical reasons; after all, we don’t want people attributing seasonal thunderstorms to Thor, do we? However, science could get along just as well as at present (even better, in my view) if the ground rule were simply that any proposed causal explanation must be rigorously defined and that it shall not be accepted except in light of compelling evidence. Problem solved! Though some fear “supernatural explanation” (which is highly definitional) overwhelming the sciences, such concerns are frequently oversold. Interestingly, the much-maligned Michael Behe makes very much the same point in his 1996 Darwin’s Black Box:

If my graduate student came into my office and said that the angel of death killed her bacterial culture, I would be disinclined to believe her…. Science has learned over the past half millennium that the universe operates with great regularity the great majority of the time, and that simple laws and predictable behavior explain most physical phenomena.
Darwin’s Black Box, p. 241

If Behe’s expression is representative of the ID community (which I would venture it is), then why the death-grip on methodological naturalism? I suggest that its power lies in its exclusionary function. It rules out ID right from the start, before even any discussions about the empirical data are to be had. MN means that ID is persona non grata, thus some sort of evolutionary explanation must win by default. (2) In Darwin’s own arguments in favor of his theory, he relies heavily on metaphysical assumptions about what God would or wouldn’t do. Effectively he uses special creation by a deity as his null hypothesis, casting his theory as the explanatory alternative. Thus the adversarial relationship between Darwin (whose ideas are foundational to the MST) and theism is baked right into The Origin. To this very day, “bad design” arguments in favor of evolution still employ theological reasoning. (3) The modern synthetic theory is often used in the public debate as a prop for materialism (which I believe you acknowledged in another comment). How many times have we heard the famed Richard Dawkins quote to the effect that ‘Darwin made it possible to be an intellectually fulfilled atheist’? Very frequently evolutionary theory is impressed into service to show the superfluousness of theism or to explain away religion as an erstwhile useful phenomenon produced by natural selection (or something to that effect). Hardly can it be ignored that the most enthusiastic boosters of evolutionary theory tend to fall on the atheist/materialist/reductionist side of the spectrum (e.g. Eugenie Scott, Michael Shermer, P.Z. Myers, Jerry Coyne, Richard Dawkins, Sam Harris, Peter Atkins, Daniel Dennett, Will Provine). My point, simply stated, is that it is not at all wrong-headed to draw a connection between the modern synthetic theory and the aforementioned class of metaphysical views. Can it be said that the modern synthetic theory (am I allowed just to write Neo-Darwinism for short?) doesn’t mandate nontheistic metaphysics? Sure. But it’s just as true that they often accompany each other.

In chalking up ID to a massive attack of confused cognition, you overlook the substantive reasons why many (including a number of PhD scientists) consider ID to be a cogent explanation of many features of our universe (especially the biosphere):

-Functionally-specified complex information [FSCI] present in cells in prodigious quantities
-Sophisticated mechanical systems at both the micro and macro level in organisms (many of which exhibit IC)
-Fine-tuning of fundamental constants
-Patterns of stasis followed by abrupt appearance (geologically speaking) in the fossil record

In my opinion, the presence of FSCI/O and complex biological machinery are very powerful indicators of intelligent agency, judging from our uniform and repeated experience. Also note that none of the above reasons employ theological presuppositions. They flow naturally, inexorably from the data. And, yes, we are all familiar with the objection that organisms are distinct from artificial objects, the implication being that our knowledge from the domain of man-made objects doesn’t carry over to biology. I think this is fallacious. Everyone acknowledges that matter inhabiting this universe is made up of atoms, which in turn are composed of still other particles. This is true of all matter, not just “natural” things, not just “artificial” things – everything. If such is the case, then must not the same laws apply to all matter with equal force? Whence comes the false dichotomy between “natural” and “artificial”? If design can be discerned in one case, why not in the other?

To this point we have not even addressed the shortcomings of the modern synthetic theory (excepting only its metaphysical moorings). They are manifold, however – evidential shortcomings (e.g. lack of empirical support), unjustified extrapolations, question-begging assumptions, ad hoc rationalizations, tolerance of “just so” stories, narratives imposed on data instead of gleaned from data, conflict with empirical data from generations of human experience with breeding, etc. If at the end of the day you truly believe that all ID has going for it is a culture war mentality, then may I politely suggest that you haven’t been paying attention.>>

______________

Well worth reflecting on, and Optimus deserves to be headlined. END

Comments
NL, That's very interesting, stimulating stuff. I appreciate the thoughtful responses and the time you've invested here. I don't really see how any of what you are saying is threatening in any way to any other ID position. It seems to me you are offering the same kind of description of the way intelligence computes and generates ID phenomena as Newton offered in his mathematical description of gravity. It is a model for describing the mechanism of intelligent ordering towards design - nothing more, really. It doesn't claim - or even imply - that god doesn't exist or that humans do not have autonomous free will (which, IMO, is a commodity distinct from intelligence anyway). It is the ultimate "front loading" postulate (or "foundation loading"), with the fundamental algorithms (pattern recognition and reaction development) built into the substrate of the universe (if I'm understanding you correctly). It's always been my view that "mind" and "soul" are two different things, and that mind has "computational intelligence", but not free will. IMO, mind is the software, body is the hardware, and soul is the operator.William J Murray
March 31, 2013, 05:28 AM PDT
semi related: podcast - "What's at Stake for Science Education" http://intelligentdesign.podomatic.com/entry/2013-03-29T16_49_10-07_00
bornagain77
March 31, 2013, 04:29 AM PDT
PS: Let us recall the criteria of being scientific set out by NL, which were first addressed at 112 above:
In any natural science, you need 3 basic elements: (M) – Model space (formalism & algorithms) (E) – Empirical procedures & facts of the “real” world (O) – Operational rules mapping numbers between (M) and (E) The model space (M) defines an algorithmic model of the problem space via postulates, equations, programs, etc. It’s like scaled down model of the real world, where you can run the model, observe behaviors, measure numbers, then compare these with numbers from empirical observations obtained via (E) . . . . scientific postulates can make no use of concepts such as ‘mind’ or ‘consciousness’ or ‘god’ or ‘feeling’ or ‘redness’ since no one knows how to formalize any of these, how to express them as algorithms that do something useful, even though we may have intuition that they do something useful in real world. But if you can’t turn them into algorithmic form, natural science has no use for them. Science seen via the scheme above, is in fact a “program” of sorts, which uses human brains and fingers with pencils as its CPU, compiler and printer. The ID proponents unfortunately don’t seem to realize this “little” requirement. Hence, they need to get rid of “mind” and “consciousness” talk, which are algorithmically vacuous at present, and provide at least a conjecture about the ‘intelligent agency’ which can be formulated algorithmically might be, at least in principle (e.g. as existential assertion, not explicit construction).
Notice the first problem, what modelling and algorithms are about. It is simply not the case that scientific work is always like that, though it is often desirable to have mathematical models. The implied methodological naturalism should also be apparent and merits the correction in 112. However, I fail to see where NL has taken the concerns on board and has cogently responded. His attempt to suggest an idiosyncratic usage for algorithm, and to specify redness to the experience of being appeared to redly, fail. Indeed [as was highlighted already], there is a lot of work that is not only scientific but legitimately physics, that uses the intelligent responses of participants, to empirically investigate then analyse important phenomena, such as colour, sound, etc. One consequence of this has been the understanding that our sensory apparatus often uses in effect roughly log compression, such as in the Weber-Fechner law, where increments of noticeable difference are fractional, i.e dx/x is a constant ratio across the scale of a phenomenon. Thus the appearance and effectiveness of log metrics for things like sound [the dB scale for sound relative to a reference level of 10^-12 W/sq m], and the scaling of the magnitude criteria for stars that was based on apparent size/brightness, that then turned out to be in effect logarithmic.kairosfocus
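To make the log-metric point concrete, here is a minimal sketch (Python, not from the original comment), assuming only the standard 10^-12 W/m^2 reference intensity for the dB sound scale: equal ratios of intensity map to equal decibel steps, which is the Weber-Fechner behaviour described above (noticeable increments track dx/x, not dx).

import math

I0 = 1e-12  # reference sound intensity in W/m^2 (threshold of hearing)

def decibels(intensity):
    """Sound level in dB relative to I0: 10 * log10(I / I0)."""
    return 10 * math.log10(intensity / I0)

# Equal *ratios* of intensity give equal dB steps (log compression):
for intensity in (1e-12, 1e-11, 1e-10, 1e-9):
    print(intensity, "W/m^2 ->", round(decibels(intensity), 1), "dB")
# prints 0.0, 10.0, 20.0, 30.0 dB: each tenfold jump adds a fixed 10 dB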
March 31, 2013, 12:20 AM PDT
NL: Pardon, I must again -- cf. 112 and ff above -- observe that in effect you are trying to impose a criterion that fails the test of factual adequacy relative to what has been historically acceptable as science, and what has been acceptable as science across its range. That is, your suggestion that you have successfully given necessary criteria of being scientific, fails. For instance, while it is a desideratum that something in science is reducible to a set of mathematical, explanatory models that have some degree of empirical support as reliable, that is not and cannot be a criterion of being science. For crucial instance, one has to amass sufficient empirical data before any mathematical analysis or model can be developed [observe --> hypothesise], and it will not do to dismiss that first exploration and sketching out of apparent patterns as "not science." Similarly, empirical or logical laws which may be inherently qualitative can be scientific or scientifically relevant. As a simple example of the latter, it is a logical point that that which is coloured is necessarily (not merely observed to be) extensive in space. Likewise, the significance of drawing a distinction {A|NOT_A} and what follows from it in logic is an underlying requisite of science. [Notice, I have here given a genuine necessary criterion, by way of counter-instance to your claimed cluster of necessary criteria.] For the former, let us observe that the taxonomy of living forms is not inherently a mathematical model or framework, but a recognition of memberships on genus/difference, per keys that are creatively developed on empirical investigation, not givens set by a model. Going further, scientific work typically seeks to describe accurately, explain, predict or influence and thus allow for control. An exploratory description is scientific, even if that is not mathematical. In that context, let us note a pattern of pre-mathematical, qualitative observations c. 1970 - 82, by eminent scientists, that lie at the root of key design theory concepts:
WICKEN, 1979: ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in.)] ORGEL, 1973: . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] HOYLE, 1982: Once we see that life is cosmic it is sensible to suppose that intelligence is cosmic. Now problems of order, such as the sequences of amino acids in the chains which constitute the enzymes and other proteins, are precisely the problems that become easy once a directed intelligence enters the picture, as was recognised long ago by James Clerk Maxwell in his invention of what is known in physics as the Maxwell demon. The difference between an intelligent ordering, whether of words, fruit boxes, amino acids, or the Rubik cube, and merely random shufflings can be fantastically large, even as large as a number that would fill the whole volume of Shakespeare’s plays with its zeros. So if one proceeds directly and straightforwardly in this matter, without being deflected by a fear of incurring the wrath of scientific opinion, one arrives at the conclusion that biomaterials with their amazing measure or order must be the outcome of intelligent design. No other possibility I have been able to think of in pondering this issue over quite a long time seems to me to have anything like as high a possibility of being true.” [[Evolution from Space (The Omni Lecture[ --> Jan 12th 1982]), Enslow Publishers, 1982, pg. 28.]
None of these is a mathematical model or an algorithm in the crucial meaning of the term. However, each, in an unquestionably scientific context of investigation, is highlighting a key qualitative observation that allows us to then go on to analyse and develop models. It would be improper and inviting of fallacious selectively hyperskeptical dismissal to suggest that absent the full panoply of mathematicisation, operational definitions, models, fully laid out bodies of empirical data, etc, such is not scientific. Similarly, it is false to suggest that such inferences amount to an alleged improper injection of the supernatural, or the like. These concerns were already outlined. However, we can go on a bit. By 2005, having first used the no free lunch theorems to identify the significance of complex specified information, William Dembski proposed a general quantification:
define phi_S as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [chi] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases phi_S(t) and also by the maximum number of binary search-events in our observed universe 10^120] Chi = – log2[10^120 ·phi_S(T)·P(T|H)].
I continued, in the always linked note:
When 1 >/= chi, the probability of the observed event in the target zone or a similar event is at least 1/2, so the available search resources of the observed cosmos across its estimated lifespan are in principle adequate for an observed event [E] in the target zone to credibly occur by chance. But if chi significantly exceeds 1 bit [i.e. it is past a threshold that as shown below, ranges from about 400 bits to 500 bits -- i.e. configuration spaces of order 10^120 to 10^150], that becomes increasingly implausible. The only credibly known and reliably observed cause for events of this last class is intelligently directed contingency, i.e. design. Given the scope of the Abel plausibility bound for our solar system, where available probabilistic resources qWs = 10^43 Planck-time quantum [not chemical -- much, much slower] events per second x 10^17 s since the big bang x 10^57 atom-level particles in the solar system Or, qWs = 10^117 possible atomic-level events [--> and perhaps 10^87 "ionic reaction chemical time" events, of 10^-14 or so s], . . . that is unsurprising.
From this, a couple of years back now, for operational use [and in the context of addressing exchanges on what chi is about] several of us in and around UD have deduced a more practically useful form, by taking a log reduction and giving useful upper bounds at Solar System level:
Chi = – log2[10^120 ·phi_S(T)·P(T|H)]. xx: To simplify and build a more "practical" mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally:
Ip = - log p, in bits if the base is 2. That is where the now familiar unit, the bit, comes from. Where we may observe from say -- as just one of many examples of a standard result -- Principles of Comm Systems, 2nd edn, Taub and Schilling (McGraw Hill, 1986), p. 512, Sect. 13.2:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [[--> My nb: i.e. the a posteriori probability in my online discussion here is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by I_k = (def) log_2 1/p_k (13.2-1)
xxi: So, since 10^120 ~ 2^398, we may "boil down" the Dembski metric using some algebra -- i.e. substituting and simplifying the three terms in order -- as log(p*q*r) = log(p) + log(q ) + log(r) and log(1/p) = – log (p): Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = phi_S(T) Chi = Ip – (398 + K2), where now: log2 (D2 ) = K2 That is, chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). So, (a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [[our practical universe, for chemical interactions! ( . . . if you want , 1,000 bits would be a limit for the observable cosmos)] and (b) as we can define and introduce a dummy variable for specificity, S, where (c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T: Chi = Ip*S – 500, in bits beyond a "complex enough" threshold NB: If S = 0, this locks us at Chi = - 500; and, if Ip is less than 500 bits, Chi will be negative even if S is positive.
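As a quick numerical check of the algebra above, a minimal sketch (Python); the values chosen for Ip and phi_S(T) are purely illustrative and are not taken from the source:

import math

# log2(10^120), the "search resources of the observed universe" term
threshold = 120 * math.log2(10)
print(round(threshold, 1))          # ~398.6 bits, i.e. 10^120 ~ 2^398

def chi_dembski(Ip, K2):
    """Chi = Ip - (398 + K2), bits beyond the Dembski-style bound."""
    return Ip - (398 + K2)

def chi_500(Ip, S):
    """Simplified solar-system form: Chi = Ip*S - 500 bits."""
    return Ip * S - 500

# Illustrative numbers only: Ip = 600 bits of specified information,
# phi_S(T) = 2^20 semiotic descriptions, so K2 = 20.
print(chi_dembski(600, 20))   # 182 bits beyond the (398 + K2) bound
print(chi_500(600, 1))        # 100 bits beyond the 500-bit threshold
print(chi_500(600, 0))        # -500: S = 0 locks Chi at the floor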
--> E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [[notice independent specification of a narrow zone of possible configurations, T], Chi will -- unsurprisingly -- be positive. --> Following the logic of the per aspect necessity vs chance vs design causal factor explanatory filter, the default value of S is 0, i.e. it is assumed that blind chance and/or mechanical necessity are adequate to explain a phenomenon of interest. --> S goes to 1 when we have objective grounds -- to be explained case by case -- to assign that value. --> That is, we need to justify why we think the observed cases E come from a narrow zone of interest, T, that is independently describable, not just a list of members E1, E2, E3 . . . ; in short, we must have a reasonable criterion that allows us to build or recognise cases Ei from T, without resorting to an arbitrary list. --> A string at random is a list with one member, but if we pick it as a password, it is now a zone with one member. (Where also, a lottery, is a sort of inverse password game where we pay for the privilege; and where the complexity has to be carefully managed to make it winnable. ) --> An obvious example of such a zone T, is code symbol strings of a given length that work in a programme or communicate meaningful statements in a language based on its grammar, vocabulary etc. This paragraph is a case in point, which can be contrasted with typical random strings ( . . . 68gsdesnmyw . . . ) or repetitive ones ( . . . ftftftft . . . ); where we can also see by this case how such a case can enfold random and repetitive sub-strings. --> Arguably -- and of course this is hotly disputed -- DNA protein and regulatory codes are another. Design theorists argue that the only observed adequate cause for such is a process of intelligently directed configuration, i.e. of design, so we are justified in taking such a case as a reliable sign of such a cause having been at work. (Thus, the sign then counts as evidence pointing to a perhaps otherwise unknown designer having been at work.) --> So also, to overthrow the design inference, a valid counter example would be needed, a case where blind mechanical necessity and/or blind chance produces such functionally specific, complex information. (Points xiv - xvi above outline why that will be hard indeed to come up with. There are literally billions of cases where FSCI is observed to come from design.)
xxii: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases: Using Durston’s Fits values -- functionally specific bits -- from his Table 1 [--> in the context of his published analysis on Shannon's metric of average info per symbol, H applied to various functionality-relevant states of biologically relevant strings . . . -*-*-*- . . . ], to quantify I, so also accepting functionality on specific sequences as showing specificity giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold: RecA: 242 AA, 832 fits, Chi: 332 bits beyond SecY: 342 AA, 688 fits, Chi: 188 bits beyond Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond xxiii: And, this raises the controversial question that biological examples such as DNA -- which in a living cell is much more complex than 500 bits -- may be designed to carry out particular functions in the cell and the wider organism.
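A short sketch (Python) of that application, taking the quoted fits values as Ip and setting S = 1 on the stated assumption that sequence-specific function establishes specificity; it reproduces the three figures quoted above:

def chi_500(fits, S=1):
    """Bits beyond the 500-bit solar-system threshold: Chi = Ip*S - 500."""
    return fits * S - 500

durston_fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}

for protein, fits in durston_fits.items():
    print(protein, chi_500(fits), "bits beyond the threshold")
# RecA 332, SecY 188, Corona S2 785, matching the values listed above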
In short, there is in fact a framework of quantification and related analysis that undergirds the observation that FSCO/I is characteristically produced by intelligent design, and that the other main source of high contingency outcomes, chance processes [perhaps aided by mechanical necessity, which in itself will not lead to highly contingent outcomes] wandering across a configuration space, does not plausibly have access to the atomic and temporal resources to sample enough of the space that we can reasonably expect to see anything from a special, narrow zone popping up. That is, once we have sufficient complexity in terms of bit depth, a chance contingency driven search will predictably pick up gibberish, not meaningful strings that are based on multi part complex functional organisation. Where also, given how a nodes and arcs based architecture/wiring diagram for a functionally specific system can be reduced to a structured set of strings, analysis on strings is WLOG. In this overall context, I must also point out that intelligent designers, human and non human [I think here of beavers especially, as already noted], are an empirical fact. They can be observed, and their works can be observed. It is reasonable to ask, whether such works have distinguishing, characteristic signs that indicate their cause. And so, however imperfect suggestions such as CSI, IC or FSCO/I may be, they are therefore reasonable and scientific responses to a legitimate scientific question. [Correctness cannot be a criterion of being scientific, for many good reasons.] KFkairosfocus
March 30, 2013, 11:42 PM PDT
Nightlight @129, it was to make a point about technospeak. The link turned out to be temporary. See here. When a conversation becomes dominated by jargon, especially when one is not speaking among peers, overly technical language serves more to obfuscate than to enlighten.Chance Ratcliff
March 30, 2013, 07:41 PM PDT
Chance Ratcliff #118: Also check out my own research paper, That link is broken. What's it about?nightlight
March 30, 2013, 07:13 PM PDT
Chance Ratcliff #118: It's not clear to me that what you've provided constitutes a definition of science which can address the demarcation problem That was not definition of natural science but outline of some necessary elements and their properties. Hence, it was a listing of some necessary conditions for natural science, not sufficient conditions. Namely, the point of that scheme was to explain how the ID proponents often violate the key necessary conditions for a natural science. Violating the necessary conditions, such as algorithmic effectiveness of postulates, suffices to disqualify a proposal from clams on becoming a science (see post #117 on why that is so). Since they have tripped already on the necessary conditions, there is no need to analyze further as to whether their proposal is sufficient. As to what science is, what are sufficient conditions, that's a lot bigger subject than few posts on a blog can fit, one would need to write books for that. I was only trying to point out the critical faulty cog in the scheme and how to fix it. However a search for the three terms, "model space", "empirical procedures", and "operational rules" turned up nothing. Search google scholar, one at a time, will give you tens of thousands hits on each from scholarly papers. Pairs of phrases also have some hits. These are fairly common concepts in philosophy of science, which I learned from the books (physics), before google. The three part scheme itself is a canonical partition (labels may differ depending on author) arising often in discussions about the meaning and perplexities of quantum theory. So, no, those are not my private terms or concepts. They are a commonplace in philosophy and methodology of science. The only "originality" is perhaps in my particular choices of using one label from one author, another from another, third one from yet another. Hence there may not be someone else using exactly those three phrases together, since each of them can be expressed in many ways. Since all that has been accumulated before google and bookmark collections, I have no specific links as to where I picked each one along the way; they're most likely from somewhere out of ~15K books behind me in my study. Another effect in play is that English is not my native tongue but only the 5th language, hence some of those may be literal translations from Serbo-Croatian, Russian, Italian or Latin, depending on where I picked the concept first (most likely from Russian since a good number of books behind me is in Russian). what can be considered useful and applicable to human knowledge, then it must take for granted consciousness, reason, the correspondence of perception to reality, these things which you appear to have placed outside of usefulness. That's one mixed bag of concepts. One must distinguish what can be object of science and what can be postulate or cog. Any of the above can be object of research. But not all can be cogs, or postulates. For that you need 'algorithmic effectiveness' -- the cog must be churning something out, it must have consequences, it must do something that makes a difference in a given discipline. The "difference" is not whether you or someone feels better about it, more in harmony with universe, but difference in some result specific and relevant to the discipline. In that sense, consciousness does nothing to any given discipline (other than as an object of research, such as in neuroscience or cognitive science). It lacks any algorithmic aspect, it doesn't produce anything. 
That doesn't mean it is irrelevant for your internal experience. But we are not talking about whether ID can be a personal experience, but whether it can be a natural science. My point is that it certainly can be, provided its proponents (such as S. Meyer) get rid of the algorithmically ineffective baggage and drop the 'consciousness' talk, since it only harms the cause of getting the ID to be accepted as a science. Intelligent agency can be formulated completely algorithmically, hence injecting the non-algorithmic term 'consciousness' is entirely gratuitous. It adds not one new finding to other findings of ID. Hence dropping it loses no ID finding either. It's not clear what you mean by "algorithmic model". A search for the term That was a self-contained term plus the explanation what exactly is meant by it. Whether you can find it in those same exact words and rhe same exact meaning I have no idea, but being explained right there in the post, the search was unnecessary. In any case it is not an original concept and specific words might be a literal translation from another language. I think I explained exactly what it is. What is missing in that explanation? A natural science needs some rules of operation and techniques, a.k.a. algorithms, on how it produces some output that is to be compared with empirical observations and facts. That part was labeled as (M) in the scheme, and named as 'algorithmic model' of that science. Isn't the necessity of such component (M) completely self-evident? Why would you need an authority to confirm something as obvious? For instance, you appeal to "planckian networks" and "planckian nodes" repeatedly. Again, a search for these terms turns up absolutely nothing. Dropping "ian" from Planckian will give you more. I use suffix "ian" to distinguish my concept since it extends what others in physics are playing with (e.g. see Wolfrwam's NKS). My extension is to assume that these networks have adaptable links (in physics these links would have only on/off levels), so they can operate like neural networks. In effect that combines perspectives and results from two fields of research. Planck scale model of physics (pregeometry), including Planck scale network models, spinor networks, pregeometry,... is a whole little cottage industry in physics going back 6-7 decades at least. I provided few links on the subject in posts #19 and #35. Try this link as a very clean, narrow search on Google scholar. I am not a theoretical physicist, a theoretical mathematician, or a theoretical computer scientist. Perhaps you are some or all of these things. Actually, that's a fairly close hit. I was educated as a theoretical physicist, but went to work in industry right after the grad school, working mostly as a chief scientist in various companies, doing math and computer science research (e.g. design of new algorithms; or generally tackling any problem everyone else they tried before, got stuck on). Here is a recent and very interesting discovery I stumbled upon (by having a lucky combination of fields of expertise which clicked together just the right way on that particular problem). It turns out, that the two seemingly unrelated problems, each a separate research field of its own, are after a suitable transformation one and the same problem: (a) maximizing network throughput (bisection) and (b) maximizing codeword distance of error correcting codes. 
Amazingly, these two are mathematically the same problem, which is quite useful since there are many optimal EC codes, and the paper provides a simple translation recipe that converts those codes into optimal large scale networks (e.g. useful for large Data Centers). What are these Planckian networks, what is there applicability to understanding physical laws, how are they modeled and simulated, what research is being done, ... The basic concept is known as "digital physics" and it goes back to 1970s MIT (Fredkin, Toffoli). Stephen Wolfram's variant which he calls "New Kind of Science" (NKS) is the most ambitious project of this type (here is NKS forum), a vision of translating and migrating all of natural science into the computational/algorithmic framework. Since he is physicist, major part of it is recreating physics out of some underlying networks at Planck scale. Another major source on this subject and ideas goes under the name "Complexity Science", mostly coming from Santa Fe Institute. The whole field goes back to early 1980s, with cellular automata research & dynamical systems (chaos), then late 1980s through 1990s, when neural network research took off, then 1990s "complexity science" which sought to integrate all the various branches under one roof "complex systems". I think those who have studied works from of the above sources would recognize what I am writing as a bit of rehash of those ideas. Here again you hint at a kind of Gnostic synthesis, in which you claim to know that our knowledge of physical laws can be superceded by some other notion, apparently not amenable to investigation, Au contraire, this perspective (which is closest to Wolfram's NKS project) is a call for investigation, not rejection of it. The strong sense of imperfection of the present knowledge is the result of realization of all the possibilities opened by the NKS style approach. For example the fact that major equations/laws of physics (Maxwell, Schrodinger, Dirac eqs, relativistic Lorenz transforms) can be obtained as a coarse grained approximations of the much finer behaviors of the patterns unfolding on these types of underlying computational elements, is a huge hint of what is really going on at the Planck scale. What "natural computations" are being lobotomized, and what demonstrable synthesis shows the inadequacy of our conception of physical laws, as they relate to the phenomena they describe? The "lobotomized" refers to how we seek to reduce biological phenomena to laws of physics. If you keep in mind the above picture of underlying computations and the activation patterns on these networks as being what is really going on (the "true laws"), then our laws of physics (such as Maxwell, Schrodinger or Dirac eqs) are merely some properties of those patterns extracted under very contrived constraints that allow only one small aspect of those patterns to stand out. The biological phenomena, which are different manifestations and properties of the unconstrained patterns are suppressed when extracting "physical laws". That's what I called "lobotomized" true laws. Another analogy may be taking a Beethoven's symphony and tuning into and extracting just two notes, cay C and D, filtering out all other notes, and then imagining that the pattern seen on that subset (C, D sequence) is the fundamental law onto which the rest of symphony is reducible.nightlight
March 30, 2013, 06:42 PM PDT
What you say about Gödel's thoughts may well be so, but judging from a photo on the first page of photos in that book about him and Einstein, A World Without Time, his mother is not impressed. I think it might be the funniest photograph I have ever seen. She's got her arm around his shoulder and is looking at him, as much as to say to the camera, 'Look at him. Just look at him... What am I to do with this boy? I can barely trust him to do up his shoe-laces...' And just to set it off to perfection, uncharacteristically, perhaps, he's looking very happy and pleased with himself.Axel
March 30, 2013, 06:09 PM PDT
NL, to continue on towards a more 'complete' picture of reality, I would like to point out that General Relativity is the 'odd man out' in a 'complete' picture of reality, yet Special Relativity is 'in' the picture:
Quantum electrodynamics Excerpt: Quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. http://en.wikipedia.org/wiki/Quantum_electrodynamics Quantum Mechanics vs. General Relativity Excerpt: The Gravity of the Situation The inability to reconcile general relativity with quantum mechanics didn’t just occur to physicists. It was actually after many other successful theories had already been developed that gravity was recognized as the elusive force. The first attempt at unifying relativity and quantum mechanics took place when special relativity was merged with electromagnetism. This created the theory of quantum electrodynamics, or QED. It is an example of what has come to be known as relativistic quantum field theory, or just quantum field theory. QED is considered by most physicists to be the most precise theory of natural phenomena ever developed. In the 1960s and ’70s, the success of QED prompted other physicists to try an analogous approach to unifying the weak, the strong, and the gravitational forces. Out of these discoveries came another set of theories that merged the strong and weak forces called quantum chromodynamics, or QCD, and quantum electroweak theory, or simply the electroweak theory, which you’ve already been introduced to. If you examine the forces and particles that have been combined in the theories we just covered, you’ll notice that the obvious force missing is that of gravity.,,, http://www.infoplease.com/cig/theories-universe/quantum-mechanics-vs-general-relativity.html
Yet, by all rights, General Relativity should be able to somehow be unified within Quantum theory:
LIVING IN A QUANTUM WORLD – Vlatko Vedral – 2011 Excerpt: Thus, the fact that quantum mechanics applies on all scales forces us to confront the theory’s deepest mysteries. We cannot simply write them off as mere details that matter only on the very smallest scales. For instance, space and time are two of the most fundamental classical concepts, but according to quantum mechanics they are secondary. The entanglements are primary. They interconnect quantum systems without reference to space and time. If there were a dividing line between the quantum and the classical worlds, we could use the space and time of the classical world to provide a framework for describing quantum processes. But without such a dividing line—and, indeed, with­out a truly classical world—we lose this framework. We must ex­plain space and time (4D space-time) as somehow emerging from fundamental­ly spaceless and timeless physics. http://phy.ntnu.edu.tw/~chchang/Notes10b/0611038.pdf
A very interesting difference to point out between General Relativity and Special Relativity is that Special Relativity and General Relativity have two completely opposite curvatures in space time as to how the curvatures relate to us: Please note the 3:22 minute mark of the following video when the 3-Dimensional world ‘folds and collapses’ into a tunnel shape around the direction of travel as a ‘hypothetical’ observer accelerates towards the ‘higher dimension’ of the speed of light, (Of note: This following video was made by two Australian University Physics Professors with a supercomputer.),,
Approaching The Speed Of Light – Optical Effects – video http://www.metacafe.com/watch/5733303/
And please note the exact opposite effect for ‘falling’ into a blackhole. i.e. The 3-Dimensional world folds and collapses into a higher dimension as a ‘hypothetical’ observer falls towards the event horizon of a black-hole describe in General Relativity:
Space-Time of a Black hole http://www.youtube.com/watch?v=f0VOn9r4dq8
And remember time comes to a stop (becomes 'eternal') at both the event horizon of black-holes and at the speed of light:
time, as we understand it temporally, would come to a complete stop at the speed of light. To grasp the whole 'time coming to a complete stop at the speed of light' concept a little more easily, imagine moving away from the face of a clock at the speed of light. Would not the hands on the clock stay stationary as you moved away from the face of the clock at the speed of light? Moving away from the face of a clock at the speed of light happens to be the same 'thought experiment' that gave Einstein his breakthrough insight into e=mc2. Albert Einstein - Special Relativity - Insight Into Eternity - 'thought experiment' video http://www.metacafe.com/w/6545941/
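For readers who want the arithmetic behind that thought experiment, here is a minimal sketch (Python) of the textbook Lorentz time-dilation factor, gamma = 1/sqrt(1 - (v/c)^2); it grows without bound as v approaches c, which is the sense in which a clock receding at light speed would appear frozen:

import math

def lorentz_gamma(v_over_c):
    """Time-dilation factor gamma = 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

# One second of the traveller's time corresponds to gamma seconds for a
# stationary observer; gamma diverges as v -> c.
for beta in (0.5, 0.9, 0.99, 0.9999):
    print(beta, "c ->", round(lorentz_gamma(beta), 2))
# 0.5c -> 1.15, 0.9c -> 2.29, 0.99c -> 7.09, 0.9999c -> 70.71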
The implications of having two completely different eternities within reality, one eternity that is very destructive at black holes (in fact black-holes are the greatest source of entropy in the universe) and one eternity that is very ordered at the speed of light, are, at least to those of us who are of a Christian 'eternity minded' persuasion, very sobering to put it mildly. Verse and Music:
Hosea 13:14 I will ransom them from the power of the grave; I will redeem them from death: O death, I will be thy plagues; O grave, I will be thy destruction:,, Dolly Parton - He's Alive - 1989 CMA - music video http://www.youtube.com/watch?v=UbRPWUHM80M
bornagain77
March 30, 2013, 06:08 PM PDT
nightlight, back at 100 in response to my question as to what brought all the 'conscious' matter-energy into being at the big bang, i.e. did matter-energy think itself into existence before it existed?, you stated:
There is always the ‘last opened’ Russian doll, no matter how many you opened. As to what is inside that one, it may take some tricky twists and tinkering before it splits open and the next ‘last opened’ shows itself. Hence, you are always at the ‘last opened’ one.
Well that sure sounds like the most 'unscientific' cop out I have ever seen since we are in fact talking about the instantaneous origination of every thing that you 'just so happen' to have attributed inherent consciousness to. To just leave the whole thing unaddressed since you can't address it with your preferred philosophy is an even worse violation of integrity than the blatant bias you have displayed against all the quantum evidence that goes against your preferred position!
What Properties Must the Cause of the Universe Have? - William Lane Craig - video http://www.youtube.com/watch?v=1SZWInkDIVI “Certainly there was something that set it all off,,, I can’t think of a better theory of the origin of the universe to match Genesis” Robert Wilson – Nobel laureate – co-discover Cosmic Background Radiation
Moreover it is important to note that Einstein, who, I believe, leaned towards Spinoza's panpsychism, made the self-admitted 'greatest blunder' of his scientific career, in not listening to what his very own equation was telling him about reality, due in some part (large part?) to this 'incomplete' (last doll will remain unopened) philosophy that you seem to be so enamored with. In fact it was a theist, a Belgian priest no less, who first brought the full implications of General Relativity to Einstein's attention: Albert Einstein (1879-1955), when he was shown his general relativity equation indicated a universe that was unstable and would 'draw together' under its own gravity, added a cosmological constant to his equation to reflect a stable universe rather than dare entertain the thought that the universe had a beginning.
Einstein and The Belgian Priest, George Lemaitre - The "Father" Of The Big Bang Theory - video http://www.metacafe.com/watch/4279662
Moreover, this is not the only place where Einstein has been shown to be wrong. The following article speaks of a proof developed by legendary mathematician/logician Kurt Gödel, from a thought experiment, in which Gödel showed General Relativity could not be a complete description of the universe:
THE GOD OF THE MATHEMATICIANS - DAVID P. GOLDMAN - August 2010 Excerpt: Gödel's personal God is under no obligation to behave in a predictable orderly fashion, and Gödel produced what may be the most damaging critique of general relativity. In a Festschrift, (a book honoring Einstein), for Einstein's seventieth birthday in 1949, Gödel demonstrated the possibility of a special case in which, as Palle Yourgrau described the result, "the large-scale geometry of the world is so warped that there exist space-time curves that bend back on themselves so far that they close; that is, they return to their starting point." This means that "a highly accelerated spaceship journey along such a closed path, or world line, could only be described as time travel." In fact, "Gödel worked out the length and time for the journey, as well as the exact speed and fuel requirements." Gödel, of course, did not actually believe in time travel, but he understood his paper to undermine the Einsteinian worldview from within. http://www.firstthings.com/article/2010/07/the-god-of-the-mathematicians Physicists continue work to abolish time as fourth dimension of space - April 2012 Excerpt: "Our research confirms Gödel's vision: time is not a physical dimension of space through which one could travel into the past or future." http://phys.org/news/2012-04-physicists-abolish-fourth-dimension-space.html
Moreover NL, contrary to the narrative you would prefer to believe in, it is quantum theory that has been steadily advancing on Einstein's 'incomplete' vision of reality for the last 50 years, and it is certainly not Quantum Mechanics that has been in retreat from Einstein's 'incomplete' view of reality.,, In the following video Alain Aspect speaks of the infamous Bohr-Einstein debates and of the steady retreat that Einstein's initial position has suffered:
Quantum Entanglement – The Failure Of Local Realism - Materialism - Alain Aspect - video http://www.metacafe.com/w/4744145
As an interesting sidelight to this, Einstein hated the loss of determinism that quantum mechanics brought forth to physics, as illustrated by his infamous 'God does not play dice' quote to Niels Bohr, yet actually, quantum mechanics restored the free will of man to its rightful place in a Theistic view of reality,, First by this method,,,
Why Quantum Physics (Uncertainty) Ends the Free Will Debate - Michio Kaku - video http://www.youtube.com/watch?v=lFLR5vNKiSw
And now, more recently, by this method:
Quantum physics mimics spooky action into the past - April 23, 2012 Excerpt: The authors experimentally realized a "Gedankenexperiment" called "delayed-choice entanglement swapping", formulated by Asher Peres in the year 2000. Two pairs of entangled photons are produced, and one photon from each pair is sent to a party called Victor. Of the two remaining photons, one photon is sent to the party Alice and one is sent to the party Bob. Victor can now choose between two kinds of measurements. If he decides to measure his two photons in a way such that they are forced to be in an entangled state, then also Alice's and Bob's photon pair becomes entangled. If Victor chooses to measure his particles individually, Alice's and Bob's photon pair ends up in a separable state. Modern quantum optics technology allowed the team to delay Victor's choice and measurement with respect to the measurements which Alice and Bob perform on their photons. "We found that whether Alice's and Bob's photons are entangled and show quantum correlations or are separable and show classical correlations can be decided after they have been measured", explains Xiao-song Ma, lead author of the study. According to the famous words of Albert Einstein, the effects of quantum entanglement appear as "spooky action at a distance". The recent experiment has gone one remarkable step further. "Within a naïve classical world view, quantum mechanics can even mimic an influence of future actions on past events", says Anton Zeilinger. http://phys.org/news/2012-04-quantum-physics-mimics-spooky-action.html
In other words, if my conscious choices really are just merely the result of whatever state the material particles in my brain happen to be in in the past (deterministic), how in blue blazes are my choices instantaneously affecting the state of material particles into the past? Moreover NL, it is simply preposterous for you, given the key places you refuse to look at evidence (i.e. especially the big bang!), to play all this evidence off as 'mind stuff', or as 'quantum magic'! It is called 'cherry picking' and 'confirmation bias' to do as you are doing with the evidence! Also of recent related note on Einstein from Zeilinger:
Of Einstein and entanglement: Quantum erasure deconstructs wave-particle duality - January 29, 2013 Excerpt: While previous quantum eraser experiments made the erasure choice before or (in delayed-choice experiments) after the interference – thereby allowing communications between erasure and interference in the two systems, respectively – scientists in Prof. Anton Zeilinger's group at the Austrian Academy of Sciences and the University of Vienna recently reported a quantum eraser experiment in which they prevented this communications possibility by enforcing Einstein locality. They accomplished this using hybrid path-polarization entangled photon pairs distributed over an optical fiber link of 55 meters in one experiment and over a free-space link of 144 kilometers in another. Choosing the polarization measurement for one photon decided whether its entangled partner followed a definite path as a particle, or whether this path-information information was erased and wave-like interference appeared. They concluded that since the two entangled systems are causally disconnected in terms of the erasure choice, wave-particle duality is an irreducible feature of quantum systems with no naïve realistic explanation. The world view that a photon always behaves either definitely as a wave or definitely as a particle would require faster-than-light communication, and should therefore be abandoned as a description of quantum behavior. http://phys.org/news/2013-01-einstein-entanglement-quantum-erasure-deconstructs.html
bornagain77
March 30, 2013, 05:24 PM PDT
NL: AmHD:
al·go·rithm, n. A step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps.
red, n. 1. a. The hue of the long-wave end of the visible spectrum, evoked in the human observer by radiant energy with wavelengths of approximately 630 to 750 nanometers; any of a group of colors that may vary in lightness and saturation and whose hue resembles that of blood; one of the additive or light primaries; one of the psychological primary hues. b. A pigment or dye having a red hue. c. Something that has a red hue.
KFkairosfocus
March 30, 2013, 04:38 PM PDT
Chance Ratcliff
This is not a context which is amenable to meaningful communication, as far as I can tell. Perhaps this just isn’t the proper venue.
Indeed perhaps not!Box
March 30, 2013, 04:22 PM PDT
Mung @121, priceless. :DChance Ratcliff
March 30, 2013, 04:21 PM PDT
EA:
BTW, Mung, I hope this helps answer your nagging and heart-felt question about macroevolution. Perhaps there is a reason Nick has gone so silent? There isn’t anything special about macroevolution — it is just microevolution writ large.
So if I buy enough books on micro-evolutionary theory it'll all eventually add up to a book on Macro-Evolutionary Theory? So I don't suppose that micro-evolution is caused by biochemical changes. Tour can't be right, can he?Mung
March 30, 2013, 04:12 PM PDT
From the OP:
In my judgment, a mind incapable of making the requisite distinctions hardly deserves to be taken seriously.
What a strange choice of words.Mung
March 30, 2013, 04:07 PM PDT
kairosfocus #112: You have so prioritised mathematical formalisms and algorithms that much of both historic and current praxis is excluded. Not at all. We are using different semantics. You are simply taking 'algorithmic' much too narrowly, understanding it only as either mathematical formula or a program listing in computer science or engineering class i.e. something that runs on a digital computer. The semantics I use would classify a 'cake baking recipe' as an algorithm. Of course, this algorithm doesn't run on a silicon processor as a program, but instead uses human brain as its CPU and a compiler. Essentially, if you can imagine an android robot doing something, that something is algorithmic in my semantics. In that sense, even historical sciences, such as archeology include 'algorithmic' component (M), as well as (E) and (O). As in the 'cake baking' algorithm, it is the brains of the archeology students which are programmed to operate as a CPU and compilers for executing the algorithms (M) of archeology as a natural science. This is no different than a physics student being taught how use formulas, how to transform expressions, take care of units,... etc. His brain is being transformed to operate as a CPU and compiler for reading physics papers and books, replicating or checking claimed results, extending the calculations and producing new results, etc. While on surface it sounds more 'algorithmic' than instructions on digging out and dusting off the old bones, they are still both algorithmic activities. Basically all sciences are in the above sense of 'algorithm', programs which run on flesh computers, on wetware, our brains and our bodies (the latter is more so in archeology or biology than in theoretical physics). Note though, that algorithmic component (M) of natural science and its stated properties are necessary but not sufficient to have a natural science. I was not defining sufficient conditions, since the point of that post was to explain how some of the ID proponents (such as e.g. S. Meyer) are violating the vital necessary condition for natural science -- the requirement for the algorithmic effectiveness of the postulates. The necessity of such requirement and consequence of violating them was explained in the previous reply.nightlight
nightlight
March 30, 2013 at 03:09 PM PDT
nightlight, thanks for your detailed reply. It's not clear to me that what you've provided constitutes a definition of science which can address the demarcation problem. Perhaps your goal is combining model space, empirical procedures, and operational rules in such a way as to provide apodicticity. You have provided some descriptions which in and of themselves have obvious applications to scientific studies and methods. This is all well and good. However a search for the three terms, "model space", "empirical procedures", and "operational rules" turned up nothing. That does not necessarily invalidate what you propose, but this formulation appears to be your private interpretation and demarcation of what can be considered useful to science and the expansion of human knowledge. You're making some pretty sweeping claims about what is and is not amenable to investigation, all within the context of your private interpretation of what constitutes genuine science, using private definitions and formulations that don't appear to have been subject to criticism by peers capable of evaluating your claims. If you're proposing a paradigm of reasoning that's supposed to account for what can be considered useful and applicable to human knowledge, then it must take for granted consciousness, reason, the correspondence of perception to reality, these things which you appear to have placed outside of usefulness.
"The model space (M) defines an algorithmic model of the problem space via postulates, equations, programs, etc. It’s like scaled down model of the real world, where you can run the model, observe behaviors, measure numbers, then compare these with numbers from empirical observations obtained via (E). A stylized image of this relation is a wind tunnel where you test the wing or propeller shapes to optimize their designs."
It's not clear what you mean by "algorithmic model". A search for the term yielded results ranging from the definition, "A method of estimating software cost using mathematical algorithms based on the parameters which are considered to be the major cost drivers" to a book on "algorithmic and finite model theory" which describes itself as, "Intended for researchers and graduate students in theoretical computer science and mathematical logic," Wikipedia defines "Finite Model Theory" as, "a subarea of model theory (MT). MT is the branch of mathematical logic which deals with the relation between a formal language (syntax) and its interpretations (semantics)." Whether this can be taken as a context for evaluating the usefulness of knowledge acquisition, namely a scientific demarcation, is at best a theoretical exercise and at worst an obfuscation of plain and obvious facts, such as our direct experience with cause and effect relationships. By any definition of algorithmic model theory that I was able to glean from searching, it's a theoretical subject of investigation, not a framework that science fits into. It's quite possible that we will share no context for meaningful discussion because I lack the training and language context for the ideas that you propose. It's also possible that this shotgun blast of terminology that you have peppered the comments with is a way of dismissing our first hand experience of cause-and-effect relationships and burying a discussion of design inferences in a cloud of obfuscatory language. But I'm trying to give you the benefit of the doubt. With regard to the questions you answered, thanks for being as direct as possible. I can only take your simple answers seriously however. For instance, you appeal to "planckian networks" and "planckian nodes" repeatedly. Again, a search for these terms turns up absolutely nothing. You keep inserting this private and context-dependent terminology into the discussion as if it's supposed to provide the necessary clarity for understand your point of view. I assure you this is not the case. I am a person of above-average intelligence, but I am not a theoretical physicist, a theoretical mathematician, or a theoretical computer scientist. Perhaps you are some or all of these things. Perhaps you're privy to a level of knowledge that the rest of us can scarcely imagine. Or maybe you're simply being intentionally elliptical. It's genuinely hard to tell. My first impression of your response was that you autogenerated parts of it. See MarkovLang. Also check out my own research paper, which practically indistinguishable from technobabble. Now let me address your simple answers to my questions.
Q1) Do you agree that chance and physical necessity are insufficient to account for designed objects such as airplanes, buildings, and computers? “Physical necessity” and “chance” are vague. If you have in mind physical laws (with their chance & necessity partitions) as known presently, then they are insufficient.
I'm perfectly happy with your terminology here, physical laws with chance and necessity partitions; and of course they are insufficient. Your answer is the obvious answer, because our repeated and uniform experience with physical reality makes clear that a category of objects exists which is not amenable to explanation by physical processes acting over time. You wrote,
"But in my picture, Planckian networks compute far more sophisticated laws that are not of reductionist, separable by kind, but rather cover physics, chemistry, biology… in one go. Our physical laws are in this scheme coarse grained approximations of some aspects of these real laws, chemical and biological laws similarly are coarse grained approximations of some other aspects of the real laws, i.e. our present natural science chops the real laws like a caveman trying to disassemble a computer for packing it, then trying to put it back together. It’s not going to work. Hence, the answer is that the real laws computed by the Planckian networks via their technologies (physical systems, biochemical networks, organisms), are sufficient. That’s in fact how those ‘artifacts’ were created — as their technological extensions by these underlying networks."
What are these Planckian networks, what is their applicability to understanding physical laws, how are they modeled and simulated, what research is being done, what are the results, how has our understanding of physical laws been broadened by them, and how do they account for all levels of causation, from necessity to the intentional activity of intelligence toward a goal or purpose? Sufficiency must be demonstrated, not presumed. I agree that our understanding of physical laws is incomplete, but presuming to have a higher, more complete explanatory framework for all of physical reality imposes a burden of proof that falls upon you. I'm highly suspicious when I hear claims that our understanding of that which can be determined through our perception, experience, use of language, powers of reason, and procedural experimentation can be superseded by some form of secret knowledge. What you're describing does not make sense of reality; it's at best an unconventional reduction that sounds like some sort of techno-new-ageism. I don't think anyone here is against you having unconventional views on science and philosophy, but your declarations of having some sort of Gnostic synthesis will not do, not if you're unable to make it plain enough.
Q2) Do you agree that intelligent agency is a causal phenomenon which can produce objects that are qualitatively distinguishable from the products of chance and necessity, such as those resulting from geological processes? Assuming ‘intelligent agency’ to be the above Planckian networks (which are conscious, super-intelligent system), then per (Q1) answer, that is the creator of real laws (which combine our physical, chemical, biological… laws as some their aspects). Since this agency operates only through these real laws (they are its computation, which is all it does), its actions are the actions of the real laws, hence there is nothing to be distinguished here, it’s one and the same thing.
I'm assuming intelligent agency to be the purposeful activity of human intelligent agents, or rather that which moves it. I do not accept your imposition of Planckian networks as an explanation for intelligence, unless you can demonstrate their use, and their necessary emergence from, or preeminence to, raw material behaviors. It's not even clear what you mean when you say that agency only operates through these real laws, and that the actions of agents are the actions of these real laws. We don't have a law of agency, or consciousness, and so certainly not knowledge of how "real laws" act to form real choices by real intelligent beings like ourselves. What we do have is first-hand, repeated, and uniform experience of intelligent agents acting in measurable ways, beginning with ourselves -- acting in ways to produce artifacts that are unaccountable to explanation by any known physical laws.
Q3) Do you think there are properties that designed objects can have in common which set them apart from objects explicable by chance and physical necessity? Again this comes back to Q1. If the “physical necessity + chance” refer to those that follow from our physical laws, then yes, designed objects are easily distinguishable. But when we contrive and constrain some setup that way, to comply with what we understand as physical laws, we are lobotomizing the natural computations in order to comply with one specific pattern (one that matches particular limited concept of physical law). Hence, in my picture Q3 is asking whether suitably lobotomized ‘natural laws’ (the real computed laws) will produce distinguishably inferior output to non-lobotomized system (the full natural computations) — obviously yes. It’s always easy to make something underperform by tying its both arms and legs so it fits some arbitrarily prescribed round shape.
Thanks for the straightforward answer that designed objects are easily distinguishable, but I cannot accept your qualification of that point. Here again you hint at a kind of Gnostic synthesis, in which you claim to know that our knowledge of physical laws can be superseded by some other notion, apparently not amenable to investigation, something which has to be believed to be seen. What "natural computations" are being lobotomized, and what demonstrable synthesis shows the inadequacy of our conception of physical laws as they relate to the phenomena they describe?

Again, thanks for a detailed reply. I'd be interested to know if anyone can make sense of it. The purpose of the questions I asked was to establish a common context for reasoning about design inferences, but it appears that you're proposing a radical, unorthodox, and so far unintelligible view of components of reality that apparently is intended to supersede any current conception of physical law. This is not a context which is amenable to meaningful communication, as far as I can tell. Perhaps this just isn't the proper venue. Best, Chance
Chance Ratcliff
March 30, 2013 at 02:54 PM PDT
kairosfocus #114: On redness. Your dismissive objection obviously is rooted in the exchange with BD. Actually, as was pointed out [there was someone in my dept doing this sort of work as research], this is eminently definable empirically, Sorry, I may have not been clear enough what "redness" meant, although it was in the same short list with "consciousness" "mind" "feeling". Hence the "_redness_" I was talking about is the one that answers "what is it like to see red color" - the _redness_ as qualia (cf. 'hard problem of consciousness'). There is nothing in science that explains how this _redness_ comes out of neural activity associated with seeing red color, no matter how much detail one has about that neural activity. It is not merely an explanatory gap between the "two" in the present science, but there aren't even the "two" to have a gap in between, there is just "one" (neural activity), since there is no counterpart for _redness_ in natural science of any sort. It is that what-is-like _redness_, along with all the rest of qualia and mind stuff, that are outside of present natural science. Injecting this "_redness_" as a cog within present natural science is therefore vacuous since nothing follows from it -- it is algorithmically ineffective, a NOP (do nothing) operation. But if you do insist on injecting such algorithmically ineffective cogs, as Stephen Meyer keeps doing with 'consciousness', than whatever it is you're offering is going to trigger a strong immune response from the existent natural sciences which do follow the rule of 'no algorithmically ineffective cogs'. This negative response is of the same nature as strong reactions to someone trying to sneak to the front of a long line waiting to buy tickets -- it is a rejection of a cheater. In science, injecting such gratuitous, algorithmically ineffective cogs is cheating since such cog would be given the space within scientific discipline, without it providing anything in return (in the form of relevant consequences) to the scientific discipline which gave it the space -- it's like a renter refusing to pay the rent, a cheater. Hence, like in regular life, if a cog is to get a residence within a science, it better be able to give something back to the science, some consequence that matters in that science. The basic rule is thus -- only the algorithmically effective elements can be added as cogs to the natural science. Of course, that doesn't preclude _redness_ or _consciousness_ (as mind stuff concepts) from being the research objects of a natural science (such is in neuroscience or cognitive science). In this case they would be researched, seeking for example to discover what is it so special (structurally and functionally) about the kinds of systems which could report such phenomena. What is precluded, though, is injecting such elements as the cogs or givens or postulates. This is like requiring that physicians working in a hospital must have a medical degree. That requirement doesn't imply that patients must have a medical degree, too. The patients could be illiterate, dumb, incompetent,... and still be allowed into the hospital to be treated. They just can't start treating and performing surgeries on other patients. And that is a good thing.nightlight
nightlight
March 30, 2013 at 02:31 PM PDT
William J Murray #113: The issue I have with NL's argument, really, is teleology. Computing must be teleological to account for the presence of complex, functional machines. To solve any complex, specified target, an algorithm must have a goal. I don't see how computing targets and algorithmic goals can exist anywhere except in some form of consciousness, nor can I see how there is a "bottom up" pathway to such machinery regardless of what label one puts on that which is driving the materials and processes. One view of Neural Networks (NN) is as pattern recognizers, such as those used in handwriting and speech recognition systems which can be quite effective at such tasks, operating well with noisy, distorted and damaged/partial patterns. While the learning can be made a lot quicker and more efficient using supervised learning (where an external 'all knowing oracle' provides feedback), the unsupervised learning (no oracle but just local dynamical laws of interaction between nodes & link modifications) can perform as well, given more time, nodes and links. Hence we can ignore below how the learning algorithms were implemented (supervidsed or unsupervised) and simply consider a purely dynamical system (laws/rules of local interactions & link modifications) which can do anything being discussed. In the pattern recognition perspective, one can reverse engineer and find out how the networks learn pattern recognition. A simple visual depiction of the mechanism is a stretched out elastic horizontal surface with many strings attached from above and below pulling the section of surface up or down. The network link adaptations correspond to shortening or lengthening of the stings. Hence, given sufficient number of strings (i.e. number of NN's nodes & links), NN's are capable of reshaping the surface (fitness landscape) to an arbitrary form. The input patterns can be seen as balls being dropped at random places, then rolling on the surface and settling down in the nearest valley. If you allow the surface to vibrate (random noise in NN's operation), then the ball will naturally find deeper valleys, which corresponds to more global forms of optimization. On first glance, none of the above appears to have much in common with goal directed or anticipatory behaviors. In fact, it's all you need. Namely, consider a sequence of video frames of a soccer player kicking a ball and the ball flying into the goal or missing it. Stacking the frames in temporal order gives you a 3D pattern, with height of the pattern being proportional to the duration of the video. But this pattern, just like any other pattern, is as learnable as any in the character recognition tasks. Since pattern recognizing networks can learn how to recognize damaged, distorted and noisy patterns, in the case of this 3D pattern, they can learn how to recognize/identify lower part of the 3D pattern stack (later frames, ball entering or missing the goal), after seeing only the higher parts of the stack (earlier frames, the kick). But this capability to bring up/reconstruct the "later events/states" from the "earlier events/states" is precisely what is normally labeled as anticipation or look ahead or prediction. Once you realize the above poaaibility, the emergence of goal directed NN behavior is self-evident. For example, imagine this NN has a task to control the kick i.e. it is rewarded when the goal is scored, punished on miss. It learns from a series of 3D stacks of patterns that have frame sequences for goal hits and misses (e.g. 
these stacks could be captured by the NN's cams from previous tries). With its partial/damaged pattern recognition capabilities, looking at a series of 3D stacks as one 4D pattern, the NN can learn how to aim the kick to get the ball into the goal, i.e. such network can learn how to control a robotic soccer player. A problem of pole balancing on a moving cart (usually with simulated setups, but some also using real physical carts, motors and poles) is a common toy model for researching this type of NN capabilities (this particular problem is in fact a whole little cottage industry in this field). In short, adaptable networks have no problem with learning (in supervised or unsupervised fashion) anticipatory and goal directed behaviors since such behaviors are merely a special case of the pattern recognition in which the time axis is one dimension of the patterns. Hence adaptable networks can learn to behave as goal directed anticipatory systems. In case of adaptable networks with unsupervised learning (assumes only local dynamics for punishment/reward evaluations by nodes & link modifications), one can say that through the interaction with environment, the networks can spontaneously transform themselves into goal directed anticipatory systems, with the goal being maximizing of the net score (rewards - punishments). The general algorithmic pattern of these goal directed behaviors can be understood via internal modeling by the networks -- the networks build an internal model of the environment and of themselves (ego actor), then play this model forward in model time and space, trying out from the available actions of the ego actor (such as direction and strength of the kick), evaluating the responses of the model environment (such as ball flight path relative to the goal), then picking out the best action of ego actor as the one to execute in the real world (the real kick of the robotic soccer player). In simpler cases, it is also possible to 'reverse engineer' such internal models of the networks by observing them in action and then identifying 'neural correlates' of the components of the model and of their rules of operation (the laws of the model space, i.e. of their internal virtual reality). Hence, the internal modeling by the networks is not merely a contrived explanatory abstraction, but it is an actual algorithmic pattern used by the networks. Of course, none of the above addresses the question, 'what is it like to be' such anticipatory, goal directed system, i.e. what about the 'mind stuff' aspect of that whole process? Where does that come from? The post #109 has a sketch of how the 'mind stuff' aspect can be modeled as well (model based on panpsychism). In that quite economical model of consciousness, the answer is -- 'it is like' exactly as the description sounds like i.e. it is what one could imagine going through their mind while performing such anticipatory, goal directed tasks. Regarding the "bottom up pathway" -- the unsupervised networks require only the rules of operation for nodes (evaluation algorithm of rewards & punishments from incoming signals) and for link changes (how are links from a node changed after the evaluations by the node). These are all simple, local, dynamical rules, no more expensive or burdensome in terms of assumptions than postulates of physics which specify how the fields and particles interact. 
For example, a trading style network can be constructed in which some signals are labeled as 'goods and services', others as 'money' or 'payments', others as 'bills' or 'fees' and nodes as 'trading agents'... etc. Then one would define rules of trading, how costs & prices are set, how the trading partners are picked (which defines the links e.g. these could be random initially), how the links (trade volumes) are changed based on gains and losses at the node, etc., like a little virtual economy with each agent operating via simple rules. All that was said previously about optimization, goal oriented anticipatory behaviors, internal modeling by adaptable networks arises spontaneously within simple unsupervised (e.g. trading) networks sketched above. Given enough nodes and links, such network can optimize for arbitrarily complex fitness landscape (which need not be static, and may include other networks as part of network's 'environment' affecting the shape of the fitness landscape). The computational power of such network depends on how complex evaluation algorithms can nodes execute and on how many nodes and links are available. Since one can trade off between these two aspects, a large enough network can achieve arbitrarily high level of computational power (intelligence) with the simplest nodes possible (such as those with just two states; cf. "it from bit") and simple unsupervised learning/trading rules. In other words, the intelligence is additive with this type of systems. One can thus start with the dumbest and simplest possible initial nodes & links. Provided these 'dumb' elements can replicate (e.g. via simple cloning and random connections into the existent network), arbitrarily high level of intelligence can be achieved merely by letting the system run and replicate by the simple rules of the 'dumb' bottom level elements. No other input or external intelligence is needed, beyond what is needed to have such 'dumb' elements exist at all. Note that in any science, you need some set of givens, the postulates taken as is, which can't tell you what is the go of them. If you dream of a science with no initial givens, you already have it, it is saying absolutely nothing about anything at all, just basking in its pure, perfect nothingness and completeness. As explained in posts 19 and 35, if you start with ground level networks of this type at Planckian scale, there is enough room for networks which are computationally 10^80 times more powerful than any technology (including brains) constructed from our current "elementary" particles as cogs. To us, the output of such immensely powerful intelligence would be indistinguishable from godlike creations.nightlight
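As a concrete illustration of the 'recall a damaged pattern' capability described above, here is a minimal Python sketch (an editorial illustration, not nightlight's own construction) of a Hopfield-style network: patterns are stored with a simple Hebbian rule, and a corrupted probe is iterated back toward the nearest stored pattern. The 'anticipation' idea is the same completion trick applied to a pattern whose later rows are the frames being predicted.

import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    # Hebbian (outer-product) storage of +/-1 patterns; zero the diagonal.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, probe, steps=10):
    # Iterate s <- sign(W s); the state settles into the nearest stored 'valley'.
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Two random 100-bit patterns stand in for learned configurations.
patterns = rng.choice([-1, 1], size=(2, 100))
W = train_hopfield(patterns)

# Damage 20% of the first pattern, then let the network complete it.
probe = patterns[0].copy()
flipped = rng.choice(100, size=20, replace=False)
probe[flipped] *= -1

restored = recall(W, probe)
print("bits recovered:", int((restored == patterns[0]).sum()), "out of 100")

The 'vibrating surface' in the comment corresponds to adding a little noise to the update rule so the state can escape shallow spurious valleys.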
nightlight
March 30, 2013 at 02:16 PM PDT
William J Murray (and kf), Thanks for bringing clarity to his flaw(s). I 'sensed' something was not connecting in his logic and his stated worldview but could not put my finger on it exactly.

bornagain77
March 30, 2013 at 05:52 AM PDT
PS: On redness. Your dismissive objection obviously is rooted in the exchange with BD. Actually, as was pointed out [there was someone in my dept doing this sort of work as research], this is eminently definable empirically, as has been done for colour theory, foundational to display technology, printing, etc. It turns out to be strongly based in reflection and/or emission of light in a band from a bit over 600 nm to a bit under 800 nm, depending on the individual [there was someone else in the Dept who was colour blind], lighting conditions, and the like. Redness is an objective, measurable characteristic of objects, and being appeared to redly is something that can be empirically investigated once we are willing to allow that people do perceive, can evaluate and are often willing to accurately report what they experience.

This extends to other areas of interest to science and technology, such as smell, sound/hearing, pain, etc. It turns out that our sensory systems use something pretty close to log compression of signals, linked to the Weber-Fechner law of fractional change sensitivity: the just-noticeable change goes as dx/x, and integrating dx/x gives a logarithm, so the link to logs is obvious. For light I think it is 10 decades of sensitivity. Indeed, it was discovered that as a result the classical magnitude scale of stars 1 - 6 [cf. here] was essentially logarithmic. 0 to 120 dB with sound is 12 decades. This is as close as the multimedia PC you are using, and as important as the techniques used in bandwidth compression that allow us to cram so many signals into limited bandwidth.
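The logarithmic bookkeeping mentioned above is easy to make concrete. A short Python sketch (standard textbook formulas, not drawn from the comment itself) shows how decibels and the stellar magnitude scale both compress huge intensity ratios into small numbers:

import math

def decibels(intensity_ratio):
    # Sound intensity level relative to a reference: 10 * log10(I / I0).
    return 10.0 * math.log10(intensity_ratio)

def magnitude_difference(flux_ratio):
    # Pogson's rule for stellar brightness: delta_m = -2.5 * log10(F1 / F2).
    return -2.5 * math.log10(flux_ratio)

print(decibels(1e12))              # 12 decades of sound intensity -> 120 dB
print(magnitude_difference(0.01))  # a star 100x fainter is 5 magnitudes dimmer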
kairosfocus
March 30, 2013 at 04:25 AM PDT
BA77: Panpsychism in and of itself is actually fairly close to what I believe about the physical world - that mind is omnipresent, animating it. The issue I have with NL's argument, really, is teleology. Computing must be teleological to account for the presence of complex, functional machines. To solve any complex, specified target, an algorithm must have a goal. I don't see how computing targets and algorithmic goals can exist anywhere except in some form of consciousness, nor can I see how there is a "bottom up" pathway to such machinery, regardless of what label one puts on that which is driving the materials and processes.

Whether one calls it computing mind or matter governed by chance and necessity, whatever it is must be able to imagine a goal that does not yet exist in reality and coherently compute solutions to the acquisition of that target. Without that, whether the computing is done by bottom-up mind or matter, the process is just flailing about blindly, which is not good enough to get to the goal. IMO, the ability to imagine a goal requires a duality (X as current state and not-X as desired state) and some form of consciousness that can perceive and will a course towards a solution to not-X. I don't find NL's "bottom-up mind" explanation adequate to the task of sufficient explanation, but panpsychism by itself is - IMO - well inside the tent of ID.

Also, if NL finds "consciousness" too "unscientific" an entity to use in any scientific explanation because it gives the foes of ID too much fodder for their protestations, I submit that the foes of ID don't care how careful or specified our arguments are when they are willing to burn language and logic down to avoid the conclusion.
William J Murray
March 30, 2013 at 04:07 AM PDT
NL: Let's cut to the chase scene, on definitions of science [insofar as such is possible -- no detailed one size fits all and only sci def'n is possible and generally accepted], that are not ideologically loaded. Ideologically loaded? Yes, loaded with scientism, and metaphysically question-begging. Here is Lewontin:
. . . to put a correct view of the universe [--> Notice, worldview level issue] into people's heads we must first get an incorrect view out . . . the problem is to get them to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations [--> Notice the assumed materialist worldview and strawman-laced contempt for anything beyond that], and to accept a social and intellectual apparatus, Science, as the only begetter of truth [--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting] . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated . . . [["Billions and billions of demons," NYRB, Jan 1997. To cut off the usual atmosphere poisoning talking point on quote-mining, cf. the wider cite and notes here on, and the other four cites that fill out the point, showing how pervasive the problem is.]
This is simply unacceptable: materialist ideology dressing itself up in the "how dare you challenge us" holy lab coat. And, as the onward link shows, it is not just a personal idiosyncrasy of Lewontin's; this is at the pivot of a cultural civil war, not only in science but across the board. In response, let me clip first two useful dictionary summaries -- dictionaries at best seek to summarise good usage -- from before the present controversies muddied the waters:
science: a branch of knowledge conducted on objective principles involving the systematized observation of and experiment with phenomena, esp. concerned with the material and functions of the physical universe. [Concise Oxford, 1990 -- and yes, they used the "z" Virginia!] scientific method: principles and procedures for the systematic pursuit of knowledge [”the body of truth, information and principles acquired by mankind”] involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses. [Webster's 7th Collegiate, 1965 . . . used to be my Mom's]
Notice, no ideological loading, no ideological agendas and no question-begging. Next, from my IOSE appendix on methods I can put up a framework that allows us to explore what science should be like:
science, at its best, is the unfettered — but ethically and intellectually responsible — progressive, observational evidence-led pursuit of the truth about our world (i.e. an accurate and reliable description and explanation of it), based on:
a: collecting, recording, indexing, collating and reporting accurate, reliable (and where feasible, repeatable) empirical -- real-world, on the ground -- observations and measurements,
b: inference to best current -- thus, always provisional -- abductive explanation of the observed facts,
c: thus producing hypotheses, laws, theories and models, using logical-mathematical analysis, intuition and creative, rational imagination [[including Einstein's favourite gedankenexperiment, i.e. thought experiments],
d: continual empirical testing through further experiments, observations and measurement; and,
e: uncensored but mutually respectful discussion on the merits of fact, alternative assumptions and logic among the informed. (And, especially in wide-ranging areas that cut across traditional dividing lines between fields of study, or on controversial subjects, "the informed" is not to be confused with the eminent members of the guild of scholars and their publicists or popularisers who dominate a particular field at any given time.)
As a result, science enables us to ever more effectively (albeit provisionally) describe, explain, understand, predict and influence or control objects, phenomena and processes in our world.
Now, NL, observe your own attempt:
In any natural science, you need 3 basic elements: (M) – Model space (formalism & algorithms) (E) – Empirical procedures & facts of the “real” world (O) – Operational rules mapping numbers between (M) and (E) The model space (M) defines an algorithmic model of the problem space via postulates, equations, programs, etc. It’s like scaled down model of the real world, where you can run the model, observe behaviors, measure numbers, then compare these with numbers from empirical observations obtained via (E) . . . . scientific postulates can make no use of concepts such as ‘mind’ or ‘consciousness’ or ‘god’ or ‘feeling’ or ‘redness’ since no one knows how to formalize any of these, how to express them as algorithms that do something useful, even though we may have intuition that they do something useful in real world. But if you can’t turn them into algorithmic form, natural science has no use for them. Science seen via the scheme above, is in fact a “program” of sorts, which uses human brains and fingers with pencils as its CPU, compiler and printer. The ID proponents unfortunately don’t seem to realize this “little” requirement. Hence, they need to get rid of “mind” and “consciousness” talk, which are algorithmically vacuous at present, and provide at least a conjecture about the ‘intelligent agency’ which can be formulated algorithmically might be, at least in principle (e.g. as existential assertion, not explicit construction).
1 --> You have so prioritised mathematical formalisms and algorithms that much of both historic and current praxis is excluded. Scope fails at outset: factually inadequate at marking a demarcation line between commonly accepted science and not_science. Unsurprising as after decades it is broadly seen that the conventionally labelled sciences cannot be given a precising definition that includes all and only sciences and excludes all not_science. 2 --> The first basic problem with the worldview lock out you attempt is that by excluding mind, you have locked out the scientists themselves. The first fact we have -- whatever its ontological nature -- is that we are intelligent, conscious, reasoning, choosing, minded people. It is through that, that we access all else. 3 --> This then leads you to exclude an empirical fact and to distort what design thinkers and theorists do. It is an observable fact, that intelligent designers -- human and animal [think: Beavers and their dams adapted to stream circumstances] -- exist and create designed objects, processes etc. Thus, such are causes that observably act in the world. It is then scientifically reasonable to inquire whether there are reliably observable markers that indicate design -- the process of specifying and creating chosen configurations to achieve a desired function -- as cause. 4 --> As the very words of your own post demonstrate, functionally specific complex organisation of parts and associated information, FSCO/I . . . the operationally relevant form of complex, specified information as discussed since Orgel and Wicken in the 1970's . . . is a serious candidate sign. [Cf. 101 here on, noting the significance of Chi_500 = I*S - 500, bits beyond the solar system threshold; in light of its underlying context.] One, that is observable, quantifiable, subject to reduction to model form, and testable. Where, on billions of cases, without exception, FSCO/I is demonstrably a reliable marker of design as causal process. Which, as the linked will show, is specifically applicable to cell based life, especially the highly informational and functionally specific string structure in D/RNA and proteins. 5 --> You will notice that no ontological inferences have been made, and that the NCSE etc caricature on inferring to God and/or the supernatural is false, willfully false and misleading. Indeed, from being "immemorial" in the days of Plato, to the title of Monod's 1970 book, we can see that causal explanations -- common in science [e.g. for how a fire works or how a valley is eroded by flowing water, or how light below a frequency threshold fails to trigger photoemission of electrons in a metal surface etc] -- are based, aspect by aspect, on mechanical necessity, and/or chance and/or the ART-ificial. Cf here at UD for how this aspect by aspect causal investigation is reduced to an "algorithmic" procedure -- a flowchart -- by design thinkers. 6 --> That flowchart is essentially the context of the eqn above. 7 --> Similarly, it is sufficient that, per experience and observation, intelligent agents exist and indeed that this is foundational to the possibility of science and engineering. 8 --> So, we have empirical warrant that such intelligent designs are possible and that they show certain commonly seen characteristics as FSCO/I and irreducible complexity whereby a cluster of core components properly arranged and fitted together, are needed for a function. 9 --> This last brings out a significant note. 
A nodes and arcs structure can be used to lay out the "wiring diagram" [I cite Wicken] of a functionally specific object or process, and this can then be reduced to an ordered set of strings. [AutoCAD etc do this all the time.] So, description and discussion of strings . . . *-*-* . . . is WLOG. And also, we have a reduction of organisation to associated implicit information. This also allows testing of the islands of function effect through injection of noise and the tolerance for such. 10 --> This leads to the next point, the von Neumann Kinematic self replicator [vNSR] which is a key feature of cells, cf. 101 here. A representational description is used with a constructor facility to replicate an entity. This has considerable implications on design of the world of cell based life as the living cell is an encapsulated, gated metabolic automaton with a vNSR. That implies that a causal account of OOL has to account, in light of empirical warrant, for:
Now, following von Neumann generally (and as previously noted), such a machine uses . . .
(i) an underlying storable code to record the required information to create not only (a) the primary functional machine [[here, for a "clanking replicator" as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also (b) the self-replicating facility; and, that (c) can express step by step finite procedures for using the facility; (ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with (iii) a tape reader [[called “the constructor” by von Neumann] that reads and interprets the coded specifications and associated instructions; thus controlling: (iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication (including replication of the constructor itself); backed up by (v) either:
(1) a pre-existing reservoir of required parts and energy sources, or (2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.
Also, parts (ii), (iii) and (iv) are each necessary for and together are jointly sufficient to implement a self-replicating machine with an integral von Neumann universal constructor. That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [[Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]
11 --> The only empirically warranted, needle in haystack plausible explanation is design. This also extends to the 10 - 100 mn bit increments in genetic information required to account for major body plans. +++++++++ In short, you have evidently begged a few questions and set up then knocked over some straw men. I suggest some rethinking. KFkairosfocus
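For readers who want the 'flowchart' and the Chi_500 metric from point 4 in runnable form, here is a rough Python sketch of the stated formula only (the example bit counts are invented for illustration, and S is the specificity flag as described in the comment):

def chi_500(I, S):
    # Chi_500 = I*S - 500, per the formula quoted above; positive values
    # are claimed to lie beyond the 500-bit solar-system threshold.
    return I * S - 500

def explanatory_filter(low_contingency, I, S):
    # Aspect-by-aspect decision sketch: necessity, then chance, then design.
    if low_contingency:
        return "mechanical necessity"
    if chi_500(I, S) <= 0:
        return "chance"
    return "design"

print(explanatory_filter(low_contingency=False, I=1000, S=1))  # design
print(explanatory_filter(low_contingency=False, I=200, S=1))   # chance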
kairosfocus
March 30, 2013 at 03:26 AM PDT
nightlight you claim,,
These networks can create such information not because they are conscious but because they can compute anything that is computable (provided they are large enough)
So I take it you hold that the brain is 'computing' novel functional information, and computers don't because the computers aren't large enough yet. Yet if you hold that our brains are merely 'computing' new functional information then it seems you have a problem with the second law,,,
Alan’s brain tells his mind, “Don’t you blow it.” Listen up! (Even though it’s inchoate.) “My claim’s neat and clean. I’m a Turing Machine!” … ‘Tis somewhat curious how he could know it.
Are Humans merely Turing Machines? Alan Turing extended Godel’s incompleteness theorem to material computers, as is illustrated in this following video:
Alan Turing & Kurt Godel – Incompleteness Theorem and Human Intuition – video http://www.metacafe.com/w/8516356
And it is now found that,,,
Human brain has more switches than all computers on Earth – November 2010 Excerpt: They found that the brain’s complexity is beyond anything they’d imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study: …One synapse, by itself, is more like a microprocessor–with both memory-storage and information-processing elements–than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth. http://news.cnet.com/8301-27083_3-20023112-247.html
Yet supercomputers with many switches have a huge problem dissipating heat,,,
Supercomputer architecture Excerpt: Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers.[4][5][6] The large amount of heat generated by a system may also have other effects, such as reducing the lifetime of other system components.[7] There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures. http://en.wikipedia.org/wiki/Supercomputer_architecture
But the brain, though having as many switches as all the computers on earth, does not have such a problem dissipating heat,,,
Does Thinking Really Hard Burn More Calories? – By Ferris Jabr – July 2012 Excerpt: So a typical adult human brain runs on around (a remarkably constant) 12 watts—a fifth of the power required by a standard 60 watt lightbulb (no matter what type of thinking or physical activity is involved). Compared with most other organs, the brain is greedy; pitted against man-made electronics, it is astoundingly efficient. http://www.scientificamerican.com/article.cfm?id=thinking-hard-calories
Moreover, one source for the heat generated by computers is caused by the erasure of information from the computer in logical operations,,,
Landauer’s principle Of Note: “any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of an (specific) amount (at least kT ln 2) of heat.,,, Landauer’s Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008). http://en.wikipedia.org/wiki/Landauer%27s_principle
And any computer, that has anything close to as many switches as the brain has, this source of heat will become prohibitive:
Quantum physics behind computer temperature Excerpt: It was the physicist Rolf Landauer who first worked out in 1961 that when data is deleted it is inevitable that energy will be released in the form of heat. This principle implies that when a certain number of arithmetical operations per second have been exceeded, the computer will produce so much heat that the heat is impossible to dissipate.,,, ,, the team believes that the critical threshold where Landauer’s erasure heat becomes important may be reached within the next 10 to 20 years. http://cordis.europa.eu/search/index.cfm?fuseaction=news.document&N_RCN=33479
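For concreteness, the kT ln 2 figure quoted above works out as follows (a back-of-envelope Python sketch using standard constants, not taken from the linked articles):

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(T):
    # Minimum heat released per bit irreversibly erased: k_B * T * ln 2.
    return k_B * T * math.log(2)

print(f"{landauer_limit(300):.3e} J per bit at 300 K")   # about 2.9e-21 J
print(f"{landauer_limit(310):.3e} J per bit near body temperature")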
Thus the brain is either operating on reversible computation principles no computer can come close to emulating (Charles Bennett), or, as is much more likely, the brain is not erasing information from its memory as material computers are required to do, because our memories are stored on a ‘spiritual’ level rather than on a material level,,, Research backs up this conclusion,,,
A Reply to Shermer Medical Evidence for NDEs (Near Death Experiences) – Pim van Lommel Excerpt: For decades, extensive research has been done to localize memories (information) inside the brain, so far without success.,,,,So we need a functioning brain to receive our consciousness into our waking consciousness. And as soon as the function of brain has been lost, like in clinical death or in brain death, with iso-electricity on the EEG, memories and consciousness do still exist, but the reception ability is lost. People can experience their consciousness outside their body, with the possibility of perception out and above their body, with identity, and with heightened awareness, attention, well-structured thought processes, memories and emotions. And they also can experience their consciousness in a dimension where past, present and future exist at the same moment, without time and space, and can be experienced as soon as attention has been directed to it (life review and preview), and even sometimes they come in contact with the “fields of consciousness” of deceased relatives. And later they can experience their conscious return into their body. http://www.nderf.org/NDERF/Research/vonlommel_skeptic_response.htm
To add more support to this view that ‘memory/information’ is not stored in the brain but on a higher 'spiritual' level, one of the most common features of extremely deep near death experiences is the ‘life review’ where every minute detail of a person’s life is reviewed:
Near Death Experience – The Tunnel, The Light, The Life Review – video http://www.metacafe.com/watch/4200200/
Thus the evidence strongly supports the common sense conclusion that humans are not Turing Machines! Note:
The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) - Abel 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8 ) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work
Quote:
"Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. " Norbert Wiener created the modern field of control and communication systems, utilizing concepts like negative feedback. His seminal 1948 book Cybernetics both defined and named the new field.
bornagain77
March 30, 2013 at 03:03 AM PDT
William J Murray,,
I don’t really see how panpsychism is at odds with ID.
Well, I don't know all the nuances of panpsychism, but I do know that, as with Darwinists, he has no empirical evidence for his claim. Particularly this:
These networks can create such information not because they are conscious but because they can compute anything that is computable (provided they are large enough). Since functional information is computable, networks can generate it, provided such product is considered “reward” during the network’s learning phase.
i.e. Alan Turing & Kurt Godel - Incompleteness Theorem and Human Intuition - video (notes in video description) http://www.metacafe.com/watch/8516356/ Here is what Gregory Chaitin, a world-famous mathematician and computer scientist, said about the limits of the computer program he was trying to develop to prove evolution was mathematically feasible: At last, a Darwinist mathematician tells the truth about evolution - VJT - November 2011 Excerpt: In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.” https://uncommondescent.com/intelligent-design/at-last-a-darwinist-mathematician-tells-the-truth-about-evolution/ Here is the video where, at the 30:00 minute mark, you can hear the preceding quote from Chaitin's own mouth in full context: Life as Evolving Software, Greg Chaitin at PPGC UFRGS http://www.youtube.com/watch?v=RlYS_GiAnK8 Related quote from Chaitin: The Limits Of Reason - Gregory Chaitin - 2006 Excerpt: an infinite number of true mathematical theorems exist that cannot be proved from any finite system of axioms.,,, http://www.umcs.maine.edu/~chaitin/sciamer3.pdf Information. What is it? - Robert Marks - lecture video (With special reference to ev, AVIDA, and WEASEL) http://www.youtube.com/watch?v=d7seCcS_gPk From David Tyler: How do computer simulations of evolution relate to the real world? - October 2011 Excerpt: These programs ONLY work the way they want because as they admit, it only works because it has pre-designed goals and fitness functions which were breathed into the program by intelligent designers. The only thing truly going on is the misuse and abuse of intelligence itself. https://uncommondescent.com/darwinism/from-david-tyler-how-do-computer-simulations-of-evolution-relate-to-the-real-world/comment-page-1/#comment-401493 Conservation of Information in Computer Search (COI) - William A. Dembski - Robert J. Marks II - Dec. 2009 Excerpt: COI (Conservation Of Information) puts to rest the inflated claims for the information generating power of evolutionary simulations such as Avida and ev. http://evoinfo.org/publications/bernoullis-principle-of-insufficient-reason/bornagain77
bornagain77
March 30, 2013 at 02:14 AM PDT
CR @104: can you share your definition of science here? While trying to make sense of your post at #101, I was unable to find any search results for "algorithmically effective postulates", "algorithmically effective form", or "algorithmically effective elements". You won't probably find them since these are my terms. First, you need a general schematics of natural science sketched in the post 49. In any natural science, you need 3 basic elements: (M) - Model space (formalism & algorithms) (E) - Empirical procedures & facts of the "real" world (O) - Operational rules mapping numbers between (M) and (E) The model space (M) defines an algorithmic model of the problem space via postulates, equations, programs, etc. It's like scaled down model of the real world, where you can run the model, observe behaviors, measure numbers, then compare these with numbers from empirical observations obtained via (E). A stylized image of this relation is a wind tunnel where you test the wing or propeller shapes to optimize their designs. The "algorithmically effective postulates" are then the core cogs of the (M) which define how the computations in (M) are done, e.g. via Maxwell eqs. for EM fields, or Newton laws for mechanics. The "algorithmically effective" attribute of postulates means that the postulates have to provide something that does something algorithmic and useful in the space (M). This is analogous to teaching programmers to add only code that does something useful, not code which does nothing useful, such as x=x. For example, scientific postulates can make no use of concepts such as 'mind' or 'consciousness' or 'god' or 'feeling' or 'redness' since no one knows how to formalize any of these, how to express them as algorithms that do something useful, even though we may have intuition that they do something useful in real world. But if you can't turn them into algorithmic form, natural science has no use for them. Science seen via the scheme above, is in fact a "program" of sorts, which uses human brains and fingers with pencils as its CPU, compiler and printer. The ID proponents unfortunately don't seem to realize this "little" requirement. Hence, they need to get rid of "mind" and "consciousness" talk, which are algorithmically vacuous at present, and provide at least a conjecture about the 'intelligent agency' which can be formulated algorithmically might be, at least in principle (e.g. as existential assertion, not explicit construction). Since cellular biochemical networks are the real intelligent agency (potent distributed computers) anyway, they don't even need to strain much to find it. In fact James Shapiro, among others, is already saying nearly as much (although his understanding of adaptable networks could use some crash course). It may be, of course, that some do know what is needed, but deliberately insist on injecting those algorithmically vacuous elements for their own reasons (religious, ideological, etc). That's when the immune system of the scientific 'priesthood' kicks in and the hostility toward ID flares up. This immune system won't tolerate hostile antigens being injected into their social organism. Hence, even in this case, it is still more useful to drop the non-algorithmic baggage which is guaranteed to trigger the strong immune response, replace it with algorithmically effective work alike (such as biochemical networks). Delayed gratification trumps instant gratification in the long run. 
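One way to picture the (M)/(E)/(O) scheme is a toy example (an editorial Python sketch, not nightlight's own): the model space is a free-fall formula, the empirical side is a set of noisy 'measurements', and the operational rule is simply the agreement that both report distance in metres at the same clock times so the numbers can be compared.

import random

G = 9.81  # m/s^2, the model's postulated constant

def model_drop_distance(t):
    # (M) model space: an algorithmically effective postulate -- run it, get numbers.
    return 0.5 * G * t**2

def measure_drop_distance(t):
    # (E) empirical procedure: faked here as noisy measurements of a dropped object.
    return 0.5 * 9.8 * t**2 + random.gauss(0, 0.05)

# (O) operational rule: compare model numbers with measured numbers point by point.
for t in [0.5, 1.0, 1.5, 2.0]:
    m, e = model_drop_distance(t), measure_drop_distance(t)
    print(f"t={t:.1f}s  model={m:.3f}  measured={e:.3f}  residual={m - e:+.3f}")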
Also, a few questions would help the rest of us gauge where you stand on design as an objective phenomenon: Now we're entering the realm of far out speculations. Before stepping out on that limb, since my pigeons are not shaped the right way for those pigeonholes, few preliminary explanations and definitions are needed. I'll take the "agency" to be something like Kantian "Ding an sich" included above to satisfy the common need for reaching an appearance of causal closure, however contrived such catch-all mailbox may be. Keeping in mind that any imagery is a limited tool capturing at best only some aspects of meaning, for this aspect, disliking finalities as a matter of principle, I prefer an image of Russian dolls, with the last unopened doll labeled by convention the 'agency'. In that perspective, there is always a chance that what appeared as a solid core doll, actually has a little thin line around the waste, unnoticed in all previous inspections, but which with some ingenuity and if twisted just the right way, might split open revealing an even smaller and more solid new "core" doll. Hence, the 'agency' here is not an entity in the heavens or above or the largest one, but exactly the opposite in all these attributes -- under and inside all other entities, the smallest one of them all. Each outer doll is thus shaped around the immediate inner doll, in its image, as it were. The ontology, including consciousness, sharpness of qualia and realness of reality, thus emanate and diffuse from the inside out, from the more solid, more real, smaller, quicker... to the more ephemeral, more dreamlike, larger, more sluggish... These layers being shaped in each other's image, there is common functional and structural pattern which is inherited and propagated between the layers, unfolding and shaping itself from inside out. Structurally, this pattern is a network with adaptable links, while functionally the patterns operate as distributed, self-programming computers (such as brains, which are networks of neurons, or as neural networks, which are mathematical models of such distributed computers). At the innermost layer, the elemental nodes (or agents) of the networks have only 2 states (Wheeler's "it from bit" concept), labeled as +1 (reward) and -1 (punishment). Ontologically, these two abstract node states correspond to the two elemental mind-stuff states, or qualia values, +1 = (happiness, joy, pleasure,...) and -1 = (unhappiness, misery, pain,...). For convenience they are labeled here in terms of familiar human experiences which are some of the counterparts (depending on context or location within our networks) at our level of these two elemental mind-stuff states as they emanate and get inherited from the inside out. Algorithmically, these networks function as self-programming distributed computers, operating as optimizers which seek to maximize the net (abstract) "reward" - "punishment" scores, the latter being sums of the +1 and -1 node values. As explained in post #107, the common algorithmic pattern used by the networks for this optimization is internal modeling of their environment and of self (ego actor), then playing this model forward in model time against different actions of the ego actor, as a what-if game, measuring and tallying resulting punishments & rewards (as encoded in the model's knowledge/patterns of the environment and self), and finally picking the "best" action of the ego actor to perform in the real world. 
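The 'internal model played forward as a what-if game' pattern described here can be sketched generically in a few lines of Python (a one-step lookahead over candidate actions with a toy reward function; nothing in it is specific to the Planckian-network proposal):

def choose_action(state, actions, model, reward):
    # Play each candidate action forward in the internal model, score the
    # predicted outcome, and return the best-scoring action.
    scored = [(reward(model(state, a)), a) for a in actions]
    return max(scored)[1]

# Toy example: the 'ego actor' kicks a ball and wants it to land at position 10.
def model(state, kick_strength):
    return state + kick_strength          # internal model of the environment

def reward(predicted_position):
    return -abs(predicted_position - 10)  # closer to the goal scores higher

best = choose_action(state=3, actions=range(15), model=model, reward=reward)
print("chosen kick strength:", best)      # 7, which lands exactly on the goal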
By virtue of the ontological association between the 'mind stuff' elements and the computations of the nodes, the above computational process is experienced by the network as "thinking through" this what-if game. Hence the computations of networks and conscious thinking are inseparable in this perspective. Since the optimization relies on internal modeling and a what-if game between sub-networks, optimization at the level above -- of total (rewards - punishments) -- would seek to harmonize the actions of the subnetworks so that they are maximally predictable to each other (an idea similar to Leibniz's monads). Hence mutual predictability is one of the subgoals of the optimization process. That is the reason the laws of nature seem surprisingly understandable and knowable -- they are optimized to be that way; they 'love' to be known and understood by other agencies.

This harmonization process proceeds from smaller scales to larger scales via the construction of ever larger computational technologies, just as we do it in our technological society (from PCs and corporate networks to massive data centers and the internet). The key technologies computed this way are the physical layer (particles, fields, interactions, physical constants and laws), the chemical layer, cellular biochemical networks, the layer of organisms, the layer of societies of organisms, and ecosystems (multiple societies of organisms).

Note that physical particles, their laws, and all higher-level objects and their laws are properties of the activation patterns unfolding on the Planckian networks, analogous to gliders and other patterns unfolding on the grid of Conway's Game of Life (a small sketch of that analogy follows below). Hence the Planckian networks are not computing physical, chemical, biological,... laws separately. Such separation is an artifact of our selecting some aspects of those patterns and extracting regularities related to those isolated aspects. The real laws (activation patterns) as computed by the networks are therefore not separable into, or reducible to, some subset of laws (such as biological to physical), since there are no such subsets -- the pattern is computed in a single go, with all its aspects rolling at once, all in one and the same set of "flickers" of the network's cells. One can thus view the chemical, biological,... laws as small, subtle and purposeful adjustments or refinements of the physical laws when those systems operate in more complex settings, such as a complex molecule or a cell. Recall also that physical laws are already fundamentally statistical (via quantum theory). Hence these subtle adjustments for chemical or biological patterns do not violate the statistical laws of physics -- what is "random" to our present physical laws is actually non-random.

A good analogy for this kind of relation between laws is the "laws" of traffic flow, in which the cars are the "elementary" objects of the theory. These "objects" obey some statistical laws and regularities of traffic flow. As far as such laws can tell, individual cars move "randomly" and only their statistics have regularities. In fact, the individual cars are not moving randomly: each is guided by its driver for purposes much too subtle to be captured by the crude traffic-flow laws. Yet such intelligent guidance of each car from the inside, toward higher, subtler purposes, does not violate the laws of traffic flow, since these are only statistical laws.
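Returning to the Game of Life analogy above, here is a small self-contained sketch (standard Conway rules, nothing specific to this thread): the "glider" is not an extra entity added to the grid, only a pattern carried along in the one and the same set of cell updates.

```python
# Conway's Game of Life step, to make the analogy concrete: higher-level
# "objects" (gliders) are just patterns in the underlying cell "flickers".

from collections import Counter

def life_step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))   # the same glider shape, translated across the grid
```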
In the same way, our physical and chemical laws are much too crude to capture the subtle intelligent guidance of the particles used for the biological and higher laws. Yet, as in the traffic example, such intelligent guidance toward higher purposes does not violate the physical and chemical laws. With the above in mind, let's check the questions:

Q1) "Do you agree that chance and physical necessity are insufficient to account for designed objects such as airplanes, buildings, and computers?"

"Physical necessity" and "chance" are vague. If you have in mind the physical laws (with their chance and necessity partitions) as presently known, then they are insufficient. But in my picture the Planckian networks compute far more sophisticated laws that are not reductionist or separable by kind, but cover physics, chemistry, biology... in one go. Our physical laws are, in this scheme, coarse-grained approximations of some aspects of these real laws; the chemical and biological laws are similarly coarse-grained approximations of other aspects. In other words, our present natural science chops up the real laws like a caveman disassembling a computer to pack it and then trying to put it back together -- it's not going to work. Hence the answer is that the real laws computed by the Planckian networks via their technologies (physical systems, biochemical networks, organisms) are sufficient. That is in fact how those 'artifacts' were created -- as technological extensions of these underlying networks.

Q2) "Do you agree that intelligent agency is a causal phenomenon which can produce objects that are qualitatively distinguishable from the products of chance and necessity, such as those resulting from geological processes?"

Taking the 'intelligent agency' to be the above Planckian networks (which are conscious, super-intelligent systems), then per the answer to Q1 that agency is the creator of the real laws (which combine our physical, chemical, biological... laws as some of their aspects). Since this agency operates only through these real laws (they are its computation, which is all it does), its actions are the actions of the real laws; there is nothing to be distinguished here -- it is one and the same thing. Of course, per Q1, the true laws computed here are not the sum of our physical, chemical and biological laws. The latter are only crude, one-dimensional shadows of the former, each of our present laws capturing only some aspect of the regularity of the continuous computational process which keeps the universe going from moment to moment, for each "elementary" particle, each atom, each molecule, each cell,... The real laws are not reducible to their components, hence they are not reductionist. You cannot stop some aspects of the computation (biological) and ask what the other aspects (physical) would do then. They are all the same "flickers" of the same Planckian nodes: stop one kind of pattern and you stop them all.

Q3) "Do you think there are properties that designed objects can have in common which set them apart from objects explicable by chance and physical necessity?"

Again this comes back to Q1. If "physical necessity + chance" refers to what follows from our physical laws, then yes, designed objects are easily distinguishable. But when we contrive and constrain some setup that way, to comply with what we understand as physical laws, we are lobotomizing the natural computations in order to force them into one specific pattern (one that matches a particular, limited concept of physical law).
Hence, in my picture, Q3 asks whether a suitably lobotomized 'natural law' (the real computed laws) will produce output distinguishably inferior to that of the non-lobotomized system (the full natural computations) -- obviously yes. It is always easy to make something underperform by tying both its arms and legs so that it fits some arbitrarily prescribed round shape. As you can see, due to the unconventional shape of my pigeons, they just don't fit into your pigeonholes, and the answers won't tell you anything non-trivial. The real answers on substance are in the introductory description.

nightlight
March 29, 2013 at 11:45 PM PDT
"What exactly are you asking, i.e. what is a 'purely natural/material process'? If it is what I call that, then the answer is trivial -- cells and humans are 'purely natural/material processes' (keep in mind that this is panpsychism, where mind-stuff is not unique to humans or to living organisms), and they generate code and symbolic information."

A semiotic structure is also found in an automated fabric loom, but the source of the organization is no more in the fabric loom than it is in the body. There is no mechanism within the loom to establish the relationships required for it to operate, and that mechanism doesn't exist in the cell either. And the fact that the cell replicates with inheritable variation from a genotype to a phenotype is of no importance, because without the establishment of a semiotic state there is no genotype to begin with. What is unaccounted for is a mechanism capable of establishing a semiotic state.

Upright BiPed
March 29, 2013 at 09:03 PM PDT
bornagain77 #105: "Basically you are claiming that purely material/natural processes, because they are 'conscious', can create functional information."

You missed the point. These networks can create such information not because they are conscious but because they can compute anything that is computable (provided they are large enough). Since functional information is computable, networks can generate it, provided such a product counts as "reward" during the network's learning phase.

The common algorithm adaptable networks use to maximize the net (rewards - punishments) figure is to create an internal model of the environment delivering those punishments and rewards. Within the internal model the network also has an 'ego actor', which it runs forward in the model's space-time (in its head, as it were) against the model's environment 'actor', testing different actions of the 'ego actor' and evaluating rewards and punishments, then picking the best action of the 'ego actor' to execute in the real environment. This works like a chess player playing imagined moves -- his own, then his opponent's, then his own,... -- on his mind's internal model of the position, until the best move is found to be played on the real board. These kinds of internal models can be reverse-engineered in experiments with neural networks set to learn something.

The specified information created in such a "natural" process is the internal model, which encodes the knowledge (as learned patterns) about the environment (its laws or patterns of behavior) and about the 'ego actor' (self). Hence networks, which can be implemented in many ways and out of many materials (e.g. via solid-state circuits), can create specified functional information. Cellular biochemical networks are one example of such systems; they encode this information (about environment and self) in their DNA and in epigenetic form. Human or animal brains are another example. The comment on consciousness was inserted merely so you wouldn't jump on the "distinction" between natural/material and "conscious intelligence" (which in your scheme has a different nature). It plays no role in the conclusions.

"I am merely pointing out that, regardless of whether matter-energy is conscious or not, you are in the same exact empirical boat as Darwinists are in that you have ZERO evidence that purely material/natural processes, whether they are conscious or not, can create ANY functional information."

Cells do it, and they are natural/material processes. Or are you playing the 'no true Scotsman' game, i.e. as soon as something violates your postulate, you reclassify it to the other side, so that your flexible category "material/natural" fulfills whatever you want it to fulfill?

Note that simple physical processes can be seen as producing 'functional information', provided you define it right. E.g. when a rock falls into mud and then gets removed, there is an exact, complex imprint -- a negative of the rock's features -- in that mud. Such an imprint can then cast a replica positive of the rock's shape, hence you can call the negative a code of the rock's shape. More generally, an interaction between any two physical systems leaves an "imprint" of one in the other, capturing information about each other in great detail (provided you have the right decoder to extract it). The laws of physics just happen to allow this kind of imprinting and mutual encoding between systems. The point being, you need much more precise language before you can make categorical pronouncements of the sort cited above.
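As a toy illustration of the claim that an adaptable network can come to encode functional information about its environment when that information is rewarded during the learning phase, here is a sketch; the tiny threshold network, the XOR "environment pattern" and the random hill-climbing rule are all assumptions invented for this example:

```python
# Hedged sketch: a small adaptable network's links are adjusted during a
# learning phase so that rewarded outputs become more likely; the resulting
# weights encode information about the environment's regularity (here, XOR).

import random

def environment_pattern(a, b):
    """The regularity to be learned (exclusive-or)."""
    return a ^ b

def network_output(weights, a, b):
    """Two hidden threshold units feeding one output threshold unit."""
    h1 = 1 if weights[0]*a + weights[1]*b + weights[2] > 0 else 0
    h2 = 1 if weights[3]*a + weights[4]*b + weights[5] > 0 else 0
    return 1 if weights[6]*h1 + weights[7]*h2 + weights[8] > 0 else 0

def net_reward(weights):
    """+1 for each correct prediction, -1 for each wrong one (max 4)."""
    return sum(1 if network_output(weights, a, b) == environment_pattern(a, b) else -1
               for a in (0, 1) for b in (0, 1))

weights = [random.uniform(-1, 1) for _ in range(9)]
for _ in range(50000):                          # learning phase: random hill climbing
    trial = [w + random.gauss(0, 0.3) for w in weights]
    if net_reward(trial) >= net_reward(weights):
        weights = trial

print(net_reward(weights))                      # typically 4 once the pattern is encoded
```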
If you're trying to make a pro-ID argument with such declarations, that's one of the poorer ways to do it, since it needlessly drags in piles of shaky, ill-defined concepts. It's like building a house on a swamp.

nightlight
March 29, 2013 at 08:46 PM PDT
I don't really see how panpsychism is at odds with ID.

William J Murray
March 29, 2013 at 08:12 PM PDT
Basically you are claiming that purely material/natural processes, because they are 'conscious', can create functional information. I am merely pointing out that, regardless of whether matter-energy is conscious or not, you are in the same exact empirical boat as Darwinists in that you have ZERO evidence that purely material/natural processes, whether they are conscious or not, can create ANY functional information. That is the exact demarcation point, as far as empirical evidence is concerned, that will separate you, and Darwinists, from pseudo-science! Frankly, you should have gotten along quite swell with the Darwinists, as far as I can tell, since you are claiming basically the same things about what we should see in reality -- but I guess you just weren't atheistic enough for them. Similar to James Shapiro's 'natural genetic engineering' predicament!

bornagain77
March 29, 2013 at 08:05 PM PDT
