Uncommon Descent Serving The Intelligent Design Community

Optimus, replying to KN on ID as ideology, summarises the case for design in the natural world


The following reply by Optimus to KN in the TSZ thread is far too good not to headline as an excellent summary of the case for design as a scientifically legitimate view, rather than the mere “Creationism in a cheap tuxedo” ideology, motivated and driven by anti-materialism and/or a right-wing, theocratic, culture-war mentality, commonly ascribed to “Creationism” by its objectors:

______________

>> KN

It’s central to the ideological glue that holds together “the ID movement” that the following are all conflated: Darwin’s theories; neo-Darwinism; modern evolutionary theory; Epicurean materialistic metaphysics; Enlightenment-inspired secularism. (Maybe I’m missing one or two pieces of the puzzle.) In my judgment, a mind incapable of making the requisite distinctions hardly deserves to be taken seriously.

I think your analysis of the driving force behind ID is way off base. That’s not to say that persons who advocate ID (including myself) aren’t sometimes guilty of sloppy use of language, nor am I making the claim that the modern synthetic theory of evolution is synonymous with materialism or secularism. Having made that acknowledgement, though, it is demonstrably true that (1) metaphysical presuppositions absolutely undergird much of the modern synthetic theory. This is especially true with regard to methodological naturalism (of course, MN is distinct from ontological naturalism, but if, as some claim, science describes the whole of reality, then reality becomes coextensive with that which is natural). Methodological naturalism is not the end product of some experiment or series of experiments. On the contrary, it is a ground rule that excludes a priori any explanation that might be classed as “non-natural”. Some would argue that it is necessary for practical reasons; after all, we don’t want people attributing seasonal thunderstorms to Thor, do we? However, science could get along just as well as at present (even better, in my view) if the ground rule were simply that any proposed causal explanation must be rigorously defined and that it shall not be accepted except in light of compelling evidence. Problem solved! Though some fear “supernatural explanation” (which is highly definitional) overwhelming the sciences, such concerns are frequently oversold. Interestingly, the much maligned Michael Behe makes very much the same point in his 1996 Darwin’s Black Box:

If my graduate student came into my office and said that the angel of death killed her bacterial culture, I would be disinclined to believe her…. Science has learned over the past half millennium that the universe operates with great regularity the great majority of the time, and that simple laws and predictable behavior explain most physical phenomena.
Darwin’s Black Box pg. 241

If Behe’s expression is representative of the ID community (which I would venture it is), then why the death-grip on methodological naturalism? I suggest that its power lies in its exclusionary function. It rules out ID right from the start, before even any discussions about the empirical data are to be had. MN means that ID is persona non grata, thus some sort of evolutionary explanation must win by default. (2) In Darwin’s own arguments in favor of his theory, he relies heavily on metaphysical assumptions about what God would or wouldn’t do. Effectively he uses special creation by a deity as his null hypothesis, casting his theory as the explanatory alternative. Thus the adversarial relationship between Darwin (whose ideas are foundational to the MST) and theism is baked right into The Origin. To this very day, “bad design” arguments in favor of evolution still employ theological reasoning. (3) The modern synthetic theory is often used in the public debate as a prop for materialism (which I believe you acknowledged in another comment). How many times have we heard the famed Richard Dawkins quote to the effect that ‘Darwin made it possible to be an intellectually fulfilled atheist’? Very frequently evolutionary theory is impressed into service to show the superfluousness of theism or to explain away religion as an erstwhile useful phenomenon produced by natural selection (or something to that effect). Hardly can it be ignored that the most enthusiastic boosters of evolutionary theory tend to fall on the atheist/materialist/reductionist side of the spectrum (e.g. Eugenie Scott, Michael Shermer, P.Z. Myers, Jerry Coyne, Richard Dawkins, Sam Harris, Peter Atkins, Daniel Dennett, Will Provine). My point, simply stated, is that it is not at all wrong-headed to draw a connection between the modern synthetic theory and the aforementioned class of metaphysical views. Can it be said that the modern synthetic theory (am I allowed just to write Neo-Darwinism for short?) doesn’t mandate nontheistic metaphysics? Sure. But it’s just as true that they often accompany each other.

In chalking up ID to a massive attack of confused cognition, you overlook the substantive reasons why many (including a number of PhD scientists) consider ID to be a cogent explanation of many features of our universe (especially the biosphere):

-Functionally-specified complex information [FSCI] present in cells in prodigious quantities
-Sophisticated mechanical systems at both the micro and macro level in organisms (many of which exhibit IC)
-Fine-tuning of fundamental constants
-Patterns of stasis followed by abrupt appearance (geologically speaking) in the fossil record

In my opinion, the presence of FSCI/O and complex biological machinery is a very powerful indicator of intelligent agency, judging from our uniform and repeated experience. Also note that none of the above reasons employ theological presuppositions. They flow naturally, inexorably from the data. And, yes, we are all familiar with the objection that organisms are distinct from artificial objects, the implication being that our knowledge from the domain of man-made objects doesn’t carry over to biology. I think this is fallacious. Everyone acknowledges that matter inhabiting this universe is made up of atoms, which in turn are composed of still other particles. This is true of all matter, not just “natural” things, not just “artificial” things – everything. If such is the case, then must not the same laws apply to all matter with equal force? Whence comes the false dichotomy between “natural” and “artificial”? If design can be discerned in one case, why not in the other?

To this point we have not even addressed the shortcomings of the modern synthetic theory (excepting only its metaphysical moorings). They are manifold, however: evidential shortcomings (e.g. lack of empirical support), unjustified extrapolations, question-begging assumptions, ad hoc rationalizations, tolerance of “just so” stories, narratives imposed on data instead of gleaned from data, conflict with empirical data from generations of human experience with breeding, etc. If at the end of the day you truly believe that all ID has going for it is a culture war mentality, then may I politely suggest that you haven’t been paying attention.>>

______________

Well worth reflecting on, and Optimus deserves to be headlined. END

Comments
NL: Pardon, but -- with all due respect -- ignoring cogent correction from several sources then repeating the same talking points ad nauseam will not work at UD. Except to classify you in the category of the talking point pushers. Please, think again. KF
kairosfocus
April 3, 2013 at 3:06 AM PDT
NL you state:
So, that’s a counter-example invalidating your and Dr Abel’s claims of impossibility of such natural processes, not an “experimental proof” of anything as you keep relabeling it.
So you have an 'example' of the null being falsified but you have no actual 'experimental proof' of the null being falsified? Such as say a single functional protein or a molecular machine arising by your 'neural network' method? How convenient! Seems to me you are the one doing some major relabeling as to what constitutes falsification in science. Shoot you have even relabeled all of science just so that it conveniently can't include any Theistic 'mind stuff' (or any 'random' Darwinian stuff) but just so happens to conveniently include your false idol MATRIX version of 'mind stuff'.,,, Sure must be nice to practice science in such a way that you can guarantee only your theory will be considered 'scientific' beforehand.
bornagain77
April 3, 2013 at 2:57 AM PDT
PeterJ #190 nightlight: "Namely, if the natural process has some very simple intelligence front loaded, such as working like a neural network, then such intelligence is additive, hence it can accumulate any level of intelligence needed to explain the intelligence implied by the biological artifacts." The `simple intelligence' you describe as being front loaded, why should it be `simple'? By Ockham's razor, the simplest model that suffices to explain the phenomenon would be the preferable one. Note that the dumber the initial network, the more layers you need to reach a given target level of intelligence (target as deduced from biological artifacts via the ID detection method). Hence, the minimum needed front loaded level of intelligence may be determined by the number of layers of networks we can fit between the lowest (Planckian scale) and the highest level (cellular biochemical networks). Also, could this `simple intelligence' you talk of be that of a `mind'? Mind stuff (consciousness, qualia) is a fact of personal experience. Hence it needs explaining. My preference is panpsychism, where the elemental building blocks, such as the nodes of Planckian networks, already have built-in 'mind stuff' as the fundamental driver of their actions/decisions. Earlier posts #58 and #109 describe a possible model for amplification and composition of this elemental mind stuff into the mind stuff as we experience it.
nightlight
April 3, 2013 at 12:14 AM PDT
Chance Ratcliff #178: ID accepts that, in principle, random mutations are perfectly capable of explaining certain microevolutionary changes, such as bacterial drug resistance. Why would they concede even that, when no one knows how to evaluate the odds that, out of all possible alterations of DNA consistent with the laws of physics & chemistry, random picks of such alterations would suffice to produce the observed rate of beneficial mutations? Just because one can observe beneficial mutations in the lab or in nature, that doesn't mean the pick of DNA alteration was random among all possible alterations. It only means that DNA transformed in a beneficial manner in a given amount of time, but implies nothing about the nature of the guidance (intelligently guided or random). Suppose someone claims, and shows, they can get triple 1 by rolling 3 dice in 10 or fewer throws, at least half of the time. How would you know whether it was random or cheating (intelligently guided)? You calculate the size of the event space N when rolling 3 dice, which is N=6^3=216 combinations: 1=(1,1,1), 2=(1,1,2),... 216=(6,6,6). The odds of not getting (1,1,1) in 1 throw are 215/216. The odds of not getting (1,1,1) in 2 throws are (215/216)^2,... the odds of not getting (1,1,1) in 10 throws are (215/216)^10 = 95.47 %, hence the chance of achieving (1,1,1) in 10 or fewer tries is 100-95.47=4.53 %. So, a random process couldn't be getting (1,1,1) in 10 or fewer throws 50% of the time, but would get it only 4.53 % of the time. Hence, the process was intelligently guided. No one has a clue how to calculate such an event space (and the weights of its different configurations), to show that a random pick among all such accessible configurations, given the population size and the number of alterations tried in a given time, would yield the observed rate of beneficial mutations. All they can do is measure spontaneous mutation rates, but those dice throws were spontaneous, too (on video they looked just like real random throws). Spontaneous is not a synonym for random. Spontaneous means not induced by external deliberate interference by the experimenters. But how it was guided beyond that, you can't tell without having a probabilistic model of the event space, such as the above dice model, where you enumerate all accessible alternate configurations, then assign probabilities to events in that space that follow from the laws of physics & chemistry. I have yet to see any calculation like that (full quantum theoretic computations for a DNA-sized molecule to calculate exact odds of different adjacent states are out of the question by a long, long shot). For example, cellular biochemical networks, being networks with adaptable links, are intelligent anticipatory systems (that's valid whether they are a computing technology of the Planckian networks or not), and they could have computed the DNA alterations which improve the odds of the beneficial mutations above the (unknown) odds of a random pick among all accessible configurations. Without knowing what the latter odds are, they have no way of telling them apart from the observed spontaneous mutation rate. Consider the analogous phenomenon in the evolution of technologies -- there is some observed rate of innovations. The relabeling of spontaneous mutations as "random" mutations in biology would be analogous to claiming that any innovation that didn't come from government sponsored labs, with official seal, is random, i.e. that some manufacturing error or copying error in software gave rise to a new version of Windows or a new model of a car.
It would be absurd to concede "random" in such a situation. Hence, by conceding the "randomness" attribute of observed spontaneous mutations, ID proponents are setting themselves up to have to accept any genetic novelty that can be observed to happen spontaneously in the lab or in nature as being the result of "random" mutation (such as the rapid adaptations of those isolated lizards on an Adriatic island recently). That's a very bad place to be at, since you never know how rapid intelligently guided (e.g. by biochemical networks) evolution can be and how much novelty can be observed. Paradoxically, with the above concession, more rapid evolution, which was supposed to be an ally of ID, becomes stronger "proof" of the neo-Darwinian claim that "random" mutation can produce such rapid evolution. Yet nothing of the sort follows from mere observation of the "spontaneous" mutation rate, however fast or slow it may be, just as it doesn't follow for other observed instances of "microevolution" already conceded. Even randomly induced mutations, e.g. via radiation, which turn out beneficial, are not proof that the resulting beneficial DNA change isn't a result of intelligent repair of the radiation damage by intelligent processes such as the biochemical networks, rather than being just a randomly altered structure struck by a gamma photon. For example, we could look at analogous "induced mutation" via damage in examples of the evolution of human technologies (which are obviously intelligently guided). Say a hacker vandalizes Microsoft's Windows source code by deleting some function, plus all of its backups. When programmers try compiling the source, they get a compiler error because of the missing function. Then they discover the function is missing in the backups. With no other way out, they just rewrite the function from scratch. It may easily happen that the new version is better than the old; hence, looking from outside (that's all we can do with molecules), it appears as if the "induced random mutation" of the source code has improved the source code. In fact, it was the same intelligence which created that source (human programmers) which produced the improvement. The "randomly induced mutation" is not a synonym for "induced random mutation." Only the first one is the fact; the second one is an unproven conjecture. Therefore, even the "induced mutation" via random damage which results in beneficial innovation is not proof that the beneficial innovation itself was produced by the random damage, even though the random damage was a trigger for the improvement (analogous to the deletion triggering the fresh rewrite and improvement of the deleted function). The only way one could prove that random damage produced beneficial innovation (instead of merely being a trigger for an intelligent process) is by computing the event space and the correct odds of such improvement via random damage, exactly as in the spontaneous mutation case or in the dice example.
nightlight
April 2, 2013 at 11:51 PM PDT
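As a quick numerical check of the dice arithmetic in nightlight's comment above, here is a minimal Python sketch (illustrative only, not anything posted in the thread); it reproduces the quoted ~4.53% figure both analytically and by simulation:

    import random

    # Chance of rolling (1,1,1) with three dice at least once in 10 throws.
    p_single = 1 / 6**3                    # 1/216 per throw
    p_within_10 = 1 - (1 - p_single)**10   # analytic value, about 4.53%
    print(f"analytic:  {p_within_10:.4%}")

    # Monte Carlo check of the same quantity.
    trials = 100_000
    hits = sum(
        any(tuple(random.randint(1, 6) for _ in range(3)) == (1, 1, 1)
            for _ in range(10))
        for _ in range(trials)
    )
    print(f"simulated: {hits / trials:.4%}")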
Hi Nightlight, I've been following this discussion as best I can, and have found it very interesting. However, I am no scientist, and can't claim to fully understand your argument in its entirety. Therefore I was wondering if you could help me out a little by giving me a quick overview of your main point (pretty much in layman's terms too, please)? Having explained my position here, and my need of your assistance to understand your point better, could you explain the following statement and answer some questions for me: "Namely, if the natural process has some very simple intelligence front loaded, such as working like a neural network, then such intelligence is additive, hence it can accumulate any level of intelligence needed to explain the intelligence implied by the biological artifacts." The 'simple intelligence' you describe as being front loaded, why should it be 'simple'? Also, could this 'simple intelligence' you talk of be that of a 'mind'? And if it is categorically not of a 'mind', how do you know that?
PeterJ
April 2, 2013 at 11:16 PM PDT
Phinehas #187: Storage and retrieval may not be the same thing as thinking. Biological complexity only implies the ability to compute anticipatory (intelligent) algorithms, not the ability to think (which is much too vague and anthropomorphic a term anyway). Neural networks with unsupervised learning can do that via simple physics-like interactions (see post #116, https://uncommondescent.com/intelligent-design/optimus-replying-to-kn-on-id-as-ideology-summarises-the-case-for-design-in-the-natural-world/#comment-451321), without anyone programming such anticipatory algorithms into them.
nightlight
April 2, 2013 at 9:32 PM PDT
BA77 #185: NL you claim that you have empirical proof and then state nightlight: Read that post #179 again if you missed it (there is a link to a paper). No matter how much complex specified information is encoded in the DNA & proteins, the Imax value, the # of information bits storable and retrievable from the neural network, can exceed it (Imax is proportional to k*N^2, where N is the number of nodes, k is the number of bits per link, which can be as large as you want). BA77: So you think presenting empirical proof for your claim is just programming in whatever number(s) you want into some `neural network' get some numbers out and that you don't actually have to produce any real world evidence for novel functional proteins or DNA sequences? Perhaps we need to define empirical proof a little more clearly? Why are you putting words in my mouth? I restated what I said previously. You were the only one talking about "empirical evidence", not me. What I said throughout, there as well, is that natural processes, such as unsupervised neural networks, can generate any required amount of complex specified information (CSI). I gave you the evidence that what I said is true. The implication of what I said is that whatever CSI is observed in a biological artifact is explicable by a natural process, provided nature uses a neural-network-based pregeometry (Planck scale physics; network models of that scale already exist). You and Dr Abel (is Cain on it, too?) claim that natural processes cannot produce CSI, which is incorrect. Namely, if the natural process has some very simple intelligence front loaded, such as working like a neural network, then such intelligence is additive, hence it can accumulate any level of intelligence needed to explain the intelligence implied by the biological artifacts. So, that's a counter-example invalidating your and Dr Abel's claims of impossibility of such natural processes, not an "experimental proof" of anything, as you keep relabeling it.
nightlight
April 2, 2013 at 9:23 PM PDT
nightlight:
No matter how much complex specified information is encoded in the DNA & proteins, the Imax value, the # of information bits storable and retrievable from the neural network, can exceed it.
Storage and retrieval may not be the same thing as thinking. Someone posted recently about the qualitative difference between Shannon information and what we typically understand as information derived from intelligence. I don't think the capacity to store information or even to 'learn' information is equivalent to the kind of creative things the mind can do.
Phinehas
April 2, 2013 at 9:18 PM PDT
StephenB #184 Why would you cringe? The process by which the scientific inference to design is made is not synonymous with the philosophical/religious implications that may follow from it. There are no indicators for "mind" or "consciousness" in ID methodology--only the inferred presence of an intelligent agent. I cringe because mixing a perfectly valid empirical observation (the ID design detection) with claims of "mind" or "consciousness" allows those who don't like the philosophical or religious implications to disqualify it as non-science, since it claims to infer the "mind" or "consciousness", which is not scientifically valid (within present natural science, which lacks a model of 'mind stuff' and an objective empirical way to detect it). The only thing that ID design detection in biology implies is that an intelligent process (or agent) produced those artifacts, not whether such a process or agent had a mind or consciousness, as Meyer claims. He can't know that, much less demonstrate it scientifically. Neither Stephen Meyer, nor anyone else, has any way of demonstrating scientifically even that his wife, who is in front of him and telling him she has it, has a "mind", let alone claiming to infer that something no one has ever observed has it. That's pure gratuitous self-sabotage, a complete waste of a valid ID inference, by a needless leap too far. So, it is the same cringe I had as I watched Magnus Carlsen needlessly self-destruct against Svidler in the critical last round chess game of the London tournament (winner gets to challenge world champion Anand). Luckily, the only other contender for the 1st place, Vlad Kramnik, self-destructed as well a bit later.
nightlight
April 2, 2013 at 8:48 PM PDT
NL you claim that you have empirical proof and then state:
Read that post #179 again if you missed it (there is a link to a paper). No matter how much complex specified information is encoded in the DNA & proteins, the Imax value, the # of information bits storable and retrievable from the neural network, can exceed it (Imax is proportional to k*N^2, where N is number of nodes, k is number of bits per link which can be as large as you want).
So you think presenting empirical proof for your claim is just programming in whatever number(s) you want into some 'neural network', getting some numbers out, and that you don't actually have to produce any real world evidence for novel functional proteins or DNA sequences? Perhaps we need to define empirical proof a little more clearly?
Empirical proof is "dependent on evidence or consequences that are observable by the senses. Empirical data is data that is produced by experiment or observation."
I know it is probably a bit beneath a man of your caliber, but could you actually go to the trouble of showing us exactly which novel functional proteins have been generated by your 'neural network' in real life? I don't know of any examples from bacteria that you can refer to, but who knows, perhaps you've designed hundreds of proteins on 'neural network' computers and we just don't know about them yet:
bornagain77
April 2, 2013 at 8:21 PM PDT
nightlight
I have heard it (and cringed) many times from him, e.g. a google search returns 36,400 hits, with him declaring that the implication of the "signature in the cell" is that it is the product of an "intelligent mind."
Why would you cringe? The process by which the scientific inference to design is made is not synonymous with the philosophical/religious implications that may follow from it. There are no indicators for "mind" or "consciousness" in ID methodology--only the inferred presence of an intelligent agent.
StephenB
April 2, 2013 at 8:08 PM PDT
BA77 #180: NL, since you cannot produce any empirical proof for this claim,,, Read that post #179 again if you missed it (there is a link to a paper). No matter how much complex specified information is encoded in the DNA & proteins, the Imax value, the # of information bits storable and retrievable from the neural network, can exceed it (Imax is proportional to k*N^2, where N is the number of nodes, k is the number of bits per link, which can be as large as you want). And again Abel directly challenges ANY scenarios such as yours to falsify the null,,, The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated,, That is all about the guessing setup he uses, described in #179. But Abel's FSC pattern guessing setup is only one possible scheme (corresponding to GA or RM+NS methods) one can try for generating the CSI observed in the DNA and proteins of live cells. That guessing method doesn't work, which is merely a rephrased ancient result about the incompressibility of random data. Abel's restriction doesn't apply to CSI generated by neural networks or by a computer program, or by the human brain, for that matter. NNs, computer programs or the human brain, for example, can generate any amount of CSI. The CSI only means that you have two large matching patterns A and B, e.g. with A corresponding to some subset of the DNA code and B corresponding to some well adapted phenotypic traits or requirements. One can say that B specifies A in the sense that well fitting phenotypic traits specify requirements on what encoding the DNA needs to have. For example, if an animal lives in a cold climate, the DNA code for longer fur or thicker layers of fat is specified by these phenotypic requirements. The C in CSI only means that there is a large enough number of such matching elements between A and B (hence many bits of information). CSI is not a synonym for Abel's search method (or Dembski's assisted search), but an independent concept which long predates Abel, Dembski and ID. Note also that, unlike a regular computer program, which can also produce any amount of CSI but needs a programmer, neural networks don't need a programmer, since unsupervised learners are self-programming (they only need simple interactions).
nightlight
April 2, 2013 at 7:46 PM PDT
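The Imax claim in nightlight's comment above can be made concrete with a few lines of arithmetic. A minimal Python sketch, using the BCPNN capacity formula he quotes elsewhere in the thread (the node, precision, and column counts below are arbitrary illustrative values, not numbers from any source):

    # Capacity bound quoted in the thread for a BCPNN-style network:
    #   Imax = k * N^2 / 2 * (1 - 1/H)  bits
    def imax(n_nodes: int, k_bits_per_link: int, h_columns: int) -> float:
        return k_bits_per_link * n_nodes**2 / 2 * (1 - 1 / h_columns)

    # The bound grows quadratically with node count N, so it can be pushed past
    # any fixed threshold (e.g. the 500-bit figure discussed in the thread).
    for n in (100, 1_000, 10_000):
        print(n, f"{imax(n, k_bits_per_link=8, h_columns=10):.3e} bits")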
Chance Ratcliff #178: What specifically is Meyer stating that you take issue with, in regards to consciousness? Instead of "consciousness talk" generalities, if you could quote something relevant that Meyer actually said, it would make it possible to talk about his comments, rather than your interpretations of them. I have heard it (and cringed) many times from him, e.g. a google search returns 36,400 hits, with him declaring that the implication of the "signature in the cell" is that it is the product of an "intelligent mind." The mind and consciousness are synonymous in this context, in that neither has any algorithmically effective definition, i.e. nothing logically follows from them. Hence they are a parasitic element of the "algorithmic component" (M) of his ID theory. Let me recall the general structure of natural science (post #49):
(M) - Model space (formalism & algorithms)
(E) - Empirical procedures & facts of the "real" world
(O) - Operational rules mapping between (M) and (E)
The Model space (M) is the generator (algorithms, formulas) of meaningful statements (numbers, words, pictures,..) of that science. Component (E) contains empirical facts and procedures (algorithms, techniques...) for extracting them from the real world. The component (O) prescribes the mapping between (M) and (E), e.g. given a statement S of some prediction by (M), (O) prescribes how to pick an element of (E) to compare with S (e.g. which kind of measurement should yield a number to compare with S). All postulates/hypotheses of the theory are elements of the model space (M), along with all logical deductions from them. If you take issue with the fact that designed things have features in common that are only known to come about as the result of a mind, note that this is empirical, and positing mind as a source of such features is a necessity of that observation. So, what you are saying is that his (E) space has an "E-mind" element and his (M) space has some element M-mind, as a primitive (a postulate, since it doesn't follow from any other element of (M) in ID or in other natural science). Hence the mapping (O) maps trivially between the two orphaned primitives E-mind and M-mind, i.e. it is a contentless tautology. Nothing else follows in the model space (M) from M-mind in an algorithmic or deductive manner. Hence M-mind is an orphan (or parasitic) element with no logically deducible consequence within (M). If you take it out of (M), nothing else in the model space (M) changes. On the other hand, E-mind is also orphaned within (E), since there is no way to detect it objectively. Nothing measures it, and it does nothing as far as present natural science (which is not a synonym for "personal experience") can tell. You can't take an objectively observed statement "I think" to mean that E-mind was detected (a tape recorder or computer program or parrot can produce that sound as well). Hence, he has two orphaned elements, E-mind and M-mind, in two spaces, whose sole role in the theory is to point to each other and do nothing else. Drop both, and nothing changes: the ID detection methods still point to an intelligent process or intelligence as the designer of the artifacts of life. The unfortunate part is that "intelligence" or "intelligent process" (with the term "mind" dropped) can be defined in an algorithmically effective way (e.g. via internal modeling and anticipatory computations, like in AI) for the model space (M), and it can be objectively detected within the (E) space (via ID detection methods).
All that is implied by the ID argument is "intelligence" (or an intelligent process), which is a scientifically uncontroversial concept. If he and others (since that's a pretty common reflex among ID researchers) were to simply state that ID detection methods point to an "intelligent process" (which is an algorithmically effective scientific concept), then ID would have as easy an entry into natural science as the Big Bang theory did. Even though atheists didn't like the Big Bang theory either (because of the informal implications of the universe having a beginning), it wasn't labeled as non-science but merely remained a minority view, until it was confirmed experimentally. Someone may object: well, even though we can't measure "mind" we know we have mind or consciousness, and it's a problem of present natural science, which has a gap in that place. That may well be so, but then what about the wisdom of sticking one leg of the ID chair into that gap, when there is plenty of floor room nearby without gaps?
nightlight
April 2, 2013 at 6:56 PM PDT
NL, if you don't mind a personal question, are you Jewish or perhaps Muslim?
The Origin of Science Jaki writes: Herein lies the tremendous difference between Christian monotheism on the one hand and Jewish and Muslim monotheism on the other. This explains also the fact that it is almost natural for a Jewish or Muslim intellectual to become a pa(n)theist. About the former Spinoza and Einstein are well-known examples. As to the Muslims, it should be enough to think of the Averroists. With this in mind one can also hope to understand why the Muslims, who for five hundred years had studied Aristotle's works and produced many commentaries on them failed to make a breakthrough.,,, http://www.columbia.edu/cu/augustine/a/science_origin.html panpsychism is the view that all matter has a mental aspect, Pantheism is the belief,,, that the universe (or nature) is identical with divinity.
bornagain77
April 2, 2013 at 6:07 PM PDT
NL, since you cannot produce any empirical proof for this claim,,,
there is no limit how much specified complex information they can learn or generate as memories of learned patterns
,,,Perhaps it would be well for you to quit claiming it. Particularly the 'generate' portion of the claim you made!!! Playing games and trying to make exceptions for what type of information it generates does not really matter to me, for your broad claim is that it can generate as such. Since you cannot produce even one example to refute the null, then that should give a smart guy like you a major clue that you are barking up the wrong tree with your pseudo-theory! It is not that complicated, NL,, Do you just want it to be true so bad that you can't see this glaring deficit between your broad claim and the real world evidence???,,, And again Abel directly challenges ANY scenarios such as yours to falsify the null,,, The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated,,
bornagain77
April 2, 2013 at 5:42 PM PDT
bornagain77 #176: nightlight: there is no limit how much specified complex information they can learn or generate as memories of learned patterns bornagain77: I'm the one calling your bluff. If it is truly unlimited in the CSI it can generate then by all means produce one example of functional information (CSI) being generated above the 500 bit limit set by Dembski. You don't get it -- the two problems are different. The FSC by Abel is a problem where you are given some function of n values, whose values are 1 or 0: e.g. F(i)=1,0,1,0,1,1... for i=1,2,3,...n. The task is to devise a guessing algorithm G (random or deterministic, or any combination), such that first, G(1) 'predicts' F(1) as one of 0 or 1, and receives a Yes/No answer, thus in effect it receives F(1). Then, knowing the answer for F(1), G predicts F(2) and gets another Yes/No answer, thus receives the value of F(2). Then, knowing F(1) and F(2), G predicts F(3), etc. Any other variation of the guessing schedule is allowed to G, e.g. G can request to guess in blocks of, say, 4 successive values of F, e.g. G guesses the next 4 digits 1011 and receives the response that F has values 1001. Any other guessing schedule is allowed, as long as G doesn't have the values of F it is trying to guess before it tries to guess them (duh). The claim is that no algorithm G exists which can beat chance (50% hits, or n/2 correct guesses on average) if tested against all possible functions F, i.e. on all possible 2^n patterns of n bits (or equivalently on a random F). That is correct: the best G can do is to get n/2 correct guesses on average. It is essentially a restatement of the incompressibility of a random sequence. If some G were able to beat chance by guessing n/2 + x bits on average over all possible F's, where x > 0, that would in this scheme be stated as 'G has generated x bits of FSC'. It is well known (and trivial) that no such G can exist. The neural networks tackle a different problem: there is some set of C 'canonical' bit patterns, P1, P2,... Pc, each containing n bits (e.g. these could be C=26 bitmaps of scanned alphabet letters A-Z, where each bitmap has n bits). After the learning phase of the C canonical patterns, the network is given some other n-bit patterns Q1, Q2,... which are 'damaged' altered bitmaps (e.g. via random noise) of the same 26 letters. The network then decides for each incoming Q which letter P1,..Pc it should retrieve. Depending on network algorithms and size, there is no upper limit on how many letters it can store or how many Q bitmaps it can process (i.e. how many bits per pattern n there are), provided you add enough nodes and links. For example, here is a paper from the top few on a google search, which for their particular type of network (BCPNN) with N nodes, where each link is encoded in k bits of precision, and H columns (H>1), gives the maximum information capacity of such a network as: Imax = k*N*N/2 * (1 - 1/H) bits [eq. (9), p. 6]. This number 'Imax' can be as large a number of bits as you want by making N and/or k large enough (the factor (1-1/H) is a fixed number 1/2 < f < 1). But that number Imax has no relation with x from the FSC problem, since the two problems have nothing to do with each other. Abel's FSC (or Dembski's CSI) setup is meant to refute the effectiveness claims of the neo-Darwinian RM+NS algorithm (or more generally any GA) for generating complex specified information, which is fine: you can't get x>0 on average. The Planckian or higher level networks are not searching for a needle in the exponential haystack with 2^n choices.
They are matching and evaluating the closeness of input patterns Q to the set of C patterns used for learning (or to C attractor regions). While the total number of distinct patterns Q is also 2^n, as in the FSC problem, the number of attractor regions C is a much smaller number than 2^n. The origin of such reduction of complexity is that Planckian networks (or the higher level networks they produce, such as biochemical networks, or our social networks) are searching in a space populated by other agents of the same general kind (other networks working via the same anticipatory or pattern matching algorithms), not in some general space of random, lawless entities Q, which can be any of the 2^n patterns (for n-bit entities). They are operating in a lawful, knowable world, in which pattern regularities extend from physical to human levels. In other words, Planckian & higher level networks are 'stereotyping' in all encounters with new patterns Q by classifying any new Q, based on partial matches, into one of the C known stereotypes (or C canonical patterns). The bottom up harmonization process (which maximizes mutual predictability), from physical laws and up, assures that stereotyping works by driving patterns toward the stereotypical forms. The harmonization thus helps make patterns or laws regular and knowable (e.g. check Wigner's paper "The Unreasonable Effectiveness of Mathematics").
nightlight
April 2, 2013 at 5:22 PM PDT
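The bit-guessing setup nightlight describes above is straightforward to simulate. A minimal Python sketch (the 'majority-vote' guesser is an arbitrary stand-in for G; any strategy gives the same expected result against random bits), showing the hit rate hovering around 50%, i.e. n/2 correct guesses on average:

    import random

    def average_hit_rate(n_bits: int, trials: int) -> float:
        """Hit rate of an adaptive guesser G against random bit strings F."""
        total_hits = 0
        for _ in range(trials):
            f = [random.randint(0, 1) for _ in range(n_bits)]
            revealed, hits = [], 0
            for bit in f:
                # G's strategy: guess the majority value of the bits seen so far.
                guess = 1 if sum(revealed) * 2 > len(revealed) else 0
                hits += (guess == bit)
                revealed.append(bit)   # G learns the true value after each guess
            total_hits += hits
        return total_hits / (trials * n_bits)

    print(average_hit_rate(n_bits=100, trials=2_000))   # ~0.5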
nightlight, I hope your weekend was good. At #135 you wrote,
"Neo-Darwinian Evolution theory (ND-E = RM + NS is the primary mechanism of evolution), carries a key parasitic element of this type, the attribute “random” in “random mutation” (RM) — that element is algorithmically ineffective since it doesn’t produce any falsifiable statement that can’t be produced by replacing “random” with “intelligently guided” (i.e. computed by an anticipatory/goal driven algorithm). In this case, the parasitic agenda carried by the gratuitous “randomness” attribute is atheism. That is actually another common misstep by ID proponents — they needlessly concede that “random” mutation completely explains “micro-evolution”."
ID accepts that, in principle, random mutations are perfectly capable of explaining certain microevolutionary changes, such as bacterial drug resistance. That is not the same as conceding that random mutations can explain all microevolutionary change. In other words, random mutations are sufficient for changes which can be achieved by a small number of "coordinated" heritable genomic changes; that doesn't imply sufficiency to account for all observed changes. If you could produce a relevant quote from a major ID proponent conceding that random mutations account for all of microevolution, it would support your assertion. In comment #117 you state,
But if you do insist on injecting such algorithmically ineffective cogs, as Stephen Meyer keeps doing with ‘consciousness’, than whatever it is you’re offering is going to trigger a strong immune response from the existent natural sciences which do follow the rule of ‘no algorithmically ineffective cogs’.
and from comment #128,
My point is that it certainly can be, provided its proponents (such as S. Meyer) get rid of the algorithmically ineffective baggage and drop the ‘consciousness’ talk, since it only harms the cause of getting the ID to be accepted as a science.
What specifically is Meyer stating that you take issue with, in regards to consciousness? Instead of "consciousness talk" generalities, if you could quote something relevant that Meyer actually said, it would make it possible to talk about his comments, rather than your interpretations of them. If you take issue with the fact that designed things have features in common that are only known to come about as the result of a mind, note that this is empirical, and positing mind as a source of such features is a necessity of that observation -- it shouldn't really be controversial with regard to non-biological objects such as machinery, Blu-ray players, and big-screen TVs.
Chance Ratcliff
April 2, 2013 at 3:56 PM PDT
Box (168): BTW what are the planckian networks up to when they self-organize into stars? How promising is the self-organized star formation trajectory in relation to expressing intelligence for the average self-respecting Planckian network? Undoubtedly a bright future lies ahead, but some may wonder what the prospects for happiness are at those network demolishing temperatures.
Nightlight (174): The Planckian networks are no more harmed by supernova temperatures than your PC is harmed by some wild pattern of cells in Conway’s Game of Life that is running on that computer.
In order to be happy, Planckian networks want to design and self-organize into elemental particles (aka super-computers), and from there design and self-organize into biochemical networks (aka super-super-computers) to run internal models to invent body plans, right? Well, that trajectory is off the table when you design and self-organize into stars, right? I'm just asking.
Box
April 2, 2013 at 2:53 PM PDT
NL, you are the one making a specific claim that,,,
there is no limit how much specified complex information they can learn or generate as memories of learned patterns
I'm the one calling your bluff. If it is truly unlimited in the CSI it can generate, then by all means produce one example of functional information (CSI) being generated above the 500 bit limit set by Dembski. A single protein or, better yet, a molecular machine should do the trick. As to your claim that
That’s all about limitations of genetic algorithms (GA) for search problems, which has nothing to do with neural networks and their pattern recognition algorithms.
Abel's null hypothesis covers 'everything' including the convoluted scenario you a priori prefer for a worldview!
The Capabilities of Chaos and Complexity: David L. Abel – Null Hypothesis For Information Generation – 2009 Excerpt: The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated. Scientific mechanism must be provided for how purely physicodynamic phenomena can program decision nodes, optimize algorithms, set configurable switches so as to achieve integrated circuits, achieve computational halting, and organize otherwise unrelated chemical reactions into a protometabolism. To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it:,,
You cannot claim that on the one hand,,,
there is no limit on how much specified complex information they can learn or generate
then on the other hand claim:
problem substance tackled by neural networks is entirely different (than the functional information addressed by Abel's null hypothesis)
Either your method can generate unlimited CSI as you claim and falsify the null hypothesis, and thus prove your outrageous claim that your program is 'intelligent', or it cannot generate functional information. There is no weasel room for you in this null hypothesis as it is set up!
bornagain77
April 2, 2013 at 2:01 PM PDT
@bornagain77 #173 That's all about limitations of genetic algorithms (GA) for search problems, which has nothing to do with neural networks and their pattern recognition algorithms. The latter are unsupervised clustering algorithms for sets of patterns, and there is no limit on how much specified complex information they can learn or generate as memories of learned patterns (such as learning to recognize noisy patterns of Chinese or Japanese characters, or natural language, etc.). The translation from pattern recognition language to anticipatory behavior language was explained in post #116. The GA critique (by Dembski & others) merely shows that the neo-Darwinian algorithm, RM+NS, is incapable of solving large search problems. That's beating the same old dead horse. The problem setup and problem substance tackled by neural networks are entirely different (unsupervised pattern recognition or clustering), and none of Dembski's or other GA search-limitation results apply to neural networks or pattern recognition. "but the preceding is just a bunch of word salad" Oops, sorry, didn't mean to overload your circuits.
nightlight
April 2, 2013 at 1:33 PM PDT
Box #169: I'm a strong chess player for many years and I can assure each and everyone that computer chess is very bad at strategy. ... I can still draw the top chess programs - about every other game. But I have to admit I cannot win. I played through grad school and got to USCF 2100 (expert rating). My younger brother (in ex-Yugoslavia) is a national master. With computers (my favorite is Hiarcs), if I play for exciting, fun games, I will lose every time. If I play for revenge, with dull blocked positions and slow maneuvering, I can draw half the time, even win every now and then (especially if I drill into the same dull variation and keep refining it). #168 What I meant to say was that your theory predicts a vivid super-intelligent universe - in any shape or form - rather than the comatose inert universe at hand. You must have succumbed to the materialist brainwashing. I see everything as animated, sparkling with life, in pursuit of its own happiness. Undoubtedly a bright future lies ahead, but some may wonder what the prospects for happiness are at those network demolishing temperatures. The Planckian networks are no more harmed by supernova temperatures than your PC is harmed by some wild pattern of cells in Conway's Game of Life that is running on that computer. Temperatures, energies, forces and the rest of physics with its space-time parameterization are just a few coarse-grained properties/regularities of the activation patterns unfolding on the Planckian networks. These networks are merely computing their patterns in pursuit of their happiness (maximizing +1 scores; posts #59 and #109 address the mind stuff semantics of the +1,-1 labels). Their "space" is made of distances which are counts of hops between nodes; their "time" is the node state sequence number (each node has its own state sequence numbers 1,2,3,...; these numbers tell it which state sequence numbers of other nodes it needs to refer to when messaging with them). As far as postulates/assumptions go, one can as well imagine all nodes as being compressed into a single point, just like you can stack a set of Ethernet switches on top of each other without changing the network connections (topology) or affecting its operation (everything will run as when the switches are spread out in some 2D pattern). The "links" of Planckian networks are abstract "things in themselves" which for a given node X merely refer to which other nodes Y1, Y2, Y3,... it takes/sends messages from/to. Links thus specify the labels of some of the other nodes, which can all be compressed into the common point, i.e. nothing needs to carry messages anywhere outside the single point. One can, thus, imagine the whole system as one point talking to different aspects of itself, as it were, as if trying to work out 'what am I' and 'why am I here'.
nightlight
April 2, 2013 at 12:50 PM PDT
NL, excuse me but the preceding is just a bunch of word salad. In order to provide solid empirical proof for your position that computers and calculators are 'intelligent', and to differentiate your preferred worldview from pseudo-science, you SIMPLY must produce an observed example(s) of functional information being generated above the 500 bit threshold proposed by Dembski. There is/are a null hypothesis(es) in place that says it will never be done:
The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 Excerpt: The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of non linear dynamic models are often inflated. Scientific mechanism must be provided for how purely physicodynamic phenomena can program decision nodes, optimize algorithms, set configurable switches so as to achieve integrated circuits, achieve computational halting, and organize otherwise unrelated chemical reactions into a protometabolism. To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: “Physicodynamics cannot spontaneously traverse The Cybernetic Cut [9]: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration.” A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2662469/ Can We Falsify Any Of The Following Null Hypothesis (For Information Generation) 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8 ) Any Goal Oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work ,,, "Artificial intelligence does not organize itself either. It is invariably programmed by agents to respond in certain ways to various environmental challenges in the artificial life data base." ,,, ,,,"Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products." - Abel,,, Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors Excerpt: Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC).,,, Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC of OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).,,, Testable hypotheses about FSC What testable empirical hypotheses can we make about FSC that might allow us to identify when FSC exists? In any of the following null hypotheses [137], demonstrating a single exception would allow falsification. We invite assistance in the falsification of any of the following null hypotheses: Null hypothesis #1 Stochastic ensembles of physical units cannot program algorithmic/cybernetic function. 
Null hypothesis #2 Dynamically-ordered sequences of individual physical units (physicality patterned by natural law causation) cannot program algorithmic/cybernetic function. Null hypothesis #3 Statistically weighted means (e.g., increased availability of certain units in the polymerization environment) giving rise to patterned (compressible) sequences of units cannot program algorithmic/cybernetic function. Null hypothesis #4 Computationally successful configurable switches cannot be set by chance, necessity, or any combination of the two, even over large periods of time. We repeat that a single incident of nontrivial algorithmic programming success achieved without selection for fitness at the decision-node programming level would falsify any of these null hypotheses. This renders each of these hypotheses scientifically testable. We offer the prediction that none of these four hypotheses will be falsified. http://www.tbiomed.com/content/2/1/29 The Law of Physicodynamic Incompleteness - David L. Abel - August 2011 Summary: “The Law of Physicodynamic Incompleteness” states that inanimate physicodynamics is completely inadequate to generate, or even explain, the mathematical nature of physical interactions (the purely formal laws of physics and chemistry). The Law further states that physicodynamic factors cannot cause formal processes and procedures leading to sophisticated function. Chance and necessity alone cannot steer, program or optimize algorithmic/computational success to provide desired non-trivial utility. http://www.scitopics.com/The_Law_of_Physicodynamic_Incompleteness.html The GS Principle (The Genetic Selection Principle) - Abel - 2009 Excerpt: Biological control requires selection of particular configurable switch-settings to achieve potential function. This occurs largely at the level of nucleotide selection, prior to the realization of any integrated biofunction. Each selection of a nucleotide corresponds to the setting of two formal binary logic gates. The setting of these switches only later determines folding and binding function through minimum-free-energy sinks. These sinks are determined by the primary structure of both the protein itself and the independently prescribed sequencing of chaperones. The GS Principle distinguishes selection of existing function (natural selection) from selection for potential function (formal selection at decision nodes, logic gates and configurable switch-settings). http://www.bioscience.org/2009/v14/af/3426/fulltext.htm Book Review - Meyer, Stephen C. Signature in the Cell. New York: HarperCollins, 2009. Excerpt: As early as the 1960s, those who approached the problem of the origin of life from the standpoint of information theory and combinatorics observed that something was terribly amiss. Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. Now of course, elementary particles aren't chemical laboratories, nor does peptide synthesis take place where most of the baryonic mass of the universe resides: in stars or interstellar and intergalactic clouds. 
If you look at the chemistry, it gets even worse—almost indescribably so: the precursor molecules of many of these macromolecular structures cannot form under the same prebiotic conditions—they must be catalysed by enzymes created only by preexisting living cells, and the reactions required to assemble them into the molecules of biology will only go when mediated by other enzymes, assembled in the cell by precisely specified information in the genome. So, it comes down to this: Where did that information come from? The simplest known free living organism (although you may quibble about this, given that it's a parasite) has a genome of 582,970 base pairs, or about one megabit (assuming two bits of information for each nucleotide, of which there are four possibilities). Now, if you go back to the universe of elementary particle Planck time chemical labs and work the numbers, you find that in the finite time our universe has existed, you could have produced about 500 bits of structured, functional information by random search. Yet here we have a minimal information string which is (if you understand combinatorics) so indescribably improbable to have originated by chance that adjectives fail. http://www.fourmilab.ch/documents/reading_list/indices/book_726.html To clarify as to how the 500 bit universal limit is found for 'structured, functional information': Dembski's original value for the universal probability bound is 1 in 10^150, the product of: 10^80, the number of elementary particles in the observable universe; 10^45, the maximum rate per second at which transitions in physical states can occur; and 10^25, a billion times longer than the typical estimated age of the universe in seconds. Thus, 10^150 = 10^80 × 10^45 × 10^25. Hence, this value corresponds to an upper limit on the number of physical events that could possibly have occurred since the big bang. How many bits would that be? Pu = 10^-150, so -log2(Pu) = 498.29 bits. Call it 500 bits (The 500 bits is further specified as a specific type of information. It is specified as Complex Specified Information by Dembski or as Functional Information by Abel to separate it from merely Ordered Sequence Complexity or Random Sequence Complexity; See Three subsets of sequence complexity: Abel) This short sentence, "The quick brown fox jumped over the lazy dog" is calculated by Winston Ewert, in this following video at the 10 minute mark, to contain 1000 bits of algorithmic specified complexity, and thus to exceed the Universal Probability Bound (UPB) of 500 bits set by Dr. Dembski. Proposed Information Metric: Conditional Kolmogorov Complexity - Winston Ewert - video http://www.youtube.com/watch?v=fm3mm3ofAYU Here are the slides of the preceding video with the calculation of the information content of the preceding sentence on page 14 http://www.blythinstitute.org/images/data/attachments/0000/0037/present_info.pdf Lack of Signal Is Not a Lack of Information - July 18, 2012 Excerpt: Putting it all together: The NFL (No Free Lunch) Theorems show that evolution is stuck with a blind search. Information lights the path out of blind search; the more information, the brighter the light. Complex specified information (CSI) exceeds the UPB, so in the evolutionary context a blind search is not an option. Our uniform experience with CSI is that it always has an intelligent cause. Evolution is disconfirmed by negative arguments (NFL theorems and the UPB). Intelligent design is confirmed by positive arguments (uniform experience and inference to the best explanation).
http://www.evolutionnews.org/2012/07/lack_of_signal062231.html
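A quick way to check the two headline numbers quoted above (the roughly 500-bit universal probability bound and the "about one megabit" genome) is a few lines of arithmetic. The sketch below is purely illustrative; it uses only the figures already given in the excerpts, and the variable names are made up for the example:

```python
import math

# Dembski's universal probability bound, as factored above:
# 10^80 particles x 10^45 state transitions per second x 10^25 seconds
upb_events = 10**80 * 10**45 * 10**25      # = 10^150 possible physical events
p_u = 1 / upb_events                       # the universal probability bound

print(f"UPB in bits: {-math.log2(p_u):.2f}")          # ~498.29, rounded up to 500 bits

# The cited genome: 582,970 base pairs at 2 bits per nucleotide (4 possible bases)
genome_bits = 582_970 * 2
print(f"Genome capacity: {genome_bits} bits (~{genome_bits / 1e6:.2f} megabits)")
```

Run as-is, this prints roughly 498.29 bits and about 1.17 megabits, matching the figures quoted in the excerpts.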
Music:
Moriah Peters - Well Done Official Music Video - Music Videos http://www.godtube.com/watch/?v=WDL7GLNX
bornagain77
April 2, 2013 at 12:47 PM PDT
Bornagain77 & Nightlight: This article about 'deep learning' at newyorker.com by Gary Marcus may be of interest.
Box
April 2, 2013 at 12:22 PM PDT
bornagain77 #: please give us a little perspective and cite your exact empirical evidence that computers can generate functional information above and beyond what they were originally programmed by a 'mind' to generate

A sketch of how that works, including the emergence of goal-oriented anticipatory behaviors, internal modeling, etc. without explicitly being front loaded with these behaviors, is given in post #116 via neural networks with unsupervised learning. That post doesn't give any links since it's based on common knowledge about such (artificial) neural networks, which anyone can google and use to introduce themselves to the subject. I learned that material mostly from books (before google) and from my own computer experimentation with neural networks (from the mid 1990s on), but it is all easily accessible common knowledge (especially to as prolific a searcher as you seem to be). I just don't need to call upon authority on matters which are obvious to me and which are easily verifiable. Your request is a bit like asking a master chef to point you to the prepackaged, officially FDA-approved frozen meal so you can compare the ingredients for a dish he is making, one he has honed over years based on recipes from cookbooks, from older master chefs and from his own experimentation. He is well beyond the need to look it up or reassure himself with what the FDA or some other "authority" says about it, since he knows and understands that recipe as well as anyone.

The key ingredient of the 'unsupervised learning' capability is to have a system with a flexible, feedback-driven mechanism for reshaping its 'attractor surface' (a.k.a. fitness landscape). The attractor surface is easily understood by imagining a tub of clay (or play-doh) with an initially flat surface, then sculpting valleys in it by pressing your finger into it at different points. The evolution of the system state in time is then like a marble dropped at any place on the surface, rolling and settling at the bottom of the nearest valley. This set of valley bottoms is a discrete set of the system's memories, which in this example memorize the points where you earlier poked the play-doh. The key attribute of such an attractor surface is that each valley attracts to the same bottom point all marbles from anywhere on its slopes, which corresponds to recalling a canonical memory from partial/approximate matches.

You imagine the X,Y coordinates of the tub surface as representing the input pattern or state which needs to be recognized, such as a bitmap in an optical character recognition system. The n valley bottoms, given via 2D coordinates (X1,Y1), (X2,Y2)... (Xn,Yn), represent the n canonical patterns (or memories) that need to be recalled or recognized (such as canonical patterns of n=26 letters). The coordinates here are just some binary digits, say X1 = 1001,0000 for the X coordinate of the first valley bottom. Now, when you put a marble at some point with coordinate X = 1001,1101, whose digit pattern doesn't correspond to the digit pattern of X for any explicit memory or canonical pattern, this marble will roll, say, to the 1001,0000 valley bottom, which is the nearest of the n valley bottoms. In other words, the approximate (noisy, damaged) pattern X is recognized as one of the n remembered/canonical patterns (e.g. one of the 26 letters). Hence, if such an attractor surface is shaped the right way for the task, it can in principle recognize any set of canonical patterns from noisy, damaged, partial input patterns, such as retrieving the memorized pattern X1 = 1001,0000 from the noisy input pattern X = 1001,1101.
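As a purely illustrative aside (not part of the original comment), the 'marble rolling to the nearest valley' recall described above can be sketched in a few lines with a Hopfield-style associative memory. The patterns, sizes and random seed below are all made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three "canonical" patterns (the valley bottoms), coded as +/-1 vectors of length 16
patterns = np.array([
    [ 1,  1,  1,  1, -1, -1, -1, -1,  1,  1,  1,  1, -1, -1, -1, -1],
    [ 1, -1,  1, -1,  1, -1,  1, -1,  1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1, -1, -1,  1,  1, -1, -1,  1,  1],
])

# Hebbian (outer-product) learning: each stored pattern deepens its own "valley"
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)                      # no self-connections

def recall(x, steps=10):
    """Let the state roll downhill on the attractor surface shaped by W."""
    x = x.astype(float)
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1                       # break ties toward +1
    return x

# Damage a stored pattern (flip a couple of bits), then let the network recall it
noisy = patterns[0].copy()
flipped = rng.choice(n, size=2, replace=False)
noisy[flipped] *= -1

restored = recall(noisy)
print("recovered the stored pattern:", np.array_equal(restored, patterns[0]))
```

The flipped bits play the role of the noisy coordinate 1001,1101, and the recall step is the marble settling into the nearest valley bottom.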
For a system of this type to be interesting or non-trivial, you need another key ingredient -- the feedback-driven mechanism which can reshape its attractor surface. Neural networks are one such system, where successive adjustments of link strengths between nodes (based on some adjustment rules) can deform the attractor surface into any shape. The link adaptation mechanism works whether the attractor surface is static or dynamic (changeable in time). Unlike the play-doh 2D surface coordinates (X,Y), the system states here are specified in some d-dimensional space via numeric arrays such as (S1, S2,..., Sd) for a network with d nodes, where the Si are numbers, e.g. 0 or 1 for nodes with binary states (boolean networks); generally a node state can have any number of distinct values, including a continuous range of values (usually the interval -1 to +1, or 0 to 1). The feedback mechanism which modifies the links is specified via some simple, local interaction rules, similar to physics, or such as those for the trading network sketched in post #116. The links here need not be wires, or dendrites & axons, or any material string-like objects. They can be general or abstract "connections", such as those to family members, or to brands of products you buy, or to customers you sell to, etc. Their essential attribute is feedback-driven adaptability of link strengths; e.g. you might change the quantities of goods or services you buy of different brands based on their pricing, availability, perceived value... Such criteria which drive the modifications of the links are abstracted under the generic labels "punishments" and "rewards".

Getting from pattern recognition to anticipatory, goal-directed behavior and internal modeling is fairly trivial, as explained in post #116 on the example of such a network learning to control a robotic soccer player. The canonical example (or a whole cottage industry in this field), implemented in thousands of ways via all kinds of neural networks, in simulated and robotic forms doing it in the real world, is the pole balancing task, where a network learns how to balance a pole (or broomstick) on a cart by moving the cart back and forth (a toy version of this setup is sketched just after this comment). Reverse engineering such a network, once it learns the task, allows one to identify the "neural correlates" (such as specific activation patterns) of its internal model of the problem and its operation. Such an internal model, which the network uses via an internal what-if game to anticipate the consequences of its actions, doesn't look (in its neural correlate form) anything like the cart and the pole look to us, just as your DNA doesn't look anything like you look in a photo. In both cases, though, that imprint is the 'code' encoding highly specified complex information.

Stepping back for a bird's-eye view -- we have a simple system (a network with adaptable links) with purely local, physics-like rules for modifying links based on some generalized "punishments" and "rewards" (provided by interaction with the environment). Running this network, letting it adapt its links (by the given simple rules) while interacting with the cart and the pole, without any additional input, gives rise to a fairly complex skill controlled via the network's internal model and its encoding (expressed via link strengths, which shape the activation patterns of its internal model). There was no external input that had injected this skill or the encoding of that skill into the network.
The operation via its simple rules plus interaction with the cart & pole accomplished all of that by itself. Of course, the whole program is written by an intelligent agency (a programmer) and is running on an intelligently designed and built computer. One can look at these ingredients as front loading -- the network's rules of operation, and the rules of interaction with the cart and the pole, are given upfront. But there was nothing that gave it upfront either the skill to control the cart or the encoding of that skill so it can apply it any time later. All these intelligent extras came out as a result of operating under the simple rules of link modification (which are like toy physics laws, or like the toy trading network rules) and interaction with the 'environment' (cart & pole). Neither the anticipatory, goal-directed behavior, nor the internal modeling of the environment, nor the internal encoding for that model had to be front loaded -- they came out entirely from the much simpler direct rules. All of the above are well known, uncontroversial facts about neural networks.

The interesting stuff happens when you follow up the implication of augmenting the seemingly unrelated network models of Planck scale physics (pregeometry models) with the adaptable links of neural networks -- you end up with super-intelligent Planckian networks (my term), capable of generating physics, as well as explaining the fine tuning of physics for life, and serving as the intelligent agency behind the origin of life and its evolution. As with the pole balancing network, you don't need to input any of this intelligence via front loading. You do of course need to front load the rules of the neural network (link adaptation rules) into the initial system, as a set of givens that don't explain themselves (i.e. Planckian networks don't explain their origin). But these givens are simple, local rules of operation of 'dumb' links and nodes which are not any more complicated or assumption laden than conventional laws of physics about 'dumb' particles and fields. In other words, you don't need to front load anything remotely resembling the kind of intelligence that we see manifesting in live organisms -- that comes as an automatic consequence of the initial, much simpler assumptions. The relation of the 'mind stuff' to these computations is explained in posts #59 and #109.
nightlight
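Again as an illustrative aside, here is a bare-bones version of the pole-balancing setup described in the comment above. It swaps the neural-network link adaptation nightlight describes for the simplest possible feedback-driven scheme -- random search over the weights of a linear controller, scored by how long the pole stays up -- so it is only a sketch of the general idea; all constants and names are made up, and the physics is the standard simplified cart-pole model:

```python
import math
import random

def simulate(weights, steps=500):
    """Crude cart-pole physics (Euler steps); returns how many steps the pole stays up."""
    x, x_dot, theta, theta_dot = 0.0, 0.0, 0.05, 0.0   # start with a small tilt
    g, m_cart, m_pole, length, dt = 9.8, 1.0, 0.1, 0.5, 0.02
    for t in range(steps):
        state = (x, x_dot, theta, theta_dot)
        # linear policy: push right or left depending on the sign of w . state
        force = 10.0 if sum(w * s for w, s in zip(weights, state)) > 0 else -10.0
        # simplified cart-pole dynamics
        total = m_cart + m_pole
        temp = (force + m_pole * length * theta_dot ** 2 * math.sin(theta)) / total
        theta_acc = (g * math.sin(theta) - math.cos(theta) * temp) / (
            length * (4.0 / 3.0 - m_pole * math.cos(theta) ** 2 / total))
        x_acc = temp - m_pole * length * theta_acc * math.cos(theta) / total
        x, x_dot = x + dt * x_dot, x_dot + dt * x_acc
        theta, theta_dot = theta + dt * theta_dot, theta_dot + dt * theta_acc
        if abs(theta) > 0.21 or abs(x) > 2.4:           # pole fell over or cart ran off
            return t
    return steps

random.seed(1)
best_w, best_score = None, -1
for trial in range(200):
    # the "feedback-driven adaptation" here is simply: try random weights and
    # keep whichever set earns the best reward (longest time before falling)
    w = [random.uniform(-1.0, 1.0) for _ in range(4)]
    score = simulate(w)
    if score > best_score:
        best_w, best_score = w, score

print(f"best controller survived {best_score} of 500 steps; weights: "
      f"{[round(v, 2) for v in best_w]}")
```

Nothing in the program spells out how to balance the pole; whatever control skill emerges is selected purely by the survival-time feedback, which is the point the comment is making (a real neural-network learner would adapt link strengths incrementally rather than sampling them at random).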
April 2, 2013 at 11:29 AM PDT
Box: I have to admit that chess programs are stronger than humans. The way I see it is that chess involves much more calculating than I used to think.
Right. Computers (machines intelligently designed by humans) are faster than humans at calculating. My PC can perform millions of floating point operations per second. Humans have the insight and foresight (both properties of intelligence) to harness nature this way. But computers didn't build or program themselves. Some smart people did it, because of their insight and foresight. Humans are pretty cool. And so are many of their designs.
CentralScrutinizer
April 2, 2013 at 08:57 AM PDT
About chess programs. I have to admit that chess programs are stronger than humans. The way I see it, chess involves much more calculating than I used to think. I used to think that chess was 50% calculation and 50% 'overview' (strategic thinking). Now I think it is about 95% calculation - if one can make such a general claim, because it depends on the position.

I have been a strong chess player for many years, and I can assure each and everyone that computer chess is very bad at strategy. Computers don't have 'overview' - nada, zilch. That's why they can't play Go - which is probably a much more strategic game than chess. It's all calculations and some programmed general 101 guidelines for strategy. But you cannot teach matter to be conscious. And there is no overview without consciousness.

I can still draw against the top chess programs - about every other game. But I have to admit I cannot win. And I know how to draw those games, because I'm well aware of their weaknesses. And I can always show where the computer goes strategically wrong.
Box
April 2, 2013 at 08:32 AM PDT
Nightlight (164): So, computing physics for your functionality over few seconds is a massive and complex computational task that we can’t dream to ever approaching with all of our intelligence and technology put together.
Good point. My ‘universe filled with Max Plancks’ was intended to be metaphorical rather than anthropocentric. What I meant to say was that your theory predicts a vivid super-intelligent universe – in any shape or form - rather than the comatose inert universe at hand. BTW what are the Planckian networks up to when they self-organize into stars? How promising is the self-organized star formation trajectory in relation to expressing intelligence for the average self-respecting Planckian network? Undoubtedly a bright future lies ahead, but some may wonder what the prospects for happiness are at those network-demolishing temperatures.
Nightlight (164): Similarly, during morphogenesis, their internal model has a ‘picture’ of what they are constructing. That ‘picture’ would certainly not look like anything you see with your senses and your mind looking at the same form. But it looks like what they will perceive or sense when it is complete.
So in the case of the monarch butterfly, the internal models that the biochemical networks run first picture the larval body plan in their mind, and after its completion they picture the butterfly body plan in their mind? How do you explain that distinct body plans originate from the same source?
Box
April 2, 2013 at 07:29 AM PDT
NL you state:
we need to get the right perspective first
Okie dokie NL, please give us a little perspective and cite your exact empirical evidence that computers can generate functional information above and beyond what they were originally programmed by a 'mind' to generate, or find, in the first place. Your references to computer programs and calculators being 'intelligent' are ludicrous and simply will not cut it as empirical evidence for what you are radically claiming, namely that true 'consciousness and intelligence' are inherent within computer programs and calculators. Brute force computational ability does not intelligence nor consciousness make. Nor does redefining science so that it serendipitously includes your desired conclusion, and excludes Theistic conclusions, make you scientific. The a priori assumptions you take for granted in your bizarre conjectures are gargantuan, and this is without, as far as I can tell, even an inkling of validation, or grounding, from hard empirical science. As far as I can tell, without such firm grounding in observational evidence you have drifted, in the apparent full delusion of pride in the incoherent 'word salad' descriptions you have given to us, into a full-fledged pseudo-science, no better than tea-leaf reading or such, without any real or true confirmation for others to see that you are firmly grounded in reality. This is simply unacceptable scientifically, and for you to insist that programs and calculators 'prove' your point, without such a demonstration of information generation, is to severely beg the very question being asked about computers and consciousness!
Epicycling Through The Materialist Meta-Paradigm Of Consciousness
GilDodgen: One of my AI (artificial intelligence) specialties is games of perfect knowledge. See here: worldchampionshipcheckers.com In both checkers and chess humans are no longer competitive against computer programs, because tree-searching techniques have been developed to the point where a human cannot overlook even a single tactical mistake when playing against a state-of-the-art computer program in these games. On the other hand, in the game of Go, played on a 19×19 board with a nominal search space of 19×19 factorial (1.4e+768), the best computer programs are utterly incompetent when playing against even an amateur Go player.,,,
https://uncommondescent.com/intelligent-design/epicycling-through-the-materialist-meta-paradigm-of-consciousness/#comment-353454

Signature In The Cell - Review
Excerpt: There is absolutely nothing surprising about the results of these (evolutionary) algorithms. The computer is programmed from the outset to converge on the solution. The programmer designed it to do that. What would be surprising is if the program didn't converge on the solution. That would reflect badly on the skill of the programmer. Everything interesting in the output of the program came as a result of the programmer's skill - the information input. There are no mysterious outputs.
Software Engineer - quoted to Stephen Meyer
http://www.scribd.com/full/29346507?access_key=key-1ysrgwzxhb18zn6dtju0

Can a Computer Think? - Michael Egnor - March 31, 2011
Excerpt: The Turing test isn't a test of a computer. Computers can't take tests, because computers can't think. The Turing test is a test of us. If a computer "passes" it, we fail it. We fail because of our hubris, a delusion that seems to be something original in us. The Turing test is a test of whether human beings have succumbed to the astonishingly naive hubris that we can create souls.,,, It's such irony that the first personal computer was an Apple.
http://www.evolutionnews.org/2011/03/failing_the_turing_test045141.html

Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson
Excerpt: Chaitin's Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous "Turing test" for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information.,,, The basic problem concerning the relation between AIT (Algorithmic Information Theory) and free will can be stated succinctly: Since the theorems of mathematics cannot contain more information than is contained in the axioms used to derive those theorems, it follows that no formal operation in mathematics (and equivalently, no operation performed by a computer) can create new information.
http://cires.colorado.edu/~doug/philosophy/info7.pdf

Evolutionary Computation: A Perpetual Motion Machine for Design Information? By Robert J. Marks II
Final Thoughts: Search spaces require structuring for search algorithms to be viable. This includes evolutionary search for a targeted design goal.
The added structure information needs to be implicitly infused into the search space and is used to guide the process to a desired result. The target can be specific, as is the case with a precisely identified phrase; or it can be general, such as meaningful phrases that will pass, say, a spelling and grammar check. In any case, there is yet no perpetual motion machine for the design of information arising from evolutionary computation.,,,

"The mechanical brain does not secrete thought 'as the liver does bile,' as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy. No materialism which does not admit this can survive at the present day."
Norbert Wiener created the modern field of control and communication systems, utilizing concepts like negative feedback. His seminal 1948 book Cybernetics both defined and named the new field.

"A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ,,,There is no known law of nature and no known sequence of events which can cause information to originate by itself in matter."
Werner Gitt, 1997, In The Beginning Was Information, pp. 64-67, 79, 107. (The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), and Head of the Department of Information Technology.)
bornagain77
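For what it is worth, the "19×19 factorial (1.4e+768)" search-space figure in the GilDodgen excerpt above is easy to reproduce; the two lines of arithmetic below are purely illustrative:

```python
import math

positions = 19 * 19
# log10 of 361! computed via lgamma, avoiding a 768-digit integer
print(f"(19x19)! ~ 10^{math.lgamma(positions + 1) / math.log(10):.1f}")   # ~10^768.2
```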
April 2, 2013 at 04:47 AM PDT
No volition means no personhood, ergo no intelligence; merely data. Is there not a radically qualitative distinction between the dynamism of energy vivifying matter and the energy vivifying a living creature? And a further radically qualitative distinction between the volition of the creature without free will, i.e. a limited kind of personhood, and that of a human being with free will - and a moral dimension? (In which latter case, however, the psychopath would seem to present a puzzle. Or are psychopaths, too - at least, those not so afflicted as a result of a brain injury - born with, at least, an inchoate potential for a moral sense?)
Axel
April 2, 2013 at 03:14 AM PDT
Phinehas #163: Thus, the networks we create will never be more intelligent than we are and we will never be more intelligent than the network that created us. Or something like that

It's a bit more subtle than that. For example, chess programmers routinely lose chess games against their creations. A pocket calculator calculates faster than the engineers who designed it or the technicians who built it. It just happens that post #164 right above explains this same topic in more detail.
nightlight
April 2, 2013 at 12:32 AM PDT
