Uncommon Descent Serving The Intelligent Design Community

A note on state space search challenge

As was recently discussed, and contrary to objections raised, blind search in a configuration or state space, and the search challenge linked to it, is a reasonable and indeed recognised concept. As we explore the concept a little further, an illustration may be helpful:

With this in mind, we may again look at Dembski’s arrow and target illustration from NFL, p. 11:

ID researcher William A. Dembski, NFL, p. 11, on possibilities, target zones and events

Now, let us ponder again Wiki on state space search:

>>State space search is a process used in the field of computer science, including artificial intelligence (AI), in which successive configurations or states of an instance are considered, with the intention of finding a goal state with a desired property.

Problems are often modelled as a state space, a set of states that a problem can be in. The set of states forms a graph where two states are connected if there is an operation that can be performed to transform the first state into the second.

State space search often differs from traditional computer science search methods because the state space is implicit: the typical state space graph is much too large to generate and store in memory. Instead, nodes are generated as they are explored, and typically discarded thereafter. A solution to a combinatorial search instance may consist of the goal state itself, or of a path from some initial state to the goal state.

Representation

[–> Note: I would prefer stating the tuple as, say, S := ⟨Ω, A, Action(s), Result(s,a), Cost(s,a)⟩ ]

Examples of State-space search algorithms

Uninformed Search

According to Poole and Mackworth, the following are uninformed state-space search methods, meaning that they do not have any information about the goal’s location.[1]

Depth-first search
Breadth-first search
Lowest-cost-first search

Informed Search

Some algorithms take into account information about the goal node’s location in the form of a heuristic function[2]. Poole and Mackworth cite the following examples as informed search algorithms:

Heuristic depth-first search
Greedy best-first search
A* search>>
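
To make the contrast between uninformed and informed search concrete, here is a minimal Python sketch of a generic best-first state-space search over an implicit graph (the toy graph, step costs and zero-heuristic default are my own illustrative choices, not taken from the sources quoted above). With the heuristic fixed at zero it behaves as an uninformed lowest-cost-first search; supplying an admissible heuristic turns it into A*.

    import heapq

    def best_first_search(start, goal, neighbours, heuristic=lambda s: 0):
        """Generic state-space search over an implicit graph.
        neighbours(s) yields (next_state, step_cost) pairs.
        heuristic == 0 gives lowest-cost-first (uninformed) search;
        an admissible heuristic gives A* (informed) search."""
        frontier = [(heuristic(start), 0, start, [start])]   # (f, g, state, path)
        best_g = {start: 0}                                  # cheapest cost found so far
        while frontier:
            f, g, state, path = heapq.heappop(frontier)
            if state == goal:
                return path, g
            for nxt, cost in neighbours(state):
                g2 = g + cost
                if g2 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g2
                    heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
        return None, float("inf")

    # Hypothetical toy graph, for illustration only
    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
             "C": [("D", 1)], "D": []}
    print(best_first_search("A", "D", lambda s: graph[s]))   # (['A', 'B', 'C', 'D'], 3)

Note that states are generated on demand from neighbours(s), which is the sense in which the excerpt says the typical state space graph is implicit rather than generated and stored whole.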

This now allows us to better appreciate the sort of challenge that blind watchmaker search faces in a Darwin’s pond or similar pre-life environment, or in hopping from one body plan to another:

Thus, we will better appreciate the general point Dembski and Marks et al have been making:

>>Needle-in-the-haystack [search] problems look for small targets in large spaces. In such cases, blind search stands no hope of success. Conservation of information dictates any search technique [–> as in not specifically correlated to the structure of the space, i.e. a map of the targets] will work, on average, as well as blind search. Success requires an assisted [intelligently directed, well-informed] search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search.

[–> where, once a search is taken as implying a sample, searches take subsets, so the set of possible searches is tantamount to the power set of the original set. For a set of cardinality n, the power set has cardinality 2^n.]>>
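
To put rough numbers on that bracketed note, here is a minimal sketch (assuming the 500-bit configuration-space figure used earlier in this discussion) of how the "search for a search" space dwarfs the already huge original space:

    from math import log10

    n_bits = 500                       # illustrative threshold from the discussion
    log10_omega = n_bits * log10(2)    # |Omega| = 2^500, i.e. ~10^150.5 configurations
    print(f"configurations    : ~10^{log10_omega:.1f}")

    # Treating a search as the subset of configurations it actually samples,
    # the set of possible searches is tantamount to the power set of Omega,
    # of cardinality 2^|Omega|; its log10 is |Omega| * log10(2).
    log10_log10_searches = log10_omega + log10(log10(2))
    print(f"possible searches : ~10^(10^{log10_log10_searches:.1f})")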

In this context, once we notice that we are in fact addressing an entity marked by the sort of functionally specific complex organisation and/or information [FSCO/I] that gives rise to this needle-in-a-haystack challenge, the design inference explanatory filter applies:

And yes, yet again, we are back to the point that the origin of life and body plans is best explained on design; intelligently directed configuration by whatever ways and means such could have been achieved.

Further food for thought. END

Comments
@GP, thank you for your time. I believe I understand your position, and will continue to compare it with my understanding to see if I can convince myself I'm wrong. EricMH
EricMH: Thank you for your comment and your patience. Maybe I don't agree with everything you say, but probably it's better to stop here. One last note about this: "Thus, if human beings are indeed intelligent agents, we could hypothetically plug a brain monitor to read their entire brain state at time A, and at time B the brain would be in a very improbable, or perhaps impossible, state given the state at time A." Certainly not impossible. Improbable, but only if we consider its functional meaning. State B would be as probable as any other state at quantum level, but its functional meaning would be utterly improbable. The laws of physics are completely unaware of the functional meaning, therefore for the laws of physics state B is a perfectly legit state, like many others. To simplify, Shakespeare's sonnet 76, which I often quote, is as probable as any random sequence of letters of the same length. But of course there is only one (or a few) sequence which conveys the whole meaning of the sonnet, while almost all the random sequences convey absolutely nothing. So, if we find the sonnet, we infer design for what it means, not because that sequence is more improbable than any other by itself. The key point is the functional meaning, fully detectable by us as conscious beings. Natural laws have nothing to do with that. gpuccio
@GP I very much appreciate your in-depth responses. I am sorry if my comments have been frustrating. You have answered all of my objections well, including whether the laws of physics are violated with this final comment. "For example, if mutations were guided, for example at quantum level, by some conscious intelligent designer, no law of physics would be violated, no energy would be created. Only, unlikely configurations would happen that would not otherwise happen. But that does not violate any law of physics, only the laws of probability in random non conscious systems." I completely agree with this. My point is that the subsequent probability of an intelligently designed event is greater than its prior probability, when conditioned on the physical facts of the matter. So, in this way, the intelligent agent disrupts what would be expected from a purely physical point of view, and would consequently be empirically detectable, and a scientifically testable claim. Thus, if human beings are indeed intelligent agents, we could hypothetically plug a brain monitor to read their entire brain state at time A, and at time B the brain would be in a very improbable, or perhaps impossible, state given the state at time A. Furthermore, this would be reversing the flow of entropy, moving the brain from a highly probable state to a highly improbable state. So there would be a net entropy reduction happening. EricMH
EricMH: You say: "So if humans are intelligent agents, and are the evidence we can use to infer intelligent agency in biological history, it would appear that humans have some very counter intuitive abilities, which have testable ramifications. But surely these capabilities must be demonstrated in order to make the theory of intelligent agency scientifically solid." a) Humans are intelligent agents. b) They certainly have great abilities which have testable ramifications. But they are not "counter intuitive", not at all. Indeed, they derive exactly from their specific intuitions (see later). c) These capabilities can be very easily demonstrated. For example, my ability to write these comments is a demonstration. And you can have all the demonstration you want if you just read the posts in this thread, including yours. d) The basic intuitions that allow us to generate complex functional information are the following: 1) We understand meanings. I understand meanings, you understand meanings. No non conscious entity or system understands meanings. Understanding is a conscious experience, and it is the basis for cognition. It is a subjective intuition. Can you deny that we have the subjective and intuitive experience of understanding meanings? 2) We feel desires and purposes. That is fundamental to define functions and to implement them. No non conscious entity or system feels desires or the urge to implement functions. You say: "And ID seems to imply some weird things. For example, if humans are intelligent agents, as you and KF claim, then this means that they create CSI." Of course they create CSI. Can you deny that? If our comments here seem too lowly to you, just look at each of Shakespeare's 154 sonnets. You say: "If so, then they must have the ability to disrupt the fundamental laws of physics, otherwise the probability of an event occurring given the chance hypothesis would not be different than its specification." No, no, no! You seem not to understand anything of ID! No law of physics is disrupted. When I type on my keyboard and write this phrase, I am not disrupting any law of physics. I am only getting a result which is absolutely unlikely in any random non conscious system, but which is perfectly normal if a conscious intelligent agent is outputting his conscious experiences to matter. The same is true for software, for paintings, for all human creations. Do you think that Michelangelo was violating the laws of physics when he painted the Sistine Chapel? Of course not. But he was doing something that no non conscious system can do. Of course we break the probability barriers that are implicit in non conscious systems. The simple reason is that we are conscious and intelligent. But breaking probability barriers is not the same as breaking the laws of physics. You seem to be really confused about that. You say: "Finally, if intelligent agents can select an orderly target more frequently than expected, then it appears they have the ability to reduce net entropy of a system, which looks like creating energy out of nothing." No! Intelligent agents do not "select" an orderly target. They "create" an orderly target. By intervening on selectable switches which violate no laws of nature. We do not reduce the entropy in any way which violates the second law. We intervene on the informational aspect of entropy. The same is true for biological design. 
For example, if mutations were guided, for example at quantum level, by some conscious intelligent designer, no law of physics would be violated, no energy would be created. Only, unlikely configurations would happen that would not otherwise happen. But that does not violate any law of physics, only the laws of probability in random non conscious systems. I hope you may think about these points, but I am afraid that you are too convinced of some basic errors to really be able to see them. Of course, you are entitled to that. I don't think I can be more clear than this. gpuccio
EricMH: I have just answered many of your points on another thread, so I paste here those comments for your attention: This is the thread: https://uncommondesc.wpengine.com/intelligent-design/quote-of-the-day-28/#comment-651337 My comment #68:
Bob O’H: “How can you be sure that what is going on inside our brains/minds is not equivalent?” The thing that is not equivalent is that we have subjective experiences. While strong AI theories assume that subjectivity emerges from material configurations of objects, there is absolutely nothing that justifies that view. As KF said, the computer is not solving a chess problem, it is simply blindly executing chains of instructions, ultimately at the level of machine code, register transfers and ALU operations. In essence, there is no difference between an abacus and a computer. Adding simple operations to a computation, or increasing its speed, or varying the general structure of the computation (linear, parallel, what else) does not change the essence of the thing: it remains a computation effected by some material tool. There is absolutely no reason to think that an abacus has subjective experiences related to the computation we effect by it. In the same way, there is absolutely no reason to think that a computer has subjective experiences related to the software operations it is implementing, whatever they are. On the contrary, we know that we have subjective experiences. That’s all the difference. “Hm. I know a few humans who do that too.” Something like that, I can agree. :) But even those humans, however unlikely it may appear, probably have subjective experiences. That they may use them badly (because of their own choice, or of other causes which do not depend on them) does not change the subjective nature of their representations. “Are you saying that English is contrary to the laws of physics? What particular law does it break? Can you give a specific proof that English “is far beyond the blind mechanical search capability of the observed cosmos”? Can you also explain why the Cosmos would be searching for English in the first place?” I would say that English language, like any other form of complex functional information in objects, is well beyond any power of any non conscious system. As I have often argued, complex functional configurations, bearing meaning (descriptive information) or function (prescriptive information) with a specificity beyond, say, 500 – 1000 bits, have never been observed as the result of any non conscious system. And, beyond the empirical fact, there is also a deep reason for that: non conscious systems can generate functional information only randomly, and 500 – 1000 bits of specific functional information (indeed, even much less than that) are definitely beyond the probabilistic resources of our universe. Of course, non random mechanisms have also been invoked: NS is of course the best known. But NS can only proceed from the information that already exists (biological beings that reproduce with limited resources), and can only optimize that already existing function, and with extremely limited power. For a more complete discussion about that, see here: https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ English is not beyond the capabilities of our cosmos, but only because our cosmos includes conscious intelligent beings. English is certainly beyond the capabilities of any non conscious system in our cosmos. By the way, for an attempt at computing the functional information in English language texts, look here: https://uncommondesc.wpengine.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/
And #69:
Bob O’H: As I have argued, many times, the specific ability of conscious intelligent beings like us to break the probabilistic barriers and generate tons of complex functional information (in machines, software and language) can easily be traced to those subjective experiences that allow them to design complex objects: – The subjective experience of cognition, in particular of understanding meanings – The subjective experience of feeling, in particular of having purposes related to desires. So, there is an appropriate rationale that can explain why conscious intelligent beings can generate complex functional information, and non conscious systems cannot do that.
More in next post. gpuccio
@GP Is there a mathematical model of intelligent agency? CSI is a log likelihood ratio, comparing two different theories, chance and ID. If it is positive, then this means ID is the more likely explanation. To do this, however, we are saying the probability of the artifact given ID is greater than the probability given the chance hypothesis. But without any definition as to what ID makes likely, it provides no insight. Anytime a hypothesis says X is unlikely, we can posit some nebulous alternative hypothesis that makes X likely, and then say the alternative hypothesis is the better hypothesis. We are only labelling our ignorance in this case. For example, say our initial hypothesis was the theory of aether (AT), the idea that there is a substrate that all particles travel through. Our experiments confirm two contradictory results, that aether is stationary and that aether is dragged. Since it is a contradiction, the conditional probability of this observation given aether theory is zero. To address this, we cannot just posit an alternate theory X that gives a conditional probability of 1 and say the problem is solved. We have to flesh out what X is, hopefully in a precise mathematical manner. Now, there may be a great body of philosophical work regarding X, which makes X fit into our worldview well, but that is not quite the same thing as a scientific theory of X. AT is clearly wrong, as is unguided Darwinism, but I do not see ID as having the same kind of worked out theory as, say, the theory of relativity. And ID seems to imply some weird things. For example, if humans are intelligent agents, as you and KF claim, then this means that they create CSI. If so, then they must have the ability to disrupt the fundamental laws of physics, otherwise the probability of an event occurring given the chance hypothesis would not be different than its specification. Furthermore, if intelligent agents can violate the No Free Lunch Theorem, then they have a weird ability to improve their ability to find a target in a search space ex nihilo. I have not figured out how to make sense of this, but it would be groundbreaking in the field of computer science. Finally, if intelligent agents can select an orderly target more frequently than expected, then it appears they have the ability to reduce net entropy of a system, which looks like creating energy out of nothing. This sounds very useful, and pretty scientific. So if humans are intelligent agents, and are the evidence we can use to infer intelligent agency in biological history, it would appear that humans have some very counter intuitive abilities, which have testable ramifications. But surely these capabilities must be demonstrated in order to make the theory of intelligent agency scientifically solid. Otherwise, if we have not demonstrated that humans are actually intelligent agents in a scientific manner, how can we then use them in an abductive reasoning argument to say biological organisms were also created by intelligent agency? If humans work entirely according to the laws of physics, and operationally can be described by a Turing machine, then so must any supposed designer that we infer. In which case, the designer's output would be entirely predictable by the preconditions, and the designer would never generate positive CSI, thus would not be a designer according to ID theory. EricMH
EMH, GP has already given you several correctives. I add, that science is premised on responsible, rational freedom of scientists to do scientific investigation leading to conclusions that in material part are empirically grounded, tested and are warranted as credibly true and/or reliable. Yes, scientific knowledge is weak-form. If scientists allow dominant ideologies and domineering ideologues to corrupt the credibility of such insights and associated moral government, they undermine science. Science, epistemologically, is not and cannot be autonomous or isolated from its context. As for your onward claims about intelligent agency, the first premise is that such are a massively evident empirical fact of observation, and artifacts of such known intelligent agents form a context with a trillions-member observational base. Again, denial of abundantly evident empirical facts is both unphilosophic and unscientific. These known artifacts demonstrate patterns that exhibit in many cases empirically reliable and well-tested signs of their causal origin being materially shaped by intelligently directed configuration. Just for an example, archaeologists, anthropologists [in exploring ancient human-associated evidence] and forensic scientists routinely identify what the former term "archaeology" vs "natural." Stonehenge is a classic in point. Further to that baseline, we can examine search challenge in configuration/search spaces of possibilities and see a reason why something like FSCO/I is an empirically reliable sign of design. As, is explored and drawn out in the OP above. So, no, the re-assignment of the design inference outside of Science [properly understood] fails. The ideological context that suggests such a re-assignment, yet again shows how counter-productive and irrational it is. If science does not target empirically grounded, reliable findings that support truth-seeking about our world, it fails. And demarcation arguments, over the past several decades, have consistently failed. A reasonable conclusion of such studies is, in a nutshell, that once there is a scientific study of origins (including of man) then imposition of a priori materialistic criteria on determining what is/is not science cannot be justified. The implied scientism, that science delimits and dominates first rate knowledge also fails as scientific warrant is weak-form. Indeed, such claims are to be understood i/l/o the pessimistic induction on the fate of scientific understanding across time; scientific knowledge (especially theories and linked paradigms or research programmes) does not amount to knowledge claims to even moral certainty. This specifically holds for deep-time theories of origins [a reality that is in itself inherently unobservable . . . we were not there], which are too often presented in the guise of indisputable fact. It also holds for theories of current observations and operations of the world that are capable of direct empirical testing through experiments and observational studies. For the former, we need to more diligently apply the Newtonian vera causa principle, allowing only such causal explanations of traces of what we cannot directly observe, as show themselves reliably capable of the like effects. There are many other claims to knowledge that are warranted to higher degree of warrant than such a weak form. 
For instance, to moral certainty [such that one would be irresponsible to act as though what is so warranted were false], and to undeniable certainty on pain of immediate reduction to patent absurdity. Tied to this, the very claim to dominate knowledge so that once big-S Science comes a knocking, everything else must yield the floor also fails. Indeed the claim suggested by Lewontin et al, that science is the only begetter of truth is an epistemological claim. As such it is a philosophical claim that undercuts philosophical knowledge claims. It refutes itself. Which does not prevent it from being implicitly believed and used ideologically. Coherence in thought is a hard-won prize. So, no, again. The design inference is clearly a scientific investigation, by responsible criteria of what science is, studies and warrants. It is also capable of a high degree of warrant. Ideologically tainted exclusion or dismissal, will be found to be unwarranted and in the end incoherent. KF kairosfocus
@KF, yes, there are good philosophical reasons to believe intelligent agency is real. But, the problem is in the science realm. There is no mathematical model of what intelligent agency looks like. On philosophical grounds we can argue intelligent agency exists, and God exists, and so on. However, that does not make intelligent agency a scientific hypothesis. It is like saying qualia exist, but there is no scientific test for "redness". It exists on a separate plane of reality as far as science is concerned. And without a scientific theory of intelligent agency, intelligent design is not firmly in the scientific realm, either. It is a scientifically testable phenomenon with a philosophical explanation. Which is a significant conclusion, but does support the skeptic position that ID is not totally a scientific theory. EricMH
EricMH: "I think something further is required, namely the scientific evidence for intelligent agency, not merely intelligent design." I don't agree. The only thing required is that scientists gradually renounce to the heavy ideological bias that has characterized scientific thought in the last decades. The only thing required is that science gies back to its only true purpose: objectively pursuing truth. Moreover, evidence for design is evidence for intelligent agency. "Regarding the selection for survival can only select for survival, the problem here is that “survival” is not a fixed selection function, but will change with the population. So this provides an avenue for complexity to be added to selection." No. Absolutely not. For example, ATP synthase certainly contributes to survival, once it exists in the right cellular context. But that simple fact will never generate the complexity of ATP synthase, because it is definitely beyond any probabilistic resource of the universe. RV cannot get it, and natural selection cannot do anything until the whole machine exists. In the same way, a generic selection for speed will never generate an engine from horse drawn carriages. It will only select the fastest horse drawn carriage. Random variation on horse drawn carriages could optimize horse drawn carriages, but will never generate a car with a petrol engine. About the extreme limits of random variation and natural selection in generating functional information, see also my recent OPs: https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ https://uncommondesc.wpengine.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/ gpuccio
EMH, actually, no. We already are morally governed and responsibly and rationally free intelligent, self-moved agents; just to stand on the basis that genuine discussion, reasoning, knowledge and more are so. Where consciousness is our first fact and the fact through which we access all others. Likewise, it is readily seen that mere computation on a substrate per signal processing units -- cf. current tracking of memristors and linked AI themes -- is not at all the same as rational, self-moved contemplation. To deny these is rapidly self-referentially absurd. This already means that our self-aware, self-moved consciousness is well beyond what a blind chance + necessity world accounts for. Occam was about simplicity constrained by realities to be accounted for. In this case, we must needs account for the IS-OUGHT gap. That can only be adequately answered at world-root level, and requires an IS that bears simultaneously the force of ought; putting ethical theism on the table as the serious option to beat. Going on, the multiverse is a surrender of the empirical observability and testability criterion of science. KF PS: Survivability can shift with circumstances, i.e. islands of function change in time and may even move like barrier islands (glorified sandbars). That is not an insurmountable problem, ecosystems change with climate, invasives and more. kairosfocus
GP I appreciate your correspondence. Yes, the presumption of the multiverse is not science, it is a philosophical commitment to naturalism. But, it could still be justified on an Occam's razor basis. Inserting God is a new kind of causal agency that is not present within naturalism. So even though scientifically testing for a multiverse is implausible, it would still meet the parsimony requirement for a theory. I think something further is required, namely the scientific evidence for intelligent agency, not merely intelligent design. Regarding the selection for survival can only select for survival, the problem here is that "survival" is not a fixed selection function, but will change with the population. So this provides an avenue for complexity to be added to selection. EricMH
EricMH: A random process can generate functional complexity, but only in extremely simple forms. The point is that complex functional complexity derives only from design. You say: "Now, meeting a functional requirement could happen by trial and error, so I wouldn't say that is in itself a sign of intelligence." Not a complex functional requirement. That is the point. You say: "For example, we could have a random process plus a selector; e.g. survival." A selector for survival can only optimize survival. That is the information already present in the selector. A new complex protein that can increase survival cannot be selected until it is present. The functional complexity of the protein is beyond the probabilistic power of the random search. Therefore, only intelligent design can get there. You say: "KF's point comes in here: there is no connection between the functional islands, so the random process must hit a needle in a haystack." Of course there is no connection. I repeat here my famous challenge, that nobody has ever tried to answer:
Will anyone on the other side answer the following two simple questions? 1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases? 2) Is there any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?
Regarding the "skeptic response": the multiverse, used in this way, is not science. Not at all. gpuccio
GP: I see, intelligent agency is also defined by an ability to increase Kolmogorov complexity. However, a random process can also do this. Now, meeting a functional requirement could happen by trial and error, so I wouldn't say that is in itself a sign of intelligence. I would say each of these criteria are perhaps necessary to infer intelligence, but not sufficient. For example, we could have a random process plus a selector; e.g. survival. The random process ensures Kolmogorov complexity increases, and the selector ensures functional requirements are met. This also would not be intelligent, but it would meet the criteria listed. KF's point comes in here: there is no connection between the functional islands, so the random process must hit a needle in a haystack. A skeptic response to this is: all the argument shows is there must be some source that is trying the requisite number of trials in order to hit the functional islands. If our universe contains insufficient trials, then the source is outside our universe, such as in another part of the multiverse. Since the multiverse hypothesis stays within the confines of naturalism, it is the preferable explanation, since it does the same job as positing an external designer. EricMH
EricMH: As I said, regularities can be the result of natural laws, if the system can generate them. See my example at #31: a natural system can certainly exist where one of two (or more) events is strongly favoured. That does not require design. That's why in Dembski's explanatory system we have to exclude a necessity origin for regularities. If no necessity law in the system can explain a regularity, then design is the best inference, because order is anyway a completely unlikely outcome, if complex enough in terms of bits. However, I would still remind you that functional information is not order, and that it is not regularity. In functional information, the specification is connected to what we can do with the object in its specific configuration. The bits are harnessed towards a purpose, not towards some abstract concept of order. The form of functional information is often pseudo-random: in many cases, we cannot distinguish a functional sequence from a random one, unless we know how to test the function. Let's take the example of software: a program may appear as a random sequence of 0s and 1s (even if some regularities, of course, can be present). But if the sequence is correct it works. That's why programs do not arise spontaneously. Neither does language. Nor proteins. These are all forms of functional information, highly contingent and highly connected to a function. Robots, programs and machines are forms of frozen functional information. They can do a lot, but only in the measure of what they have been programmed to do. They can increase the computable complexity of specified outcomes (like the figures of pi), but not the Kolmogorov complexity of the system itself. But they cannot generate new original functional complexity, because, for example, they cannot generate new original specifications. A machine cannot recognize a potential new function, unless it was potentially defined in its original program. Even machines which can incorporate new information from the environment, or from their own working (like neural networks or AlphaGo) have the same basic limitation. In all cases, machines can only do what they have been programmed to do, either directly or indirectly. Conscious intelligent beings are different. They do things because they have conscious experiences and desires. That makes all the difference. The conscious experiences of understanding meaning, and of desiring outcomes, are the only true source of new original complex functional information. gpuccio
GP I agree, would this type of reasoning apply to any regularity we find in nature, including sequence #2? EricMH
EricMH: A robot is programmed by intelligent persons to implement some specific task. It is designed. So, a design inference is however necessary. A robot which can compute pi and write the figures on a rock wall is much more difficult to explain than the figures themselves. In the end, this is what the law of conservation of information probably means, without any need to understand in detail the mathematical complexities! :) gpuccio
GP a robot could have made the marks, and a robot is not intelligent. EricMH
EricMH: An example I have often used is the first 10000 decimal figures of pi. In particular, let's imagine that we arrive at a faraway planet. No traces of inhabitants. We reach a big rock wall, where some strange marks can be seen. Although they are not completely regular, they can be easily read as a binary code, two different marks. Someone also notices that, if read in a specific way, they really represent the first 10000 decimal figures of pi, in binary code. Without any error or ambiguity. Now, the marks could be easily explained as natural results of weather or other natural agents. But what about their configuration? I think we can definitely infer design. The important point is that we know absolutely nothing of the possible designer or designers. Except that they are designers (conscious intelligent purposeful agents), and maybe that they can compute pi. gpuccio
KF hah! don't I know it :) The binary progression example is good, though one could say that is the result of a mechanical process, too. Every finite string, for that matter, can be the result of a mechanical process, as well as many infinite strings. EricMH
EMH, yes, though in fact every argument will be flooded with objections. GP and I are just pointing out the difference between order and organisation. Perhaps, if you used something like: T-TH-HH-HTT-HTH-HHT-HHH-HTTT . . . etc we may show organisation, once we can see that this is counting up in binary. The point is this is NOT mere order that can be mechanically forced but fits an independent pattern that is indicative of intelligently directed configuration. Onward we could do say the first 100 primes in binary, or the like. And after that ASCII code compared with DNA code. KF kairosfocus
GP yes, the example doesn't explain the intricacies of accounting for all chance and necessity hypotheses. I just use it to illustrate the basic intuition behind the design inference without a lot of math. Even if we only get the audience to the point of realizing orderliness cannot come from pure randomness, that there must be some original source of order, I think that's a big step forward, because that original source also cannot have come from randomness, and we are left with needing to account for the primal order. An important thing the ID camp must keep in mind is the need for accurate, yet easy to understand, illustrations of key principles. Any source of ambiguity opens the flood gate for misunderstanding. EricMH
GP that is why we look for high contingency and functionally coherent organisation. KF kairosfocus
EricMH and KF: Great points. I would like to add that in functional information the evidence for design is even stronger than in simple "order". Indeed, while order is certainly a valid independent specification, still order can arise from law, when specific conditions in the system are present. For example, while sequence 1 at #28 has formal properties compatible with a random sequence generated by coin tossing, sequence 2 can be a designed sequence, but it can also be a sequence generated by tossing a coin which is strongly biased towards heads. Even if we remain in a random system (so that both results are possible), if the probability of heads is, say, 0.99, the probability of having 40 heads in a row is not at all negligible (about 0.67). Instead, when we have functional information so that a specific configuration is defined by an objectively defined function, like in a functional protein, or in software, no natural laws in the system can be invoked, because natural laws do not work according to any understanding of other natural laws. For example, random mutations certainly happen according to natural laws of biochemistry, regarding DNA replication and the biochemistry of nucleotides, but those natural laws certainly do not understand in any way, and are not related in any way to the natural laws of biochemistry which rule the function of a protein whose sequence is generated by the nucleotide sequence according to a symbolic code. So, no explanation based on natural laws and necessity can be invoked in the case of complex functional information. gpuccio
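A quick numerical check of the figures gpuccio quotes above (a minimal sketch; the 0.99 bias and the 40-toss run are the numbers given in the comment):

    # Probability of 40 heads in a row
    print(0.5  ** 40)   # fair coin:   ~9.1e-13, effectively never observed
    print(0.99 ** 40)   # biased coin: ~0.67, quite likely, so law/necessity can explain it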
EMH:
according to probability theory, both sequences are equally likely with a 50/50 coin toss. So, probability is not the reason. Then they will say, it is the pattern exhibited by #2 that is more improbable, not the sequence itself. Aha! That is correct. There is only one sequence of all heads, while there are many sequences that are a mixture.
This is of course a simple form of the question of independent specification that allows objective, and often macro-observable, clustering of configuration space states. We then can assign relative statistical weights to such clusters. In systems that are not constrained otherwise and random walk/hop on configurations, there is a spontaneous trend towards and to remain in the predominant cluster. This is where we see the grounding of thermodynamic equilibrium, fluctuations and more, with the second law of thermodynamics being closely related. In the coins case, the predominant group will be near 50-50 H-T, in no particular recognisable, simply describable pattern. That is, it resists Kolmogorov-style compression and exhibits high randomness in the sense that the state would basically have to be quoted. At the same time, there will be low configuration-driven functionality. That is, it is not a message or a coding of an algorithm or the like. D/RNA, of course, is complex and is coded with significant sections expressing algorithms. KF kairosfocus
Note: The above extends a prior discussion which provides further context and details: https://uncommondesc.wpengine.com/informatics/ai-state-configuration-space-search-and-the-id-search-challenge/ KF kairosfocus
The coin flip example I like to use is the following. I have two sequences of flips. One I generated randomly and one I created myself. Can you tell which is which? 1. TTHHTTTTTHHHHTTTTHHTHTTTHTHHTTTTTTHHHTTT 2. HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH An astute observer will say sequence #2 is the sequence I generated. But why? They will say it is more improbable. I point out that according to probability theory, both sequences are equally likely with a 50/50 coin toss. So, probability is not the reason. Then they will say, it is the pattern exhibited by #2 that is more improbable, not the sequence itself. Aha! That is correct. There is only one sequence of all heads, while there are many sequences that are a mixture. And this is precisely the inference to design. EricMH
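The sequence-versus-pattern distinction in the comment above can be made numerical with a short sketch (Python; the 40-flip length matches the two sequences shown, and the 15-25 band is my own illustrative cut-off for "roughly half heads"):

    from math import comb

    n = 40
    total = 2 ** n                                          # all 40-flip sequences
    print(1 / total)                                        # any one specific sequence: ~9.1e-13
    print(comb(n, 40) / total)                              # "all heads" pattern (1 sequence): ~9.1e-13
    print(comb(n, 20) / total)                              # "exactly 20 heads, any order": ~0.125
    print(sum(comb(n, k) for k in range(15, 26)) / total)   # "roughly half heads": ~0.92

Each individual sequence is equally improbable, but the all-heads pattern covers a single sequence while the roughly-half-heads pattern covers the overwhelming majority of them, which is the inference being drawn.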
Coin flip was a good illustration, KF. I'm just trying to figure out what Bob is thinking. tribune7
@KF, yes, that is a good explanation why even granting the big If of correlation still leave an extraordinary amount to be explained by blind processes. Since the proponents of blind evolution are very circumspect when it comes to clearly stating what their assumptions and logic are, I've been trying to infer what their underlying rationale is. Here is the best I've been able to suss. 1. Begin with methodological naturalism, because otherwise we would mistake anomalies, which could lead to further scientific discoveries, for God's inscrutable action in our world. 2. Second, we have evidence that organisms long ago were simple, genetically speaking, and now they are complex. 3. Methodological naturalism requires us to only consider blind processes (nothing else makes sense). 4. Therefore, a blind process is responsible for the progression from ancient simple organisms to modern complex organisms. The ID camp, on the other hand, argues that for 1-4 to be true then there must be a vast number of trials for the occurrence of 4 to be expected. The MN camp responds in one of two ways: A. We have no idea what the prior for our universe is. We can assume maximum entropy and a uniform distribution, which ID does. However, at the end of the day, we are ignorant of the initial probability, and it doesn't really matter, practically speaking. B. We ascribe something special to the complexity we see, but that is merely anthropomorphism. There is not an objective specification. Any specification the ID camp refers to is arbitrary, and thus the calculations of enormous CSI are selection bias. Finally, the ID camp responds: A. Maximum entropy is justified on the very same grounds used to justify MN. If any initial condition probabilities are acceptable, then this implies we cannot infer anything from what we observe, a la Hume. There are an infinite number of models that fit the evidence, and all models are acceptable. B. This is an equivocation between the instance, and the pattern the instance exhibits. While all instances may have the same a priori probability, not all patterns have the same probability. There are objective patterns that vary in likelihood, e.g. binomial distribution, Kolmogorov complexity. A string of 100 coin flips resulting in heads makes a non random hypothesis much more likely than a random hypothesis. Hence, Dembski's explanatory filter and the CSI calculation. But, the human mind usually cannot follow this many levels of argument, and the MN camp resets to the 1-4 argument, restarting the argument. EricMH
Trib, I think coin flip observations give a fairly concrete but relevant case. I suspect the context of a configuration or state space or worse a phase space is unfamiliar. Yes, a system on the scale of our cosmos will only implement a tiny fraction of the possibilities, but that does not mean that the ones not seen were impossible. And, the issue of clustering on utterly dominant gibberish vs deeply isolated islands of function also seems problematic for some. Wait till I start to speak of moving islands, like sand bar islands. KF kairosfocus
--The search space is a part of the natural world, so it’s just there.-- I'm not sure what you mean by this. Can you elaborate? --Indeed. So why are Dembski & Marks averaging over (impossibilities), with a non-zero probability?-- Can you be specific as to where they are doing that? tribune7
BO'H: a configuration space of possibilities is by definition based on configurations that COULD occur. The problem is that to instantiate you must pay a price in time and resources. Possibilities readily grow exponentially with complexity, as we all know. So, we soon enough see the needle in haystack challenge where already for 500 - 1,000 bits of information [as was previously discussed] the number of instantiated possibilities is a nearly zero fraction of the full set of abstract possibilities for a world of 10^80 atoms or a solar system of 10^57, where fast atomic interactions run at 10^-14 s and time is like 10^17 s. Ponder as a simple example 10^80 atoms each as an observer of a 1,000-member chain of coins flipped every 10^-14 s for that duration. We will get some 10^111 observed strings, but face 1.07*10^301 possibilities. The overwhelming pattern will be gibberish near 50-50% H : T, indeed, this is the first example of L K Nash and also Mandl for statistical mechanics. (To make it realistic, think about a paramagnetic substance with a weak orienting field.) Such a search will be maximally unlikely to encounter a meaningful 1,000 coin string, never mind that the full space has in it every possible string from TTTT . . . T to HHH . . . H. And as every FSCO/I rich entity can be described by a bit string in some language, of enough complexity, this is WLOG. Realistic cases will be far more complex with spaces far beyond those just noted. This is what the search challenge is about: deep isolation of islands of function in huge spaces of possibilities. The gibberish sub-space dominates blind search. The reason we are beating that in the text we are putting up is that we are using intelligently directed configuration. KF kairosfocus
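The arithmetic in the comment above can be checked directly (a minimal sketch using exactly the round figures quoted there):

    from math import log10

    observers     = 10 ** 80          # atoms of the observed cosmos
    flips_per_s   = 10 ** 14          # one inspection per 10^-14 s
    seconds       = 10 ** 17          # rough timeline in seconds
    observations  = observers * flips_per_s * seconds
    possibilities = 2 ** 1000         # all 1,000-coin strings

    print(f"observed strings : ~10^{log10(observations):.0f}")    # ~10^111
    print(f"possibilities    : ~{possibilities:.2e}")             # ~1.07e+301
    print(f"fraction sampled : ~10^{log10(observations) - log10(possibilities):.0f}")   # ~10^-190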
BO'H: first, indeed you have engaged, though not on the focal point. Next, you have been very unclear to me in your wording, if what I thought you addressed previously is not what you intended. You now seem to be suggesting that IF -- big if -- a search succeeds (or is abandoned?) after relatively few inquiries the scope W is irrelevant. Nope, as we cannot rely on assumed miracles or the magical ratchet of hill-climbing on one side -- that is, the challenge is to find islands of function. On the other, we are first looking at atoms in a Darwin's warm pond or the like in a pre-life environment, where such atoms are going to be thermally agitated and interacting at rates up to 10^14 possible reactions/s, for 10^17 s or so on the usual timeline. For 10^57 atoms there are going to be a lot of opportunities, just that the raw space of possibilities dwarfs it. And that is material. As for search for golden search, the point is that searches sample subsets of the possible configurations, which brings to bear the problem of the power set. Overwhelmingly, blind search will churn away, but is utterly unlikely to be observed spontaneously originating FSCO/I, for reasons of the sheer statistical weight of the gibberish states. And yes, that is close enough to the statistical grounding of the second law of thermodynamics. KF kairosfocus
t7 @ 20 -
–but if the search never gets to see those search spaces where it performs well, does that matter?– Isn’t that the crux of the matter? How does the search get to the search space where it does well?
The search space is a part of the natural world, so it's just there.
–the problem is that a lot of these search spaces don’t occur, and some are probably physically impossible.– Wouldn’t by definition a search space rule out impossibilities?
Indeed. So why are Dembski & Marks averaging over them, with a non-zero probability? Bob O'H
Bob, --but if the search never gets to see those search spaces where it performs well, does that matter?-- Isn't that the crux of the matter? How does the search get to the search space where it does well? --the problem is that a lot of these search spaces don’t occur, and some are probably physically impossible.-- Wouldn't by definition a search space rule out impossibilities? tribune7
gpuccio @ 18 - see kf's comment at 4. He implies that the average is over search spaces (which is also what the NFLs are based on). The problem is that a lot of these search spaces don't occur, and some are probably physically impossible. So why are they even considered? If the search does better than random over the possible search spaces, then the Dembski & Marks argument is invalid. But I haven't seen any consideration in the mathematics of only possible search spaces. Bob O'H
Bob O'H: "kf – but if the search never gets to see those search spaces where it performs well, does that matter?" And: "But you still haven’t addressed my question @ 6 – if the search never gets to see a lot of search spaces, do they matter? The D & M point you quote specifically relies on averaging over a lot of search spaces, but I’ve never seen an explanation for why this should be done." I am not sure I understand your point here. Could you be more specific, please? The original quote is: "Conservation of information dictates any search technique [–> as in not specifically correlated to the structure of the space, i.e. a map of the targets] will work, on average, as well as blind search." So, it seems to me that the scenario is about applying repeatedly some search to one search space, or about applying many different search techniques to the same search space (not specifically correlated to the structure of the space). I don't see anything about "a lot of search spaces", but maybe I am wrong. gpuccio
Bob O'H: "what do they mean by “on average”? In particular, what are they averaging over?" I am not the most appropriate person to comment on that, but I suppose that this is the general formulation of the famous "no free lunch" theorem. It probably means that some implementations of the search could perform better than blind chance, and some others worse, but that in average the performance of the search will not be better than blind chance. gpuccio
So, we can take it by studious absence of those who would pounce on real or imagined flaws, that it is realised that the grand narrative of peculiar and dubious use of the concept of a- search- challenged- by- needle- in- the- haystack- implausibility has been overturned. That is already significant.
Unlike me, evidently. *sniff*
in a biological setting the search and space are correlated, and that is why the NFLT does not apply
Obviously, that first is a claim to successful blind search for a golden search. Which, is exponentially harder than direct search.
Your argument for the difficulty of a search for a search relies on the same argument. But you still haven't addressed my question @ 6 - if the search never gets to see a lot of search spaces, do they matter? The D & M point you quote specifically relies on averaging over a lot of search spaces, but I've never seen an explanation for why this should be done. Bob O'H
EMH, So, we can take it by studious absence of those who would pounce on real or imagined flaws, that it is realised that the grand narrative of peculiar and dubious use of the concept of a- search- challenged- by- needle- in- the- haystack- implausibility has been overturned. That is already significant. But generally speaking we deal with a zero concession, zero acknowledgement to those IDiots policy. So, I guess we simply see silent bypassing of the point and let's go on to the next attack-point. So, we now face:
in a biological setting the search and space are correlated, and that is why the NFLT does not apply
Obviously, that first is a claim to successful blind search for a golden search. Which, is exponentially harder than direct search. For, where a config space has n possibilities and a search selects a subset, the set of possible searches is tantamount to the set of subsets. So search for golden search viewed as blind is a search in a higher space of 2^n, scale of the power set. (If you want more from Marks and Dembski et al, look at the search for a search paper for its analysis. I simply prefer a direct response on the nature of searches.) So, to appeal to the biological world as start-point is to already claim to be in a very specially fine-tuned world. Which itself strongly points to design. You will also notice that I am here not blindly appealing to no free lunch theorems but to need to solve search challenge. Issue, not label. Information and organisation involving that information need to be soberly explained on Newton's vera causa: observed demonstrated cause capable of leading to the like result. The problem then becomes appeal to convenient fluctuation without empirical demonstration, too often backed by imposition of another form of question-begging: evolutionary materialistic scientism by the back door of so called methodological naturalism. No, design inference is not "giving up on science" nor doomed "god of the shrinking gaps" or the like. It is the simple, sober recognition that functionally specific, complex organisation and associated information [FSCO/I] has only one empirically observed, search-challenge plausible explanation: intelligently directed configuration. But that is only a preliminary point. By already addressing biological search, the question has been begged of getting to first functional life architecture, that is OOL. This requires addressing physical, chemical and thermodynamic origins of a c-chemistry, aqueous medium, encapsulated, smart-gated metabolic automaton with an integral von Neumann self replicating facility. Where ability to reproduce by cellular self-replication integrated with such a metabolic entity [itself a huge exercise in integrated chemistry dwarfing an oil refinery in complex, specifically functional organised coherent system design] is prior to biology. Getting to the first shoreline of function. Hill-climbing beyond may imply appeal to well behaved fitness function, but we need not elaborate on the special nature of that as a form of fine tuning. Where, mere assertion of conveniently fine tuned matching up that gets us to OOL in such a convenient world is not enough, it has to be empirically demonstrated in realistic pre-life environments. Which simply has not been done nor is it anywhere close. Question-begging on steroids that gets you to oh, we can hill-climb a convenient fitness function. Summing up this first phase, intelligently directed configuration is the only vera causa supported account for the origin of a key requisite, FSCO/I. Until empirical demonstration of spontaneous, undesigned OOL in realistic pre-life environments is empirically warranted, we have every right to insist on this. Design sits at the table of scientific inference regarding the tree of life from its root on up. That root being OOL. Next, origin of body plans. We know from amino acid [AA] sequence space, that protein fold domains definitely come in deeply isolated clusters that are deeply isolated. That is, as is expected for FSCO/I, we are dealing with islands of function amidst a wide sea of gibberish. 
So, we now have to explain origin of coherently functional body plans that require 10 - 100+ million base pairs of incremental bio-information on top of the 100 - 1,000 kbases for basic cellular life. Dozens of times over. And connected to early stages of elaboration from an initial cell, which is exactly where empirical evidence points strongly to destructive sensitivity to perturbations. That is, to fine tuning and deeply isolated islands of function. Where a space of order 2^[10^7] is outrageously beyond the search resources of a solar system of order 10^57 atoms or an observed cosmos of 10^80 atoms in which 10^-14s is a fast organic reaction time and 10^17s is a generally accepted order of magnitude to the singularity. Where in fact, the Cambrian life revolution points to a very narrow geological window of time and resources on a planet of what 10^40+ atoms. In short, again, the question of getting to shorelines of deeply isolated function is begged. Where is the vera causa, observationally warranted evidence of the origin of the claimed result? Nowhere in evidence. If someone doubts, just present it _____ . Then, indicate discovery by whom ______ and the Nobel or equivalent prizes won for the empirically confirmed (not speculative, ideologically question-begged) result: _____ . Those blanks are a tad hard to fill in, and we can confidently state that such will continue to be the case. Intelligently directed configuration, AKA design, is the strong horse on the world of life. KF kairosfocus
@KF, an objection I hear is that in a biological setting the search and space are correlated, and that is why the NFLT does not apply. EricMH
BO'H: if the link of search to space is in effect random, the odds that search and space will be well fitted are all but zero for large spaces of any reasonable scale. So we have reason to see that even searches that would do very well given the right match would all but certainly fail under the circumstances in view; hence the mismatch comment. If you are responding to the twisty MCQ test comparative: in that case nearly right is worse than outright random, as the subtle distractors make the wrong option LOOK right, so you are more likely to fail if your understanding is three-quarters right than if you are simply guessing. So under pathological circumstances it CAN matter. KF kairosfocus
kf @ 1 - that doesn't address my question @ 6, though. Bob O'H
Robo, interesting snippets. KF kairosfocus
BO'H: an intelligently directed configuration process can routinely defeat the search challenge. E.g. in typing up comments we get it close to right first time, though the odd typo fix or edit may be needed. The contrast in capability is one of the reasons to infer design as best explanation. It is blind search that is readily overwhelmed by functionally specific complex organisation and information. Of course, design is often based on mapping the territory or the like, which then minimises search effort while achieving success. KF kairosfocus
Quotes from Chapter 3 of Dembski and Marks EI Book (sorry for bad copy/pasting from PDF): p.59, 3.9 Conclusions: Design is an inherently iterative search process requiring domain intelligence and expertise. Domain knowledge and experience can be applied to the search procedure to decrease the time needed for a successful search. Because of the exponential explosion of possibilities (i.e. the curse of dimensionality), the time and resources required by blind search quickly become too large to apply. Undirected Darwinian evolution has neither the time nor computational resources to design anything of even moderate complexity. External knowledge is needed. Neither quantum computing nor Moore's law makes a significant dent in these requirements. Robocop
Quotes from Chapter 3 of Dembski and Marks EI Book (sorry for bad copy/pasting from PDF): p.47, 3.5.1 Will Moore ever help? How about Grover? How about computers of the future? Will they allow large undirected blind searches within a reasonable amount of time? Even for problems of intermediate size, the answer is no. Moore's law says that the number of transistors in a dense integrated circuit doubles approximately every two years. For discussion purposes, assume the speed of the computer doubles every year. Suppose there is a recipe with 1,500 design parameters, each of which takes on a value of one (use the ingredient) or zero (doesn't use it). Assume it takes a year to compute all of the recipes. Let the speed of the computer double. How much larger a search can we now do in a year? If the speed of the computer has doubled, the disappointing answer is that a search can be done for only 1,501 ingredients. Only 1 more ingredient can be considered. For the new search, we'd have to do the original search where the new ingredient is not used, and repeat the experiment for when the new ingredient is used. The effect of the addition of a single ingredient in the search is independent of the original search without the extra ingredient. Faster computers will not solve our problem. What about the field of quantum computing? A quantum computer makes use of the strange and wonderful laws of quantum mechanics such as superposition and entanglement to perform operations on data. If implemented, Shor's algorithm for quantum computers could rapidly decrypt many of the cryptographic systems in use today. Robocop
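The "only one more ingredient" point is just the arithmetic 2^(n+1) = 2 × 2^n: a machine twice as fast exhaustively sweeps exactly one extra binary parameter in the same time. A small sketch (the 1,500-parameter baseline is the book's illustrative figure; the helper function below is not theirs):

```python
import math

# Sketch of the doubling argument in the quoted passage: exhaustively
# sweeping n binary design parameters costs 2**n evaluations, so a
# speed-up factor of s buys only log2(s) extra parameters in the same
# wall-clock budget. (1,500 parameters is the book's illustrative
# figure; this helper is an assumption for illustration.)

def parameters_searchable(speedup, base_parameters=1500):
    return base_parameters + int(math.log2(speedup))

for speedup in (2, 1024, 2 ** 40):
    print(f"speed-up x{speedup:<15,} -> {parameters_searchable(speedup)} parameters")
```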
Quotes from Chapter 3 of Dembski and Marks EI Book (sorry for bad copy/pasting from PDF): p.40 The job of SEARCH is to determine the next recipe to present to the COOK computer program. With no understanding of the shape of the fitness function or where the target is, there is not much guidance to choose the next recipe. The best we can do is not revisit a recipe that has already been tried. This is a type of blind search. 3.3.5 A search for a good pancake #5: Simulating pancakes on a computer with an artificial tongue using an evolutionary search. A blind search corresponds to a single sightless agent with a good memory walking around the search space and asking an oracle what the fitness is at its current location. For fast parallel computers, the situation can be improved. Instead of one agent, a team of agents can be searching the landscape, communicating with each other on walkie-talkies. A diamond necklace lost in a field is better found by a moving line of searchers holding hands than a single searcher walking the field in a Zamboni pattern. Evolutionary search is a special case of multiple agent search. For the pancake problem, a simple evolutionary search is shown on the bottom of Fig. 3.4. N recipes are first presented to the oracle (consisting of the COOK and TONGUE algorithms). Only recipes with high taste rankings are kept (survival of the fittest). Low taste rankings are discarded. To keep the population at a count of N, the discarded recipes are replaced with copies of recipes with higher rankings. We have repopulated. Each of the N recipes is now changed slightly. The changes are minor in hopes that the new recipe maintains the features that made it good in the first place. This corresponds to the mutation step in evolution. One generation of evolution has occurred. The N new recipes are then subjected to a new cycle of selection (survival of the fittest), repopulation and mutation. The hope of evolutionary programs is that the population will become stronger and stronger as witnessed by ever increasing fitness scores and, in the case of the pancake design, ultimately result in a recipe for a delicious pancake. Robocop
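The selection / repopulation / mutation cycle described in that passage can be sketched generically (a toy only; the "oracle" fitness below is a placeholder, not the book's COOK and TONGUE programs, and all parameters are illustrative):

```python
import random

# Generic toy of the evolutionary cycle quoted above: keep the better
# half of a population of bit-string "recipes", refill with copies,
# mutate every recipe slightly, repeat. The fitness oracle is a stand-in.

GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 20, 30, 50, 0.05

def fitness(recipe):
    return sum(recipe)  # placeholder oracle: count of "good" ingredient choices

def mutate(recipe):
    # Minor changes, hoping to keep the features that made the recipe good.
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in recipe]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]          # selection
    refills = [random.choice(survivors)              # repopulation
               for _ in range(POP_SIZE - len(survivors))]
    population = [mutate(r) for r in survivors + refills]  # mutation

print("best fitness after", GENERATIONS, "generations:",
      fitness(max(population, key=fitness)))
```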
kf - but if the search never gets to see those search spaces where it performs well, does that matter? Bob O'H
Quotes from Chapter 3 of Dembski and Marks EI Book (sorry for bad copy/pasting from PDF): p.31 Brilliant electrical engineer Nikola Tesla disagreed. Tesla felt that 99% perspiration is not necessary for those who know what they are doing. He admonished Edison for his lack of domain expertise and the consequent busywork required for Edison's invention process. In his own career, Tesla brilliantly manipulated visions and foundational theory in his creative mind and conceived of astonishing inventions such as brushless alternating current induction motors and wireless energy transfer. Tesla wrote that Edison required a large number of trials because of his lack of domain expertise. Tesla writes: "[Edison's] method [of design] was inefficient in the extreme, for an immense ground had to be covered to get anything at all unless blind chance intervened and, at first, I was almost a sorry witness of his doings, knowing that just a little theory and calculation would have saved him 90 percent of the labor. But he had a veritable contempt for book learning and mathematical knowledge, trusting himself entirely to his inventor's instinct and practical American sense." From the numbers he used, Tesla apparently believed that genius is 0.1×90% = 9% perspiration and the remaining 91% inspiration. In the quotation above, Tesla makes mention of blind chance. A blind search results when there is no domain expertise. We will hear much more about this later. Tesla engaged in a famous battle against Edison concerning the use of alternating versus direct current. As witnessed by the output of the electrical outlets in your home today, Tesla's technology prevailed. Robocop
Trib & BO'H: Yes. They point out that a search well matched to a given "map" would likely do worse than random on most spaces, which will overwhelmingly be mismatched to that map. A good comparative is the old-fashioned penalised multiple-choice test with sneaky, tricky distractors: if your understanding is a little off, you can do worse than if you had blindly guessed outright. I have seen negative scores, which understandably are really demoralising. KF kairosfocus
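A toy calculation of the penalised-test point (the four-option, +1 / −1/3 scoring rule and all numbers below are illustrative assumptions, not from the comment): on any item where a deceptive distractor draws a confident wrong answer, the expected score is worse than a pure guesser's, and with enough such items the total goes negative.

```python
# Toy penalised multiple-choice model (illustrative assumptions): four
# options per question, +1 for a correct answer, -1/3 for a wrong one,
# so blind guessing has an expected score of exactly zero per question.

OPTIONS, PENALTY, QUESTIONS = 4, 1 / 3, 100

def per_question(strategy):
    if strategy == "knows":    # understands the item, answers correctly
        return 1.0
    if strategy == "guesses":  # picks uniformly at random
        return (1 / OPTIONS) - (1 - 1 / OPTIONS) * PENALTY
    if strategy == "fooled":   # a deceptive distractor looks right, so picks it
        return -PENALTY
    raise ValueError(strategy)

for s in ("knows", "guesses", "fooled"):
    print(f"{s:>8}: expected score per question = {per_question(s):+.3f}")

# Whole test: fooled on a fraction f of the items, correct on the rest.
for f in (0.25, 0.5, 0.8):
    total = QUESTIONS * ((1 - f) * per_question("knows") + f * per_question("fooled"))
    print(f"fooled on {f:.0%} of {QUESTIONS} questions -> expected total {total:+.1f}")
```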
Bob, I read it as saying that any method without restrictions on where to search will have the same rate of success as not using a method at all. tribune7
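That reading can be illustrated with a tiny experiment (a sketch under simplifying assumptions: a single target hidden uniformly at random, probes never repeated): averaged over target placements, a fixed query order finds the target in the same expected number of probes as blind random sampling.

```python
import random

# Tiny illustration of the "on average no better than blind search"
# reading (simplifying assumptions: one target hidden uniformly at
# random among N points; a search probes points without repetition;
# cost = number of probes until the target is hit).

N, TRIALS = 100, 20_000
fixed_order = list(range(N))                 # a deterministic "method"
totals = {"fixed method": 0.0, "blind random": 0.0}

for _ in range(TRIALS):
    target = random.randrange(N)             # unknown to the searcher
    totals["fixed method"] += fixed_order.index(target) + 1
    totals["blind random"] += random.sample(range(N), N).index(target) + 1

for name, total in totals.items():
    print(f"{name:>12}: mean probes = {total / TRIALS:.1f}   (theory: {(N + 1) / 2})")
```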
When Dembski & Marks write this:
Conservation of information dictates any search technique [–> as in not specifically correlated to the structure of the space, i.e. a map of the targets] will work, on average, as well as blind search.
what do they mean by "on average"? In particular, what are they averaging over? Bob O'H
