
A note on state space search challenge


As was recently discussed, and contrary to objections being made, blind search in a configuration or state space, and the linked search challenge, are reasonable and indeed recognised concepts. As we explore this a little more, an illustration may be helpful:

With this in mind, we may again look at Dembski’s arrow and target illustration from NFL, p. 11:

ID researcher William A Dembski, NFL, p.11 on possibilities, target zones and events

Now, let us ponder again Wiki on state space search:

>>State space search is a process used in the field of computer science, including artificial intelligence (AI), in which successive configurations or states of an instance are considered, with the intention of finding a goal state with a desired property.

Problems are often modelled as a state space, a set of states that a problem can be in. The set of states forms a graph where two states are connected if there is an operation that can be performed to transform the first state into the second.

State space search often differs from traditional computer science search methods because the state space is implicit: the typical state space graph is much too large to generate and store in memory. Instead, nodes are generated as they are explored, and typically discarded thereafter. A solution to a combinatorial search instance may consist of the goal state itself, or of a path from some initial state to the goal state.

Representation

[–> Note, I would prefer stating the tuple as say: S := {Ω, A, Action(s), Result(s,a), Cost(s,a)} ]

Examples of State-space search algorithms

Uninformed Search

According to Poole and Mackworth, the following are uninformed state-space search methods, meaning that they do not have information about the goal's location.[1]

Depth-first search
Breadth-first search
Lowest-cost-first search

Informed Search

Some algorithms take into account information about the goal node’s location in the form of a heuristic function[2]. Poole and Mackworth cite the following examples as informed search algorithms:

Heuristic depth-first search
Greedy best-first search
A* search>>
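To make the contrast concrete, here is a minimal sketch of an uninformed search (breadth-first) against an informed one (A*) on an implicit state space. The toy instance -- integer states, +1 and *2 operators, a crude distance heuristic -- is an illustrative assumption of mine, not drawn from the quoted article:

```python
# Toy implicit state space: states are integers, operators are +1 and *2.
# Nodes are generated lazily, as the quoted text describes; the graph is
# never stored in full. (Illustrative sketch only.)
import heapq
from collections import deque

START, GOAL = 1, 97  # hypothetical instance

def neighbors(s):
    return [s + 1, s * 2]

def bfs(start, goal):
    """Uninformed: expands by depth, with no knowledge of the goal's location."""
    frontier, seen, expanded = deque([start]), {start}, 0
    while frontier:
        s = frontier.popleft()
        expanded += 1
        if s == goal:
            return expanded
        for n in neighbors(s):
            if n not in seen and n <= 2 * goal:  # prune runaway states
                seen.add(n)
                frontier.append(n)

def astar(start, goal):
    """Informed: orders the frontier by cost-so-far plus a heuristic."""
    h = lambda s: abs(goal - s)  # crude distance heuristic, for illustration
    frontier, seen, expanded = [(h(start), 0, start)], {start}, 0
    while frontier:
        _, g, s = heapq.heappop(frontier)
        expanded += 1
        if s == goal:
            return expanded
        for n in neighbors(s):
            if n not in seen and n <= 2 * goal:
                seen.add(n)
                heapq.heappush(frontier, (g + 1 + h(n), g + 1, n))

print("BFS expanded:", bfs(START, GOAL))   # expands many states blindly
print("A* expanded:", astar(START, GOAL))  # fewer on this instance, guided by h
```

The informed variant succeeds with fewer expansions only because its heuristic happens to correlate with the structure of this particular space, which is exactly the point at issue below.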

This now allows us to better appreciate the sort of challenge that blind watchmaker search faces in a Darwin's warm pond or the like pre-life environment, or in hopping from one body plan to another:

Thus, we will better appreciate the general point Dembski and Marks et al have been making:

>>Needle-in-the-haystack [search] problems look for small targets in large spaces. In such cases, blind search stands no hope of success. Conservation of information dictates any search technique [–> as in not specifically correlated to the structure of the space, i.e. a map of the targets] will work, on average, as well as blind search. Success requires an assisted [intelligently directed, well-informed] search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search.

[–> where once search is taken as implying sample, searches take subsets so the set of possible searches is tantamount to the power set of the original set. For a set of cardinality n, the power set has cardinality 2^n.]>>
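To gauge the scale of that higher-level "search for a search", a back-of-envelope sketch; the 500-bit space and the identification of a search with the subset it samples are simplifying assumptions:

```python
# Scale of the power set for a 500-bit configuration space (sketch).
import math

n_bits = 500                          # assumed size of the original space, in bits
log10_space = n_bits * math.log10(2)  # log10 of |Omega| = 2^500
print(f"configurations |Omega| ~ 10^{log10_space:.1f}")      # ~10^150.5, i.e. ~3.3e150

# Treating each candidate search as a subset of Omega, the space of
# searches is the power set, of size 2^|Omega| -- far too large to compute
# directly, so we report its double logarithm instead.
log10_of_log10 = log10_space + math.log10(math.log10(2))
print(f"candidate searches ~ 10^(10^{log10_of_log10:.0f})")  # ~10^(10^150)
```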

In this context, once we notice that we are in fact addressing an entity that is marked by the sort of functionally specific complex organisation and/or information [FSCO/I] that leads to this sort of needle in haystack challenge, the design inference explanatory filter applies:

And yes, yet again, we are back to the point that the origin of life and body plans is best explained on design; intelligently directed configuration by whatever ways and means such could have been achieved.

Further food for thought. END

Comments
BO'H: a configuration space of possibilities is by definition based on configurations that COULD occur. The problem is that to instantiate you must pay a price in time and resources. Possibilities readily grow exponentially with complexity, as we all know. So, we soon enough see the needle-in-haystack challenge where already for 500 - 1,000 bits of information [as was previously discussed] the number of instantiated possibilities is a nearly zero fraction of the full set of abstract possibilities for a world of 10^80 atoms or a solar system of 10^57, where fast atomic interactions run at 10^-14 s and time is like 10^17 s.

Ponder as a simple example 10^80 atoms, each as observer of a 1,000-member chain of coins flipped every 10^-14 s for that duration. We will get some 10^111 observed strings, but face 1.07*10^301 possibilities. The overwhelming pattern will be gibberish near 50-50% H : T; indeed, this is the first example of L K Nash and also Mandl for statistical mechanics. (To make it realistic, think about a paramagnetic substance with a weak orienting field.) Such a search will be maximally unlikely to encounter a meaningful 1,000-coin string, never mind that the full space has in it every possible string from TTTT . . . T to HHH . . . H.

And as every FSCO/I-rich entity of enough complexity can be described by a bit string in some language, this is WLOG. Realistic cases will be far more complex, with spaces far beyond the just noted. This is what the search challenge is about: deep isolation of islands of function in huge spaces of possibilities. The gibberish sub-space dominates blind search. The reason we are beating that in the text we are putting up is that we are using intelligently directed configuration. KF

kairosfocus
February 6, 2018, 11:34 AM PDT
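The coin-flip arithmetic in the comment above is easy to check; a minimal sketch (the 1,000-coin space and the 10^111 sample figure are from the comment; the ±50-head band is my choice of illustration):

```python
# Check the 1,000-coin figures: size of the space, concentration near 50-50,
# and the fraction of the space 10^111 observations could sample. (Sketch.)
import math

N = 1000
total = 2 ** N
print(f"possibilities: 2^1000 ~ {total:.3e}")      # ~1.07e301, as stated

# Probability a random string lands within +/-50 of 500 heads:
band = sum(math.comb(N, k) for k in range(450, 551)) / total
print(f"P(450 <= heads <= 550) ~ {band:.4f}")      # ~0.998: near 50-50 dominates

# Fraction of the space touched by ~10^111 observed strings:
print(f"sampled fraction ~ 10^{111 - 301}")        # 10^-190
```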
BO'H: first, indeed you have engaged, though not on the focal point. Next, your wording has been very unclear to me, if what I thought you addressed previously is not what you intended. You now seem to be suggesting that IF -- big if -- a search succeeds (or is abandoned?) after relatively few inquiries, the scope W is irrelevant. Nope, as we cannot rely on assumed miracles or the magical ratchet of hill-climbing on one side -- that is, the challenge is to find islands of function. On the other, we are first looking at atoms in a Darwin's warm pond or the like pre-life environment, where such atoms are going to be thermally agitated and interacting at rates up to 10^14 possible reactions/s, for 10^17 s or so on the usual timeline. For 10^57 atoms there are going to be a lot of opportunities; it is just that the raw space of possibilities dwarfs it. And that is material.

As for search for golden search, the point is that searches sample subsets of the possible configurations, which brings to bear the problem of the power set. Overwhelmingly, blind search will churn away, but is utterly unlikely to be observed spontaneously originating FSCO/I, for reasons of the sheer statistical weight of the gibberish states. And yes, that is close enough to the statistical grounding of the second law of thermodynamics. KF

kairosfocus
February 6, 2018, 11:13 AM PDT
t7 @ 20 -
–but if the search never gets to see those search spaces where it performs well, does that matter?– Isn’t that the crux of the matter? How does the search get to the search space where it does well?
The search space is a part of the natural world, so it's just there.
–the problem is that a lot of these search spaces don’t occur, and some are probably physically impossible.– Wouldn’t by definition a search space rule out impossibilities?
Indeed. So why are Dembski & Marks averaging over them, with a non-zero probability?

Bob O'H
February 6, 2018, 10:16 AM PDT
Bob, --but if the search never gets to see those search spaces where it performs well, does that matter?-- Isn't that the crux of the matter? How does the search get to the search space where it does well?

--the problem is that a lot of these search spaces don't occur, and some are probably physically impossible.-- Wouldn't by definition a search space rule out impossibilities?

tribune7
February 6, 2018, 08:43 AM PDT
gpuccio @ 18 - see kf's comment at 4. He implies that the average is over search spaces (which is also what the NFLs are based on). The problem is that a lot of these search spaces don't occur, and some are probably physically impossible. So why are they even considered? If the search does better than random over the possible search spaces, then the Dembski & Marks argument is invalid. But I haven't seen any consideration in the mathematics of only possible search spaces.

Bob O'H
February 6, 2018, 06:53 AM PDT
Bob O'H: "kf – but if the search never gets to see those search spaces where it performs well, does that matter?" And: "But you still haven’t addressed my question @ 6 – if the search never gets to see a lot of search spaces, do they matter? The D & M point you quote specifically relies on averaging over a lot of search spaces, but I’ve never seen an explanation for why this should be done." I am not sure I understand your point here. Could you be more specific, please? The original quote is: "Conservation of information dictates any search technique [–> as in not specifically correlated to the structure of the space, i.e. a map of the targets] will work, on average, as well as blind search." So, it seems to me that the scenario is about applying repeatedly some search to one search space, or about applying many different search techniques to the same search space (not specifically correlated to the structure of the space). I don't see anything about "a lot of search spaces", but maybe I am wrong.gpuccio
February 6, 2018, 06:14 AM PDT
Bob O'H: "what do they mean by “on average”? In particular, what are they averaging over?" I am not the most appropriate person to comment on that, but I suppose that this is the general formulation of the famous "no free lunch" theorem. It probably means that some implementations of the search could perform better than blind chance, and some others worse, but that in average the performance of the search will not be better than blind chance.gpuccio
February 6, 2018, 06:07 AM PDT
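The "on average" claim can be seen in miniature; a toy illustration (mine, with an assumed six-state space and a single hidden target):

```python
# With one target hidden uniformly in M states, every non-repeating query
# order needs the same expected number of probes: no fixed "technique"
# beats any other once we average over where the target sits. (Toy sketch.)
import random

M = 6
states = list(range(M))

def probes_needed(order, target):
    return order.index(target) + 1

orders = [tuple(states),                              # scan forwards
          tuple(reversed(states)),                    # scan backwards
          tuple(random.Random(0).sample(states, M))]  # a "blind" shuffle
for order in orders:
    avg = sum(probes_needed(order, t) for t in states) / M
    print(order, "-> average probes:", avg)           # all print 3.5 = (M+1)/2
```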
--So, we can take it by studious absence of those who would pounce on real or imagined flaws, that it is realised that the grand narrative of peculiar and dubious use of the concept of a- search- challenged- by- needle- in- the- haystack- implausibility has been overturned. That is already significant.--

Unlike me, evidently. *sniff*

--in a biological setting the search and space are correlated, and that is why the NFLT does not apply--

--Obviously, that first is a claim to successful blind search for a golden search. Which, is exponentially harder than direct search.--

Your argument for the difficulty of a search for a search relies on the same argument. But you still haven't addressed my question @ 6 - if the search never gets to see a lot of search spaces, do they matter? The D & M point you quote specifically relies on averaging over a lot of search spaces, but I've never seen an explanation for why this should be done.

Bob O'H
February 6, 2018, 05:59 AM PDT
EMH,

So, we can take it by studious absence of those who would pounce on real or imagined flaws, that it is realised that the grand narrative of peculiar and dubious use of the concept of a- search- challenged- by- needle- in- the- haystack- implausibility has been overturned. That is already significant. But generally speaking we deal with a zero concession, zero acknowledgement to those IDiots policy. So, I guess we simply see silent bypassing of the point and let's go on to the next attack-point. So, we now face:

in a biological setting the search and space are correlated, and that is why the NFLT does not apply

Obviously, that first is a claim to successful blind search for a golden search. Which, is exponentially harder than direct search. For, where a config space has n possibilities and a search selects a subset, the set of possible searches is tantamount to the set of subsets. So search for golden search viewed as blind is a search in a higher space of 2^n, scale of the power set. (If you want more from Marks and Dembski et al, look at the search for a search paper for its analysis. I simply prefer a direct response on the nature of searches.)

So, to appeal to the biological world as start-point is to already claim to be in a very specially fine-tuned world. Which itself strongly points to design. You will also notice that I am here not blindly appealing to no free lunch theorems but to the need to solve the search challenge. Issue, not label. Information and organisation involving that information need to be soberly explained on Newton's vera causa: observed, demonstrated cause capable of leading to the like result. The problem then becomes appeal to convenient fluctuation without empirical demonstration, too often backed by imposition of another form of question-begging: evolutionary materialistic scientism by the back door of so-called methodological naturalism.

No, design inference is not "giving up on science" nor doomed "god of the shrinking gaps" or the like. It is the simple, sober recognition that functionally specific, complex organisation and associated information [FSCO/I] has only one empirically observed, search-challenge plausible explanation: intelligently directed configuration.

But that is only a preliminary point. By already addressing biological search, the question has been begged of getting to first functional life architecture, that is OOL. This requires addressing physical, chemical and thermodynamic origins of a c-chemistry, aqueous medium, encapsulated, smart-gated metabolic automaton with an integral von Neumann self-replicating facility. Where ability to reproduce by cellular self-replication integrated with such a metabolic entity [itself a huge exercise in integrated chemistry dwarfing an oil refinery in complex, specifically functional, organised, coherent system design] is prior to biology. Getting to the first shoreline of function. Hill-climbing beyond may imply appeal to a well-behaved fitness function, but we need not elaborate on the special nature of that as a form of fine tuning. Where, mere assertion of conveniently fine-tuned matching up that gets us to OOL in such a convenient world is not enough; it has to be empirically demonstrated in realistic pre-life environments. Which simply has not been done, nor is it anywhere close. Question-begging on steroids that gets you to, oh, we can hill-climb a convenient fitness function.

Summing up this first phase, intelligently directed configuration is the only vera causa supported account for the origin of a key requisite, FSCO/I. Until spontaneous, undesigned OOL in realistic pre-life environments is empirically demonstrated, we have every right to insist on this. Design sits at the table of scientific inference regarding the tree of life from its root on up. That root being OOL.

Next, origin of body plans. We know from amino acid [AA] sequence space that protein fold domains definitely come in deeply isolated clusters. That is, as is expected for FSCO/I, we are dealing with islands of function amidst a wide sea of gibberish.

So, we now have to explain the origin of coherently functional body plans that require 10 - 100+ million base pairs of incremental bio-information on top of the 100 - 1,000 kbases for basic cellular life. Dozens of times over. And connected to early stages of elaboration from an initial cell, which is exactly where empirical evidence points strongly to destructive sensitivity to perturbations. That is, to fine tuning and deeply isolated islands of function. Where a space of order 2^[10^7] is outrageously beyond the search resources of a solar system of order 10^57 atoms or an observed cosmos of 10^80 atoms, in which 10^-14 s is a fast organic reaction time and 10^17 s is a generally accepted order of magnitude to the singularity. Where in fact the Cambrian life revolution points to a very narrow geological window of time and resources on a planet of some 10^40+ atoms.

In short, again, the question of getting to shorelines of deeply isolated function is begged. Where is the vera causa, observationally warranted evidence of the origin of the claimed result? Nowhere in evidence. If someone doubts, just present it _____ . Then, indicate discovery by whom ______ and the Nobel or equivalent prizes won for the empirically confirmed (not speculative, ideologically question-begged) result: _____ . Those blanks are a tad hard to fill in, and we can confidently state that such will continue to be the case. Intelligently directed configuration, AKA design, is the strong horse on the world of life. KF

kairosfocus
February 5, 2018, 09:13 PM PDT
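The resource arithmetic in the comment above is quickly checked; a small sketch (the figures are the commenter's, the calculation mine):

```python
# Upper bound on search events vs. the size of a body-plan config space.
import math

atoms    = 1e57   # solar-system atom count (order of magnitude)
rate     = 1e14   # fast reaction events per atom per second
duration = 1e17   # seconds on the conventional timeline

events = atoms * rate * duration
print(f"upper bound on search events ~ 10^{math.log10(events):.0f}")  # 10^88

# A space of order 2^(10^7), per the comment's body-plan figure:
log10_space = 1e7 * math.log10(2)
print(f"config space ~ 10^{log10_space:,.0f}")  # ~10^3,010,300, dwarfing 10^88
```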
@KF, an objection I hear is that in a biological setting the search and space are correlated, and that is why the NFLT does not apply.

EricMH
February 5, 2018, 04:44 PM PDT
BO'H: if there is in effect a random link of search to space, the odds that search and space will be well fitted are all but zero for reasonable-scale large spaces. So, we have reason to see that even searches that, given the right match, would do very well, typically would all but certainly fail under the circumstances in view. Hence the mismatch comment. If you are responding to the twisty MCQ test comparative: in that case nearly right is worse than just random, as the subtle distractors will make it LOOK right, making you more likely to fail if you are 3/4's right than if you are outright guessing. So under pathological circumstances it CAN matter. KF

kairosfocus
February 5, 2018, 03:31 PM PDT
kf @ 1 - that doesn't address my question @ 6, though.

Bob O'H
February 5, 2018, 02:35 PM PDT
Robo, interesting snippets. KF

kairosfocus
February 5, 2018, 02:31 PM PDT
BO'H: an intelligently directed configuration process can routinely defeat the search challenge. E.g. in typing up comments we get it close to right first time, though the odd typo fix or edit may be needed. The contrast in capability is one of the reasons to infer design as best explanation. It is blind search that is readily overwhelmed by functionally specific complex organisation and information. Of course, design is often based on mapping the territory or the like, which then minimises search effort while achieving success. KF

kairosfocus
February 5, 2018, 02:29 PM PDT
Quotes from Chapter 3 of Dembski and Marks EI Book (sorry for bad copy/pasting from PDF):

p.59, 3.9 Conclusions

Design is an inherently iterative search process requiring domain intelligence and expertise. Domain knowledge and experience can be applied to the search procedure to decrease the time needed for a successful search. Because of the exponential explosion of possibilities (i.e. the curse of dimensionality), the time and resources required by blind search quickly become too large to apply. Undirected Darwinian evolution has neither the time nor computational resources to design anything of even moderate complexity. External knowledge is needed. Neither quantum computing nor Moore's law makes a significant dent in these requirements.

Robocop
February 5, 2018, 02:02 PM PDT
Quotes from Chapter 3 of Dembski and Marks EI Book (sorry for bad copy/pasting from PDF):

p.47, 3.5.1 Will Moore ever help? How about Grover?

How about computers of the future? Will they allow large undirected blind searches within a reasonable amount of time? Even for problems of intermediate size, the answer is no. Moore's law says that the number of transistors in a dense integrated circuit doubles approximately every two years. For discussion purposes, assume the speed of the computer doubles every year. Suppose there is a recipe with 1,500 design parameters, each of which takes on a value of one (use the ingredient) or zero (doesn't use it). Assume it takes a year to compute all of the recipes. Let the speed of the computer double. How much larger a search can we now do in a year? If the speed of the computer has doubled, the disappointing answer is that a search can be done for only 1,501 ingredients. Only 1 more ingredient can be considered. For the new search, we'd have to do the original search where the new ingredient is not used, and repeat the experiment for when the new ingredient is used. The effect of the addition of a single ingredient in the search is independent of the original search without the extra ingredient. Faster computers will not solve our problem.

What about the field of quantum computing? A quantum computer makes use of the strange and wonderful laws of quantum mechanics such as superposition and entanglement to perform operations on data. If implemented, Shor's algorithm for quantum computers could rapidly decrypt many of the cryptographic systems in use today.

Robocop
February 5, 2018, 02:01 PM PDT
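The book's one-more-ingredient point reduces to 2^(b+1) = 2 * 2^b; a tiny sketch (mine, not the book's code):

```python
# With b binary ingredients there are 2^b recipes, so each doubling of
# machine speed buys exactly one more ingredient per search-year.
def ingredients_searchable(base_ingredients, speed_doublings):
    return base_ingredients + speed_doublings  # 2^b -> 2^(b+1) per doubling

b = 1500
for doublings in (1, 10, 50):
    print(f"after {doublings:2d} doublings: {ingredients_searchable(b, doublings)} ingredients")
# Fifty years of Moore-style doubling moves 1500 to just 1550: exponential
# spaces absorb exponential hardware gains almost whole.
```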
Quotes from Chapter 3 of Dembski and Marks EI Book (sorry for bad copy/pasting from PDF):

p.40

The job of SEARCH is to determine the next recipe to present to the COOK computer program. With no understanding of the shape of the fitness function or where the target is, there is not much guidance to choose the next recipe. The best we can do is not revisit a recipe that has already been tried. This is a type of blind search.

3.3.5 A search for a good pancake #5: Simulating pancakes on a computer with an artificial tongue using an evolutionary search

A blind search corresponds to a single sightless agent with a good memory walking around the search space and asking an oracle what the fitness is at its current location. For fast parallel computers, the situation can be improved. Instead of one agent, a team of agents can be searching the landscape, communicating with each other on walkie-talkies. A diamond necklace lost in a field is better found by a moving line of searchers holding hands than a single searcher walking the field in a Zamboni pattern. Evolutionary search is a special case of multiple agent search.

For the pancake problem, a simple evolutionary search is shown on the bottom of Fig. 3.4. N recipes are first presented to the oracle (consisting of the COOK and TONGUE algorithms). Only recipes with high taste rankings are kept (survival of the fittest). Low taste rankings are discarded. To keep the population at a count of N, the discarded recipes are replaced with copies of recipes with higher rankings. We have repopulated. Each of the N recipes is now changed slightly. The changes are minor in hopes that the new recipe maintains the features that made it good in the first place. This corresponds to the mutation step in evolution. One generation of evolution has occurred. The N new recipes are then subjected to a new cycle of selection (survival of the fittest), repopulation and mutation. The hope of evolutionary programs is that the population will become stronger and stronger, as witnessed by ever increasing fitness scores and, in the case of the pancake design, ultimately result in a recipe for a delicious pancake.

Robocop
February 5, 2018, 02:00 PM PDT
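The select/repopulate/mutate cycle just described fits in a few lines; a minimal sketch in which the bitstring "recipe" and the toy fitness oracle are stand-ins of mine, not the book's pancake simulator:

```python
# Minimal evolutionary search: select the fittest, repopulate, mutate.
import random

BITS, POP, GENS, MUT = 40, 20, 60, 0.02
random.seed(1)

def fitness(recipe):           # toy oracle: "taste" = number of set bits
    return sum(recipe)

def mutate(recipe):            # small changes, hoping to keep what works
    return [b ^ (random.random() < MUT) for b in recipe]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]               # survival of the fittest
    pop = [mutate(r) for r in survivors * 2]  # repopulate, then mutate
print("best fitness:", max(map(fitness, pop)), "of", BITS)
```

Note that this climbs only because the toy fitness function is smooth and defined everywhere, which is the "well-behaved fitness function" assumption debated elsewhere in this thread.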
kf - but if the search never gets to see those search spaces where it performs well, does that matter?

Bob O'H
February 5, 2018, 01:59 PM PDT
Quotes from Chapter 3 of Dembski and Marks EI Book (sorry for bad copy/pasting from PDF):

p.31

Brilliant electrical engineer Nikola Tesla disagreed. Tesla felt that 99% perspiration is not necessary for those who know what they are doing. He admonished Edison for his lack of domain expertise and the consequent busywork required for Edison's invention process. In his own career, Tesla brilliantly manipulated visions and foundational theory in his creative mind and conceived of astonishing inventions such as brushless alternating current induction motors and wireless energy transfer. Tesla wrote that Edison required a large number of trials because of his lack of domain expertise. Tesla writes: "[Edison's] method [of design] was inefficient in the extreme, for an immense ground had to be covered to get anything at all unless blind chance intervened and, at first, I was almost a sorry witness of his doings, knowing that just a little theory and calculation would have saved him 90 percent of the labor. But he had a veritable contempt for book learning and mathematical knowledge, trusting himself entirely to his inventor's instinct and practical American sense."

From the numbers he used, Tesla apparently believed that genius is 0.1 x 90% = 9% perspiration and the remaining 91% inspiration. In the quotation above, Tesla makes mention of blind chance. A blind search results when there is no domain expertise. We will hear much more about this later.

Tesla engaged in a famous battle against Edison concerning the use of alternating versus direct current. As witnessed by the output of the electrical outlets in your home today, Tesla's technology prevailed.

Robocop
February 5, 2018, 01:57 PM PDT
Trib & BO'H: Yes. They point out that a search well matched for a given "map" would likely do worse than randomness on most searches of spaces which will overwhelmingly be mismatched to the map. A good comparative is the old-fashioned penalised multiple-choice test with sneaky, tricky distractors. If you are a little off in your understandings, you can do worse than if you outright blindly guessed. I have seen negative scores, which understandably are really demoralising. KF

kairosfocus
February 5, 2018, 12:16 PM PDT
Bob, I read it as: any method without restrictions on where to search will have the same rate of success as not using a method.

tribune7
February 5, 2018, 07:43 AM PDT
When Dembski & Marks write this:
Conservation of information dictates any search technique [–> as in not specifically correlated to the structure of the space, i.e. a map of the targets] will work, on average, as well as blind search.
what do they mean by "on average"? In particular, what are they averaging over?

Bob O'H
February 5, 2018, 06:04 AM PDT
A note on state space search challenge

kairosfocus
February 5, 2018, 02:43 AM PDT
