
A note on state space search challenge


As was recently discussed, and contrary to objections being made, blind search and the linked search challenge in a configuration or state space are reasonable, indeed recognised, concepts. As we explore this a little further, an illustration may be helpful:

With this in mind, we may again look at Dembski’s arrow and target illustration from NFL, p. 11:

ID researcher William A. Dembski, NFL, p. 11, on possibilities, target zones and events

Now, let us ponder again Wiki on state space search:

>>State space search is a process used in the field of computer science, including artificial intelligence (AI), in which successive configurations or states of an instance are considered, with the intention of finding a goal state with a desired property.

Problems are often modelled as a state space, a set of states that a problem can be in. The set of states forms a graph where two states are connected if there is an operation that can be performed to transform the first state into the second.

State space search often differs from traditional computer science search methods because the state space is implicit: the typical state space graph is much too large to generate and store in memory. Instead, nodes are generated as they are explored, and typically discarded thereafter. A solution to a combinatorial search instance may consist of the goal state itself, or of a path from some initial state to the goal state.

Representation

[–> Note, I would prefer stating the tuple as, say: S := {{Ω, A, Action(s), Result(s,a), Cost(s,a)}} ]

Examples of State-space search algorithms

Uninformed Search

According to Poole and Mackworth, the following are uninformed state-space search methods, meaning that they do not use information about the goal's location.[1]

Depth-first search
Breadth-first search
Lowest-cost-first search

Informed Search

Some algorithms take into account information about the goal node’s location in the form of a heuristic function[2]. Poole and Mackworth cite the following examples as informed search algorithms:

Heuristic depth-first search
Greedy best-first search
A* search>>
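
As an aside for readers who want the quoted taxonomy made concrete, here is a minimal Python sketch of my own (not from the Wiki article) of a generic best-first search over an implicit state space. The pieces mirror the tuple suggested in the note above (states, actions, results and costs), and the only difference between the uninformed and informed variants is the priority rule used to order the frontier; the helper names are illustrative, not a standard API.

import heapq
from itertools import count

# Illustrative sketch: generic best-first search over an *implicit* state
# space. Successors are generated on demand and never stored as a whole
# graph, matching the "implicit graph" point in the quote above.
def best_first_search(initial, is_goal, successors, priority):
    # initial:    starting state
    # is_goal:    state -> bool
    # successors: state -> iterable of (action, next_state, step_cost)
    # priority:   (path_cost, state) -> number used to order the frontier
    tie = count()  # tie-breaker so states themselves are never compared
    frontier = [(priority(0, initial), next(tie), 0, initial, [])]
    visited = set()
    while frontier:
        _, _, cost, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, cost
        if state in visited:
            continue
        visited.add(state)
        for action, nxt, step in successors(state):
            new_cost = cost + step
            heapq.heappush(frontier, (priority(new_cost, nxt), next(tie),
                                      new_cost, nxt, path + [action]))
    return None  # frontier exhausted: no goal state reachable

# Uninformed, lowest-cost-first: order by accumulated cost alone.
def lowest_cost_first(g, s):
    return g

# Informed, A*: order by accumulated cost plus a heuristic estimate h(s).
def a_star(h):
    return lambda g, s: g + h(s)

Breadth-first search falls out as the unit-step-cost case of lowest-cost-first; what the informed variants add is exactly the kind of target-correlated information whose origin is at issue in the OP.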

This now allows us to better appreciate the sort of challenge that blind watchmaker search faces in a Darwinian warm pond or similar pre-life environment, or in hopping from one body plan to another:

Thus, we will better appreciate the general point Dembski and Marks et al have been making:

>>Needle-in-the-haystack [search] problems look for small targets in large spaces. In such cases, blind search stands no hope of success. Conservation of information dictates any search technique [–> as in not specifically correlated to the structure of the space, i.e. a map of the targets] will work, on average, as well as blind search. Success requires an assisted [intelligently directed, well-informed] search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search.

[–> where once search is taken as implying sample, searches take subsets so the set of possible searches is tantamount to the power set of the original set. For a set of cardinality n, the power set has cardinality 2^n.]>>
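
To put rough numbers on that bracketed note, here is a back-of-envelope sketch; the ~10^150 ceiling on blind samples is the figure commonly cited in this literature (~10^80 atoms x ~10^45 Planck-time states per second x ~10^25 seconds), and the rest is straightforward arithmetic.

from math import log2

# Commonly cited ceiling on blind samples available to the observed cosmos.
samples = 10**150

for bits in (500, 1000):
    space = 2**bits
    print(bits, samples / space)  # fraction of the space even inspectable
# 500 bits:  ~0.3 of the space at the utmost physical limit
# 1000 bits: ~9e-152, i.e. blind sampling covers effectively nothing

# Search for a search: if searches correspond to subsets of the space,
# a space of cardinality n has 2**n candidate searches. Toy check:
n = 10
print(2**n)  # 1024 subsets of a 10-element set; growth is exponential

On these numbers, the higher-level space of searches (2^n for a space of n configurations) dwarfs the original space, which is the point of the search-for-a-search argument.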

In this context, once we notice that we are in fact addressing an entity marked by the sort of functionally specific complex organisation and/or information [FSCO/I] that leads to this needle-in-a-haystack challenge, the design inference explanatory filter applies:

And yes, yet again, we are back to the point that the origin of life and of body plans is best explained by design: intelligently directed configuration, by whatever ways and means it may have been achieved.

Further food for thought. END

Comments
@GP, thank you for your time. I believe I understand your position, and will continue to compare it with my understanding to see if I can convince myself I'm wrong.
EricMH
February 18, 2018 at 11:59 AM PDT
EricMH: Thank you for your comment and your patience. Maybe I don't agree with everything you say, but probably it's better to stop here. One last note about this: "Thus, if human beings are indeed intelligent agents, we could hypothetically plug a brain monitor to read their entire brain state at time A, and at time B the brain would be in a very improbable, or perhaps impossible, state given the state at time A." Certainly not impossible. Improbable, but only if we consider its functional meaning. State B would be as probable as any other state at the quantum level, but its functional meaning would be utterly improbable. The laws of physics are completely unaware of functional meaning, so for the laws of physics state B is a perfectly legitimate state, like many others. To simplify: Shakespeare's sonnet 76, which I often quote, is as probable as any random sequence of letters of the same length. But of course there is only one sequence (or a few) which conveys the whole meaning of the sonnet, while almost all the random sequences convey absolutely nothing. So, if we find the sonnet, we infer design from what it means, not because that sequence is more improbable than any other by itself. The key point is the functional meaning, fully detectable by us as conscious beings. Natural laws have nothing to do with that.
gpuccio
February 13, 2018 at 03:08 PM PDT
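
[An editorial aside: gpuccio's equiprobability point is easy to check numerically. In this sketch the alphabet size and sequence length are illustrative assumptions (a plain letters-plus-space model), not measurements of the actual sonnet:

from math import log2

alphabet = 27   # 26 letters + space: a simplifying assumption
length   = 600  # rough order of magnitude for a sonnet's character count

# Any *specific* sequence of this length, sonnet or gibberish, has the
# same probability under uniform random typing: alphabet**(-length).
# That number underflows ordinary floats, so work in bits instead.
print(length * log2(alphabet))  # ~2853 bits, identical for every sequence

The asymmetry gpuccio points to is not in the per-sequence probability but in how few of the 27^600 sequences carry the sonnet's function.]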
@GP I very much appreciate your in-depth responses. I am sorry if my comments have been frustrating. You have answered all of my objections well, including whether the laws of physics are violated, with this final comment: "For example, if mutations were guided, for example at quantum level, by some conscious intelligent designer, no law of physics would be violated, no energy would be created. Only, unlikely configurations would happen that would not otherwise happen. But that does not violate any law of physics, only the laws of probability in random non conscious systems." I completely agree with this. My point is that the subsequent probability of an intelligently designed event is greater than its prior probability, when conditioned on the physical facts of the matter. So, in this way, the intelligent agent disrupts what would be expected from a purely physical point of view, and would consequently be empirically detectable, a scientifically testable claim. Thus, if human beings are indeed intelligent agents, we could hypothetically plug in a brain monitor to read their entire brain state at time A, and at time B the brain would be in a very improbable, or perhaps impossible, state given the state at time A. Furthermore, this would be reversing the flow of entropy, moving the brain from a highly probable state to a highly improbable state. So there would be a net entropy reduction happening.
EricMH
February 13, 2018 at 09:09 AM PDT
EricMH: You say: "So if humans are intelligent agents, and are the evidence we can use to infer intelligent agency in biological history, it would appear that humans have some very counter intuitive abilities, which have testable ramifications. But surely these capabilities must be demonstrated in order to make the theory of intelligent agency scientifically solid."

a) Humans are intelligent agents.

b) They certainly have great abilities which have testable ramifications. But those abilities are not "counter intuitive", not at all. Indeed, they derive exactly from their specific intuitions (see later).

c) These capabilities can be very easily demonstrated. For example, my ability to write these comments is a demonstration. And you can have all the demonstration you want if you just read the posts in this thread, including yours.

d) The basic intuitions that allow us to generate complex functional information are the following:

1) We understand meanings. I understand meanings, you understand meanings. No non-conscious entity or system understands meanings. Understanding is a conscious experience, and it is the basis for cognition. It is a subjective intuition. Can you deny that we have the subjective and intuitive experience of understanding meanings?

2) We feel desires and purposes. That is fundamental to defining functions and implementing them. No non-conscious entity or system feels desires or the urge to implement functions.

You say: "And ID seems to imply some weird things. For example, if humans are intelligent agents, as you and KF claim, then this means that they create CSI." Of course they create CSI. Can you deny that? If our comments here seem too lowly to you, just look at each of Shakespeare's 154 sonnets.

You say: "If so, then they must have the ability to disrupt the fundamental laws of physics, otherwise the probability of an event occurring given the chance hypothesis would not be different than its specification." No, no, no! You seem not to understand anything of ID! No law of physics is disrupted. When I type on my keyboard and write this phrase, I am not disrupting any law of physics. I am only getting a result which is absolutely unlikely in any random non-conscious system, but which is perfectly normal if a conscious intelligent agent is outputting his conscious experiences to matter. The same is true of software, of paintings, of all human creations. Do you think that Michelangelo was violating the laws of physics when he painted the Sistine Chapel? Of course not. But he was doing something that no non-conscious system can do. Of course we break the probability barriers that are implicit in non-conscious systems. The simple reason is that we are conscious and intelligent. But breaking probability barriers is not the same as breaking the laws of physics. You seem to be really confused about that.

You say: "Finally, if intelligent agents can select an orderly target more frequently than expected, then it appears they have the ability to reduce net entropy of a system, which looks like creating energy out of nothing." No! Intelligent agents do not "select" an orderly target. They "create" an orderly target, by intervening on selectable switches which violate no laws of nature. We do not reduce entropy in any way which violates the second law. We intervene on the informational aspect of entropy. The same is true of biological design. For example, if mutations were guided, for example at the quantum level, by some conscious intelligent designer, no law of physics would be violated, no energy would be created. Only, unlikely configurations would happen that would not otherwise happen. But that does not violate any law of physics, only the laws of probability in random non-conscious systems.

I hope you may think about these points, but I am afraid that you are too convinced of some basic errors to really be able to see them. Of course, you are entitled to that. I don't think I can be clearer than this.
gpuccio
February 12, 2018 at 04:53 PM PDT
EricMH: I have just answered many of your points on another thread, so I paste those comments here for your attention. This is the thread: https://uncommondescent.com/intelligent-design/quote-of-the-day-28/#comment-651337 My comment #68:
Bob O'H: "How can you be sure that what is going on inside our brains/minds is not equivalent?" The thing that is not equivalent is that we have subjective experiences. While strong AI theories assume that subjectivity emerges from material configurations of objects, there is absolutely nothing that justifies that view. As KF said, the computer is not solving a chess problem; it is simply blindly executing chains of instructions, ultimately at the level of machine code, register transfers and ALU operations. In essence, there is no difference between an abacus and a computer. Adding simple operations to a computation, or increasing its speed, or varying the general structure of the computation (linear, parallel, what else) does not change the essence of the thing: it remains a computation effected by some material tool. There is absolutely no reason to think that an abacus has subjective experiences related to the computation we effect by it. In the same way, there is absolutely no reason to think that a computer has subjective experiences related to the software operations it is implementing, whatever they are. On the contrary, we know that we have subjective experiences. That's all the difference.

"Hm. I know a few humans who do that too." Something like that, I can agree. :) But even those humans, however unlikely it may appear, probably have subjective experiences. That they may use them badly (because of their own choice, or of other causes which do not depend on them) does not change the subjective nature of their representations.

"Are you saying that English is contrary to the laws of physics? What particular law does it break? Can you give a specific proof that English 'is far beyond the blind mechanical search capability of the observed cosmos'? Can you also explain why the Cosmos would be searching for English in the first place?" I would say that the English language, like any other form of complex functional information in objects, is well beyond the power of any non-conscious system. As I have often argued, complex functional configurations, bearing meaning (descriptive information) or function (prescriptive information) with a specificity beyond, say, 500-1000 bits, have never been observed as the result of any non-conscious system. And, beyond the empirical fact, there is also a deep reason for that: non-conscious systems can generate functional information only randomly, and 500-1000 bits of specific functional information (indeed, even much less than that) are definitely beyond the probabilistic resources of our universe. Of course, non-random mechanisms have also been invoked: NS is of course the best known. But NS can only proceed from information that already exists (biological beings that reproduce with limited resources), can only optimize already existing function, and has extremely limited power. For a more complete discussion about that, see here: https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ English is not beyond the capabilities of our cosmos, but only because our cosmos includes conscious intelligent beings. English is certainly beyond the capabilities of any non-conscious system in our cosmos. By the way, for an attempt at computing the functional information in English language texts, look here: https://uncommondescent.com/intelligent-design/an-attempt-at-computing-dfsci-for-english-language/
And #69:
Bob O'H: As I have argued, many times, the specific ability of conscious intelligent beings like us to break the probabilistic barriers and generate tons of complex functional information (in machines, software and language) can easily be traced to the subjective experiences that allow them to design complex objects:
– The subjective experience of cognition, in particular of understanding meanings
– The subjective experience of feeling, in particular of having purposes related to desires
So, there is an appropriate rationale that can explain why conscious intelligent beings can generate complex functional information, and non-conscious systems cannot.
More in next post.
gpuccio
February 12, 2018 at 04:32 PM PDT
@GP Is there a mathematical model of intelligent agency? CSI is a log likelihood ratio, comparing two different theories, chance and ID. If it is positive, then ID is the more likely explanation. To say this, however, we are saying that the probability of the artifact given ID is greater than its probability given the chance hypothesis. But without any definition as to what ID makes likely, it provides no insight. Anytime a hypothesis says X is unlikely, we can posit some nebulous alternative hypothesis that makes X likely, and then say the alternative hypothesis is the better hypothesis. We are only labelling our ignorance in this case.

For example, say our initial hypothesis was the theory of aether (AT), the idea that there is a substrate that all particles travel through. Our experiments confirm two contradictory results: that aether is stationary and that aether is dragged. Since this is a contradiction, the conditional probability of the observation given aether theory is zero. To address this, we cannot just posit an alternate theory X that gives a conditional probability of 1 and say the problem is solved. We have to flesh out what X is, hopefully in a precise mathematical manner. Now, there may be a great body of philosophical work regarding X, which makes X fit into our worldview well, but that is not quite the same thing as a scientific theory of X. AT is clearly wrong, as is unguided Darwinism, but I do not see ID as having the same kind of worked-out theory as, say, the theory of relativity.

And ID seems to imply some weird things. For example, if humans are intelligent agents, as you and KF claim, then this means that they create CSI. If so, then they must have the ability to disrupt the fundamental laws of physics, otherwise the probability of an event occurring given the chance hypothesis would not differ from its specification. Furthermore, if intelligent agents can violate the No Free Lunch theorem, then they have a weird ability to improve their ability to find a target in a search space ex nihilo. I have not figured out how to make sense of this, but it would be groundbreaking in the field of computer science. Finally, if intelligent agents can select an orderly target more frequently than expected, then it appears they have the ability to reduce the net entropy of a system, which looks like creating energy out of nothing. This sounds very useful, and pretty scientific.

So if humans are intelligent agents, and are the evidence we can use to infer intelligent agency in biological history, it would appear that humans have some very counter-intuitive abilities, which have testable ramifications. But surely these capabilities must be demonstrated in order to make the theory of intelligent agency scientifically solid. Otherwise, if we have not demonstrated that humans are actually intelligent agents in a scientific manner, how can we then use them in an abductive reasoning argument to say biological organisms were also created by intelligent agency? If humans work entirely according to the laws of physics, and operationally can be described by a Turing machine, then so must any supposed designer that we infer. In which case, the designer's output would be entirely predictable from the preconditions, and the designer would never generate positive CSI, and thus would not be a designer according to ID theory.
EricMH
February 12, 2018 at 09:00 AM PDT
EMH, GP has already given you several correctives. I add that science is premised on the responsible, rational freedom of scientists to do scientific investigation leading to conclusions that in material part are empirically grounded, tested, and warranted as credibly true and/or reliable. Yes, scientific knowledge is weak-form. If scientists allow dominant ideologies and domineering ideologues to corrupt the credibility of such insights and associated moral government, they undermine science. Science, epistemologically, is not and cannot be autonomous or isolated from its context.

As for your onward claims about intelligent agency, the first premise is that such agents are a massively evident empirical fact of observation, and artifacts of such known intelligent agents form a context with a trillions-member observational base. Again, denial of abundantly evident empirical facts is both unphilosophic and unscientific. These known artifacts demonstrate patterns that exhibit, in many cases, empirically reliable and well-tested signs of their causal origin being materially shaped by intelligently directed configuration. Just for an example, archaeologists, anthropologists [in exploring ancient human-associated evidence] and forensic scientists routinely distinguish what the former term "archaeology" from what is "natural." Stonehenge is a classic case in point. Further to that baseline, we can examine the search challenge in configuration/search spaces of possibilities and see a reason why something like FSCO/I is an empirically reliable sign of design, as is explored and drawn out in the OP above.

So, no, the re-assignment of the design inference outside of Science [properly understood] fails. The ideological context that suggests such a re-assignment yet again shows how counter-productive and irrational it is. If science does not target empirically grounded, reliable findings that support truth-seeking about our world, it fails. And demarcation arguments, over the past several decades, have consistently failed. A reasonable conclusion of such studies is, in a nutshell, that once there is a scientific study of origins (including of man), then imposition of a priori materialistic criteria on determining what is/is not science cannot be justified. The implied scientism, that science delimits and dominates first-rate knowledge, also fails, as scientific warrant is weak-form. Indeed, such claims are to be understood i/l/o the pessimistic induction on the fate of scientific understanding across time; scientific knowledge (especially theories and linked paradigms or research programmes) does not amount to knowledge claims warranted to even moral certainty. This specifically holds for deep-time theories of origins [a reality that is in itself inherently unobservable . . . we were not there], which are too often presented in the guise of indisputable fact. It also holds for theories of current observations and operations of the world that are capable of direct empirical testing through experiments and observational studies. For the former, we need to more diligently apply the Newtonian vera causa principle, allowing only such causal explanations of traces of what we cannot directly observe as show themselves reliably capable of the like effects.

There are many other claims to knowledge that are warranted to a higher degree than such a weak form. For instance, to moral certainty [such that one would be irresponsible to act as though what is so warranted were false], and to undeniable certainty on pain of immediate reduction to patent absurdity. Tied to this, the very claim to dominate knowledge, so that once big-S Science comes a knocking, everything else must yield the floor, also fails. Indeed, the claim suggested by Lewontin et al, that science is the only begetter of truth, is an epistemological claim. As such it is a philosophical claim that undercuts philosophical knowledge claims. It refutes itself. Which does not prevent it from being implicitly believed and used ideologically. Coherence in thought is a hard-won prize. So, no, again. The design inference is clearly a scientific investigation, by responsible criteria of what science is, studies and warrants. It is also capable of a high degree of warrant. Ideologically tainted exclusion or dismissal will be found to be unwarranted and, in the end, incoherent. KF
kairosfocus
February 11, 2018 at 01:03 AM PDT
@KF, yes, there are good philosophical reasons to believe intelligent agency is real. But the problem is in the science realm. There is no mathematical model of what intelligent agency looks like. On philosophical grounds we can argue intelligent agency exists, and God exists, and so on. However, that does not make intelligent agency a scientific hypothesis. It is like saying qualia exist, but there is no scientific test for "redness". It exists on a separate plane of reality as far as science is concerned. And without a scientific theory of intelligent agency, intelligent design is not firmly in the scientific realm, either. It is a scientifically testable phenomenon with a philosophical explanation. Which is a significant conclusion, but it does support the skeptic position that ID is not totally a scientific theory.
EricMH
February 10, 2018 at 05:42 PM PDT
EricMH: "I think something further is required, namely the scientific evidence for intelligent agency, not merely intelligent design." I don't agree. The only thing required is that scientists gradually renounce to the heavy ideological bias that has characterized scientific thought in the last decades. The only thing required is that science gies back to its only true purpose: objectively pursuing truth. Moreover, evidence for design is evidence for intelligent agency. "Regarding the selection for survival can only select for survival, the problem here is that “survival” is not a fixed selection function, but will change with the population. So this provides an avenue for complexity to be added to selection." No. Absolutely not. For example, ATP synthase certainly contributes to survival, once it exists in the right cellular context. But that simple fact will never generate the complexity of ATP synthase, because it is definitely beyond any probabilistic resource of the universe. RV cannot get it, and natural selection cannot do anything until the whole machine exists. In the same way, a generic selection for speed will never generate an engine from horse drawn carriages. It will only select the fastest horse drawn carriage. Random variation on horse drawn carriages could optimize horse drawn carriages, but will never generate a car with a petrol engine. About the extreme limits of random variation and natural selection in generating functional information, see also my recent OPs: https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ https://uncommondescent.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/gpuccio
February 10, 2018 at 01:15 AM PDT
EMH, actually, no. We already are morally governed and responsibly and rationally free intelligent, self-moved agents; just to stand on the basis that genuine discussion, reasoning, knowledge and more are so. Where consciousness is our first fact and the fact through which we access all others. Likewise, it is readily seen that mere computation on a substrate per signal processing units -- cf. current tracking of memristors and linked AI themes -- is not at all the same as rational, self-moved contemplation. To deny these is rapidly self-referentially absurd. This already means that our self-aware, self-moved consciousness is well beyond what a blind chance + necessity world accounts for. Occam was about simplicity constrained by realities to be accounted for. In this case, we must needs account for the IS-OUGHT gap. That can only be adequately answered at world-root level, and requires an IS that bears simultaneously the force of ought; putting ethical theism on the table as the serious option to beat. Going on, the multiverse is a surrender of the empirical observability and testability criterion of science. KF

PS: Survivability can shift with circumstances, i.e. islands of function change in time and may even move like barrier islands (glorified sandbars). That is not an insurmountable problem; ecosystems change with climate, invasives and more.
kairosfocus
February 10, 2018 at 12:06 AM PDT
GP, I appreciate your correspondence. Yes, the presumption of the multiverse is not science; it is a philosophical commitment to naturalism. But it could still be justified on an Occam's razor basis. Inserting God introduces a new kind of causal agency that is not present within naturalism. So even though scientifically testing for a multiverse is implausible, it would still meet the parsimony requirement for a theory. I think something further is required, namely the scientific evidence for intelligent agency, not merely intelligent design. Regarding the point that selection for survival can only select for survival: the problem here is that "survival" is not a fixed selection function, but will change with the population. So this provides an avenue for complexity to be added to selection.
EricMH
February 9, 2018 at 05:11 PM PDT
EricMH: A random process can generate functional complexity, but only in extremely simple forms. The point is that complex functional information derives only from design. You say: "Now, meeting a functional requirement could happen by trial and error, so I wouldn't say that is in itself a sign of intelligence." Not a complex functional requirement. That is the point. You say: "For example, we could have a random process plus a selector; e.g. survival." A selector for survival can only optimize survival. That is the information already present in the selector. A new complex protein that can increase survival cannot be selected until it is present. The functional complexity of the protein is beyond the probabilistic power of the random search. Therefore, only intelligent design can get there. You say: "KF's point comes in here: there is no connection between the functional islands, so the random process must hit a needle in a haystack." Of course there is no connection. I repeat here my famous challenge, which nobody has ever tried to answer:
Will anyone on the other side answer the following two simple questions?
1) Is there any conceptual reason why we should believe that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?
2) Is there any evidence from facts that supports the hypothesis that complex protein functions can be deconstructed into simpler, naturally selectable steps? That such a ladder exists, in general, or even in specific cases?
Regarding the "skeptic response": the multiverse, used in this way, is not science. Not at all.gpuccio
February 9, 2018 at 03:01 PM PDT
GP: I see, intelligent agency is also defined by an ability to increase Kolmogorov complexity. However, a random process can also do this. Now, meeting a functional requirement could happen by trial and error, so I wouldn't say that is in itself a sign of intelligence. I would say each of these criteria is perhaps necessary to infer intelligence, but not sufficient. For example, we could have a random process plus a selector, e.g. survival. The random process ensures Kolmogorov complexity increases, and the selector ensures functional requirements are met. This also would not be intelligent, but it would meet the criteria listed. KF's point comes in here: there is no connection between the functional islands, so the random process must hit a needle in a haystack. A skeptic response to this is: all the argument shows is that there must be some source running the requisite number of trials in order to hit the functional islands. If our universe contains insufficient trials, then the source is outside our universe, such as in another part of the multiverse. Since the multiverse hypothesis stays within the confines of naturalism, it is the preferable explanation, since it does the same job as positing an external designer.
EricMH
February 9, 2018 at 01:01 PM PDT
EricMH: As I said, regularities can be the result of natural laws, if the system can generate them. See my example at #31: a natural system can certainly exist where one of two (or more) events is strongly favoured. That does not require design. That's why in Dembski's explanatory system we have to exclude a necessity origin for regularities. If no necessity law in the system can explain a regularity, then design is the best inference, because order is anyway a completely unlikely outcome, if complex enough in terms of bits.

However, I would still remind you that functional information is not order, and that it is not regularity. In functional information, the specification is connected to what we can do with the object in its specific configuration. The bits are harnessed towards a purpose, not towards some abstract concept of order. The form of functional information is often pseudo-random: in many cases, we cannot distinguish a functional sequence from a random one, unless we know how to test the function. Let's take the example of software: a program may appear as a random sequence of 0s and 1s (even if some regularities, of course, can be present). But if the sequence is correct, it works. That's why programs do not arise spontaneously. Neither does language. Nor proteins. These are all forms of functional information, highly contingent and highly connected to a function.

Robots, programs and machines are forms of frozen functional information. They can do a lot, but only to the extent of what they have been programmed to do. They can increase the computable complexity of specified outcomes (like the figures of pi), but not the Kolmogorov complexity of the system itself. But they cannot generate new original functional complexity, because, for example, they cannot generate new original specifications. A machine cannot recognize a potential new function, unless it was potentially defined in its original program. Even machines which can incorporate new information from the environment, or from their own working (like neural networks or AlphaGo), have the same basic limitation. In all cases, machines can only do what they have been programmed to do, either directly or indirectly. Conscious intelligent beings are different. They do things because they have conscious experiences and desires. That makes all the difference. The conscious experiences of understanding meaning, and of desiring outcomes, are the only true source of new original complex functional information.
gpuccio
February 8, 2018 at 04:47 PM PDT
GP, I agree. Would this type of reasoning apply to any regularity we find in nature, including sequence #2?
EricMH
February 8, 2018 at 02:11 PM PDT
EricMH: A robot is programmed by intelligent persons to implement some specific task. It is designed. So a design inference is still necessary. A robot which can compute pi and write the figures on a rock wall is much more difficult to explain than the figures themselves. In the end, this is probably what the law of conservation of information means, without any need to understand the mathematical complexities in detail! :)
gpuccio
February 8, 2018 at 10:56 AM PDT
GP, a robot could have made the marks, and a robot is not intelligent.
EricMH
February 8, 2018 at 10:46 AM PDT
EricMH: An example I have often used is the first 10000 decimal figures of pi. In particular, let's imagine that we arrive at a faraway planet. No traces of inhabitants. We reach a big rock wall, where some strange marks can be seen. Although they are not completely regular, they can easily be read as a binary code of two different marks. Someone also notices that, if read in a specific way, they really represent the first 10000 decimal figures of pi, in binary code, without any error or ambiguity. Now, the marks could easily be explained as natural results of weather or other natural agents. But what about their configuration? I think we can definitely infer design. The important point is that we know absolutely nothing of the possible designer or designers, except that they are designers (conscious intelligent purposeful agents), and maybe that they can compute pi.
gpuccio
February 8, 2018 at 10:29 AM PDT
KF hah! Don't I know it :) The binary progression example is good, though one could say that it is the result of a mechanical process, too. Every finite string, for that matter, can be the result of a mechanical process, as can many infinite strings.
EricMH
February 8, 2018 at 08:33 AM PDT
EMH, yes, though in fact every argument will be flooded with objections. GP and I are just pointing out the difference between order and organisation. Perhaps, if you used something like: T-TH-HH-HTT-HTH-HHT-HHH-HTTT . . . etc., we may show organisation, once we can see that this is counting up in binary. The point is that this is NOT mere order that can be mechanically forced, but fits an independent pattern that is indicative of intelligently directed configuration. Onward, we could do, say, the first 100 primes in binary, or the like. And after that, ASCII code compared with DNA code. KF
kairosfocus
February 8, 2018 at 07:05 AM PDT
GP yes, the example doesn't explain the intricacies of accounting for all chance and necessity hypotheses. I just use it to illustrate the basic intuition behind the design inference without a lot of math. Even if we only get the audience to the point of realizing that orderliness cannot come from pure randomness, that there must be some original source of order, I think that's a big step forward, because that original source also cannot have come from randomness, and we are left with needing to account for the primal order. An important thing the ID camp must keep in mind is the need for accurate, yet easy-to-understand, illustrations of key principles. Any source of ambiguity opens the floodgates for misunderstanding.
EricMH
February 8, 2018 at 06:22 AM PDT
GP, that is why we look for high contingency and functionally coherent organisation. KF
kairosfocus
February 7, 2018 at 08:26 AM PDT
EricMH and KF: Great points. I would like to add that in functional information the evidence for design is even stronger than in simple "order". Indeed, while order is certainly a valid independent specification, order can still arise from law, when specific conditions are present in the system. For example, while sequence 1 at #28 has formal properties compatible with a random sequence generated by coin tossing, sequence 2 can be a designed sequence, but it can also be a sequence generated by tossing a coin which is strongly biased towards heads. Even if we remain in a random system (so that both results are possible), if the probability of heads is, say, 0.99, the probability of having 40 heads in a row is not at all negligible (about 0.67). Instead, when we have functional information, so that a specific configuration is defined by an objectively defined function, as in a functional protein or in software, no natural laws in the system can be invoked, because natural laws do not work according to any understanding of other natural laws. For example, random mutations certainly happen according to the natural laws of biochemistry regarding DNA replication and the biochemistry of nucleotides, but those natural laws certainly do not understand in any way, and are not related in any way to, the natural laws of biochemistry which govern the function of a protein whose sequence is generated from the nucleotide sequence according to a symbolic code. So, no explanation based on natural laws and necessity can be invoked in the case of complex functional information.
gpuccio
February 7, 2018 at 05:41 AM PDT
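
[An editorial aside: gpuccio's biased-coin figure checks out; a two-line sketch for the record:

# 40 straight heads under a heavily biased coin vs a fair coin:
print(0.99 ** 40)  # ~0.669: unsurprising, so a law/bias explanation is live
print(0.50 ** 40)  # ~9.1e-13: the same outcome is negligible if fair

Which is why, per the explanatory filter discussed in this thread, a necessity/bias explanation has to be ruled out before design is inferred from mere order.]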
EMH:
according to probability theory, both sequences are equally likely with a 50/50 coin toss. So, probability is not the reason. Then they will say, it is the pattern exhibited by #2 that is more improbable, not the sequence itself. Aha! That is correct. There is only one sequence of all heads, while there are many sequences that are a mixture.
This is of course a simple form of the question of independent specification that allows objective, and often macro-observable, clustering of configuration-space states. We can then assign relative statistical weights to such clusters. In systems that are not otherwise constrained, and which random-walk/hop among configurations, there is a spontaneous trend towards, and to remain in, the predominant cluster. This is where we see the grounding of thermodynamic equilibrium, fluctuations and more, with the second law of thermodynamics being closely related. In the coins case, the predominant group will be near 50-50 H-T, in no particular recognisable, simply describable pattern. That is, it resists Kolmogorov-style compression and exhibits high randomness, in the sense that the state would basically have to be quoted. At the same time, there will be low configuration-driven functionality. That is, it is not a message or a coding of an algorithm or the like. D/RNA, of course, is complex and is coded, with significant sections expressing algorithms. KF
kairosfocus
February 7, 2018 at 03:36 AM PDT
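
[An editorial aside: KF's statistical-weight claim for the coin case can be made concrete with a quick count over 40 tosses (an illustrative sketch):

from math import comb

N = 40
total = 2 ** N  # all equiprobable sequences of 40 fair tosses

near_even = sum(comb(N, k) for k in range(15, 26)) / total
print(near_even)            # ~0.92: the 15-25 heads cluster dominates
print(comb(N, 20) / total)  # ~0.125: exactly 20 heads, the single peak
print(1 / total)            # ~9.1e-13: the all-heads microstate

A random walk over these configurations therefore spends nearly all its time in the near-50-50, incompressible cluster, which is the fluctuation/second-law connection KF draws.]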
Note: The above extends a prior discussion which provides further context and details: https://uncommondescent.com/informatics/ai-state-configuration-space-search-and-the-id-search-challenge/ KF
kairosfocus
February 6, 2018 at 09:22 PM PDT
The coin flip example I like to use is the following. I have two sequences of flips. One I generated randomly and one I created myself. Can you tell which is which?

1. TTHHTTTTTHHHHTTTTHHTHTTTHTHHTTTTTTHHHTTT
2. HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH

An astute observer will say sequence #2 is the one I created myself. But why? They will say it is more improbable. I point out that, according to probability theory, both sequences are equally likely with a 50/50 coin toss. So, probability is not the reason. Then they will say it is the pattern exhibited by #2 that is more improbable, not the sequence itself. Aha! That is correct. There is only one sequence of all heads, while there are many sequences that are a mixture. And this is precisely the inference to design.
EricMH
February 6, 2018 at 08:32 PM PDT
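
[An editorial aside: EricMH's pattern-versus-sequence distinction can be put in numbers; a hedged sketch, reading "a mixture" as any sequence containing both heads and tails:

from math import comb

N = 40
print(1)         # sequences matching the "all heads" pattern
print(2**N - 2)  # ~1.1e12 sequences mixing heads and tails
print(comb(N, 20))  # ~1.4e11 of those have an exact 20-20 split

Each individual sequence has probability 2^-40, but the all-heads pattern is carried by a single sequence while mixtures are carried by about a trillion; that count asymmetry is what the astute observer is tracking.]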
Coin flip was a good illustration, KF. I'm just trying to figure out what Bob is thinking.
tribune7
February 6, 2018 at 12:51 PM PDT
@KF, yes, that is a good explanation of why even granting the big If of correlation still leaves an extraordinary amount to be explained by blind processes. Since the proponents of blind evolution are very circumspect when it comes to clearly stating what their assumptions and logic are, I've been trying to infer what their underlying rationale is. Here is the best I've been able to suss out.

1. Begin with methodological naturalism, because otherwise we would mistake anomalies, which could lead to further scientific discoveries, for God's inscrutable action in our world.
2. Second, we have evidence that organisms long ago were simple, genetically speaking, and now they are complex.
3. Methodological naturalism requires us to consider only blind processes (nothing else makes sense).
4. Therefore, a blind process is responsible for the progression from ancient simple organisms to modern complex organisms.

The ID camp, on the other hand, argues that for 1-4 to be true, there must be a vast number of trials for the occurrence of 4 to be expected. The MN camp responds in one of two ways:

A. We have no idea what the prior for our universe is. We can assume maximum entropy and a uniform distribution, which ID does. However, at the end of the day, we are ignorant of the initial probability, and it doesn't really matter, practically speaking.
B. We ascribe something special to the complexity we see, but that is merely anthropomorphism. There is not an objective specification. Any specification the ID camp refers to is arbitrary, and thus the calculations of enormous CSI are selection bias.

Finally, the ID camp responds:

A. Maximum entropy is justified on the very same grounds used to justify MN. If any initial-condition probabilities are acceptable, then this implies we cannot infer anything from what we observe, a la Hume. There are an infinite number of models that fit the evidence, and all models are acceptable.
B. This is an equivocation between the instance and the pattern the instance exhibits. While all instances may have the same a priori probability, not all patterns have the same probability. There are objective patterns that vary in likelihood, e.g. the binomial distribution, Kolmogorov complexity. A string of 100 coin flips all resulting in heads makes a non-random hypothesis much more likely than a random hypothesis. Hence, Dembski's explanatory filter and the CSI calculation.

But the human mind usually cannot follow this many levels of argument, and the MN camp resets to the 1-4 argument, restarting the debate.
EricMH
February 6, 2018 at 12:39 PM PDT
Trib, I think coin flip observations give a fairly concrete and relevant case. I suspect the context of a configuration or state space, or worse a phase space, is unfamiliar. Yes, a system on the scale of our cosmos will only implement a tiny fraction of the possibilities, but that does not mean that the ones not seen were impossible. And the issue of clustering on utterly dominant gibberish vs deeply isolated islands of function also seems problematic for some. Wait till I start to speak of moving islands, like sand-bar islands. KF
kairosfocus
February 6, 2018 at 12:19 PM PDT
--The search space is a part of the natural world, so it's just there.--

I'm not sure what you mean by this. Can you elaborate?

--Indeed. So why are Dembski & Marks averaging over (impossibilities), with a non-zero probability?--

Can you be specific as to where they are doing that?
tribune7
February 6, 2018 at 11:49 AM PDT
