Uncommon Descent Serving The Intelligent Design Community

Brief excerpt from Bill Dembski’s new book, Being as Communion: What is intelligent design?


William A. Dembski

William Dembski: Being as Communion: a Metaphysics of Information will be published later this year by Ashgate Publishing (UK):

Intelligent design is the study of patterns (hence “design”) in nature that give empirical evidence of resulting from real teleology (hence “intelligent”). In this definition, real teleology is not reducible to purely material processes. At the same time, in this definition, real teleology is not simply presupposed as a consequence of prior metaphysical commitments. Intelligent design asks teleology to prove itself scientifically. In the context of biology, intelligent design looks for patterns in biological systems that confirm real teleology. The definition of intelligent design given here is in fact how its proponents understand the term. This definition avoids two common linguistic pitfalls associated with it: intelligent design’s critics tend to assume that the reference to “design” in “intelligent design” commits it to an external-design view of teleology; moreover, they tend to assume that the reference to “intelligent” in “intelligent design” makes any such external design the product of a conscious personal intelligent agent. Both assumptions are false.

Granted, intelligent design is compatible with external design imposed by a conscious personal intelligent agent. But it is not limited to this understanding of teleology in nature. In fact, it is open to whatever form teleology in nature may take provided that the teleology is real. The principle of charity in interpretation demands that, so long as speakers are not simply making up meanings as they go along, terms are to be interpreted in line with speakers’ intent and recognized linguistic usage. The definition of intelligent design just given, which explicitly cites real teleology and does not restrict itself to external design, is consistent with recognized meanings of both words that make up the term intelligent design. Design includes among its recognized meanings pattern, arrangement, or form, and thus can be a synonym for information. Moreover, intelligence can be a general term for denoting causes that have teleological effects. Intelligence therefore need not merely refer to conscious personal intelligent agents like us, but can also refer to teleology quite generally.

(Quoted with permission from the final pages.)


Comments
Elaborating on my question, nightlight: given that the tinker toy computer is mechanistically following the instructions of a program, and accepting for the sake of argument your conjecture or belief that the matter in this machine has consciousness, it would seem that the consciousness would have no effect on the operation of the tinker toy machine unless some element of chaotic operation equivalent to "free will" were introduced. I understand that your belief is that the conscious connectivity of matter in the tinker toy computer (along with everything else in the universe) is interacting at a different level than its gross physics . . . but what is volition without the potential to act? Would you expect that conscious "observation" (connectivity) by a massive inanimate object in a quantum mechanics environment would affect the outcome of a double-slit experiment or the Quantum Zeno effect? Or is the only observation that counts human? Would Schroedinger's cat desperately observing a radioactive isotope be able to keep itself alive? ;-) -Q

Querius
January 20, 2014 at 12:53 PM PDT
nightlight, I asked you a simple, direct question and you replied with a simple, direct answer. Thank you. From what you described, it sounds like your philosophical speculation is that either a massive object (i.e. a large star) or a more computationally active object has more consciousness than its counterparts, as a matter of degree. It seems to me that consciousness is associated with the ability to (a) sense, (b) comprehend, (c) communicate, and (d) decide. So, let's imagine a tinker-toy computer the size of the sun. Let's imagine that it senses some of its components being pushed, pulled, and rotated. Assuming that this tinker-toy computer has consciousness, what can it do? Specifically, do you think that its consciousness can affect or interact with its programming? Just asking. -Q

Querius
January 19, 2014 at 02:30 PM PDT
NetResearchMan: When I was younger, I had materialistic beliefs like you, that people were computers, and we would eventually reach a singularity where computers would become intelligent. But given the failures of AI research over the years, I have changed my mind, and now think sentient life is different and special.
Thank you for your excellent writings on this topic. Central to your arguments is "overview". Software and computers lack it; people have it. Overview is mandatory in order to set goals, to organize, and to solve problems.
NetResearchMan: We humans intuitively understand organization, and given a set of things can choose a plan for how to organize them, based on our goals for how we expect them to be used in the future. But a computer can’t.
Humans demand overview. Code which is incomprehensible is "bad code" which cannot be modified.
NetResearchMan: If you look at code generated by genetic algorithms, it’s terrible and wholly nonintuitive.
A year ago I argued that Nightlight's Planck-scale networks lack overview, but that didn't ring a bell.

Box
January 17, 2014 at 04:13 PM PDT
nightlight @42: Sorry, but you don't understand software. (And yes, I have some experience with compilers and have also done programming directly in hexadecimal machine code, back in the day, so I know how painful it can be for a human to do.) A compiler is a tool and it can be quite sophisticated, but it doesn't "write" code in any meaningful sense. Think of it this way: why do companies still spend billions of dollars on humans to write software if a compiler could do it? And no, I'm not just being cute. Seriously think about it. What does a compiler do, and what are its limitations? Beyond the limited cleanup routines (that in turn were written by another human programmer), what innovative and creative results does the compiler bring to the table? Anyway, enough on that. NetResearchMan has covered it pretty well . . .

Eric Anderson
January 17, 2014 at 02:06 PM PDT
Nightlight: I want to clarify that when I say you "can't understand complex software", I don't mean to say that you're dumb, or that I'm smarter than you. I mean to say that you lack the wisdom to understand it, and wisdom comes from experience. You can try to explain wisdom to people, but it's very difficult, because the type of knowledge that falls under the "wisdom" category is often disbelieved by those who have not fought hard and suffered to gain it.

NetResearchMan
January 17, 2014 at 11:27 AM PDT
NetResearchMan, that was a good observation and summary of nightlight's views. I read his posts from time to time; while they are no doubt intellectually stimulating, it seems he doesn't understand what CSI actually is. It is well known that simple rules can create complex and repeating patterns, but nowhere is there evidence that they can create CSI, or better, fCSI. His position, therefore, is no better than a Darwinist's, even if he feels that position itself is flawed.

computerist
January 17, 2014 at 10:56 AM PDT
Nightlight: Arguing with you is impossible because you support your position with an endless stream of your own set of metaphysical beliefs, which you wrap in the cloak of science. For example: "brain is also a computing process". The brain can *do* computations, but prove to me scientifically that a human is computer. Your proof must include an explanation for consciousness. Consciousness is one of the core reasons why I believe in dualism, and that the mind is not reducible to material processes. I can't see how a computer, no matter how complicated you make it, could make the switch to actually having consciousness. My reasoning is that consciousness is a binary state -- either something has it or it doesn't (and don't start talking about people in comas -- I'm talking about consciousness as an abstract state of self awareness, not a medical term). In order to have a conscious software program, you'd have to start from a slightly less complicated software program that didn't have consciousness, and adding one line of code would then make it into something conscious. I can't imagine what one line of code could be added to any software program that would cause it to cross that boundary. If you can tell me, then I'll start to believe the mind is just a software program. "determinism is a strawman because of quantum effects" (mild paraphrase). Provide a proof that adding randomness to a system can generate specified complexity (given finite resources in the universe). Obviously if I'm a proponent of ID, this is not a persuasive argument for me, but I'm not going to re-summarize the whole of ID theory in this post. "computational theory ... has freedom to use arbitrary algorithms and initial organizations of the data." This is purely a metaphysical belief. Your proof of this is circular based on an a priori belief in computational theory. "Computational theory is true -> the universe exists -> therefore the initial conditions for computational theory exist". Computational theory sounds like a theory with precisely zero explanatory power, because it can just say anything that happened was a result of environmental conditions, and it doesn't consider where those environmental conditions come from. That was the whole point of my original criticism of your point of view -- that "initial conditions did it" is your answer to everything. If that view works for you, fine, but don't pretend to call it science, it's philosophy. I guess you have a fallback metaphysical belief that a simple set of initial rules can generate a massive amount of CSI. Again, provide proof of existence of a simple rule set that generates CSI with finite probabilistic resources (and digits of PI or Game of Life are not CSI). I predict with 100% certainty your argument will involve pointing out something that's not CSI, or where CSI came from an external source, indicating that you do not, and probably never will, know the difference. The whole reason I went through my discussion of the challenges of software development (which by the way is far from exhaustive -- I could fill up multiple books on the topic given my experience) is that I feel your argument about programs writing programs reflects a radical underestimation of the difference between deterministic computation and what true intelligent agents do. I thought about going down the route of having you try to explain how a computer program could write poetry, compose music, understand humor, perceive beauty, experience love, etc. 
But I thought that by going with an example that's more "logical" (software development), I could win the argument by pointing out that deterministic computers are bad at abstract thinking in general, and by showing how abstract thought is an integral component even of things you might think a computer could be good at. But you've probably never worked on a software project with millions of lines of code, so it's understandable that you can't understand complex software. It's ironic, given that your worldview is that everything is software. When I was younger, I had materialistic beliefs like yours: that people were computers, and that we would eventually reach a singularity where computers would become intelligent. But given the failures of AI research over the years, I have changed my mind, and now think sentient life is different and special. Most progress in AI is based on intelligent encoding of human knowledge or brute force -- while these things are impressive on the surface as parlor tricks (and definitely very useful at enhancing human life), we are light years away from a machine that can pass the generalized Turing test. LIGHT YEARS. Moore's law is slowing down, and at the rate we are going now, we will hit near atom-scale transistors and still not be even close to simulating the behavior of a human mind. We'll never even be able to simulate the full workings of a single living cell on conventional computers... You'll probably say "quantum computers will solve everything". That's a whole separate issue which I have opinions on, but anyway, I have to bid adieu...

NetResearchMan
January 17, 2014 at 10:34 AM PDT
NetResearchMan, your claim about the halting problem and human minds is wrong,
I suggest you research the “halting problem” to understand why. The halting problem is a formal mathematical proof that “no algorithm can exist to prove a program works”. Not just that we haven’t yet found the algorithm, but the algorithm can’t exist period. Yet human designers with minds can construct solutions to the halting problem (for simple enough cases), which proves irrefutably the intelligent mind is capable of things no mechanistic algorithm can do.
...and how do we do that? We step through the program and see if it works! You glance down the code and mentally run test cases against it -- in effect, mental step-by-step debugging, which is exactly what a computer would do. Trial and error. The halting problem applies to humans too.

Lincoln Phipps
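As a side note on what "stepping through the program and seeing if it works" amounts to in practice: it is bounded simulation, which tells you something only when the program does halt within the budget. A minimal sketch in Python (the function names and step budget are illustrative, not from any real verification tool):

def runs_to_completion(program, inputs, max_steps=1_000_000):
    """Return True if program(inputs) halts within max_steps simulated steps,
    False if the step budget runs out (i.e. "don't know")."""
    machine = iter(program(inputs))   # model the program as a generator of steps
    for _ in range(max_steps):
        try:
            next(machine)             # execute one step
        except StopIteration:
            return True               # the program halted
    return False                      # gave up -- NOT a proof of non-halting

def counts_to(n):                     # halts quickly
    i = 0
    while i < n:
        i += 1
        yield i

def loops_forever(_):                 # never halts
    while True:
        yield None

print(runs_to_completion(counts_to, 10))        # True
print(runs_to_completion(loops_forever, None))  # False: budget exhausted, undecided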
January 16, 2014 at 08:11 PM PDT
@Querius #45
Do you believe that a tinkertoy computer of sufficient size would exhibit independent intelligence and will, a sort of computational Pinocchio?
We're now getting into philosophical speculation. On that question my position is panpsychism, i.e. that mind-stuff is a fundamental property of the elemental building blocks of the universe. Hence, in this perspective even atoms or electrons have conscious experience. As to what kind of experience, since that's a question that comes up often, I had already answered it in a couple of earlier posts (post1 and post2) which follow up and build upon that basic idea, examining how it works and how the conscious experience changes when you move from higher to lower systems in the hierarchy, as well as in the transition from a living to a non-living system (i.e. upon the death of a living system, such as a human).

nightlight
January 16, 2014 at 08:00 PM PDT
@NetResearchMan #27
You clearly don't understand why a program writing a program (given software as we know it) would be impossible for any non trivial sized program. Computers parallel the general set of materialistic processes in that they have two mechanisms they are good at, and a third they are incapable of. Algorithms are equivalent to natural laws and brute force is equivalent to chance.
The entire line of argument as to whether the present human-written software running on present digital computers is equivalent in capabilities to the human brain, which is a different kind of computer, is a strawman unrelated to the computational perspective. Namely, the point is that in this perspective the operation of the brain is also a computing process (such adaptable networks are universal computers, too), as is the operation of cellular biochemical networks. They are simply a different kind of computing technology (distributed, self-programming computers, modeled by abstract neural networks with unsupervised learning) than our present digital computers. Hence, your argument makes no sense in this perspective, since it amounts to arguing about human intellect vs human brain. Note that it is completely irrelevant for this perspective whether the present science and technology can simulate the human brain and its capabilities using present digital computers. We already know that the brain exists, hence a computer with the capabilities of the human brain exists (the brain itself). When/if we will be able to replicate these capabilities with digital computers and software is still an open research question that is unrelated to the computational perspective or the claimed existence of computers with the capabilities of the human brain (the brain itself is such a computer).

The issue of determinism is another irrelevant strawman, since the present fundamental laws of physics (quantum field theory) hold that quantum objects have fundamentally non-deterministic behavior. The issue is irrelevant even without quantum indeterminism, since even for deterministic systems the actual future states depend not only on deterministic laws but also on boundary and initial conditions, which may not be under the control of the experimenter or the system itself (e.g. in Brownian motion or in any open system). For open systems (for which boundary conditions are not controlled), deterministic and indeterministic systems can have indistinguishable behaviors. On top of that irrelevance of the whole argument, you are also making up ad hoc "laws" as to what a deterministic system can or can't compute vs a non-deterministic one. In fact, the deterministic Turing machine is computationally equivalent to the non-deterministic Turing machine, since either can simulate the other. If you have some relevant theorem which backs up your ad hoc assertions about determinism, you are welcome to share it.
The halting problem is a formal mathematical proof that "no algorithm can exist to prove a program works". Not just that we haven't yet found the algorithm, but the algorithm can't exist period. Yet human designers with minds can construct solutions to the halting problem (for simple enough cases), which proves irrefutably the intelligent mind is capable of things no mechanistic algorithm can do
That is a thorough misunderstanding of the halting problem. What it says is that there is no algorithm that can prove that an arbitrary program works (i.e. halts). In other words, no single algorithm can decide the halting question correctly for all input programs given to it for evaluation. But a single algorithm can certainly decide halting for a specific program or some set of programs, e.g. a simple loopless routine, or trivial loops with clear-cut decisions from given variables. Similarly, if a single algorithm A1 fails to decide whether program X halts, there may be another algorithm A2 which can decide halting of X (of course, A2 may fail on some programs that A1 solves). Note also that it doesn't help if one creates a single algorithm A12 which combines A1 and A2, since the required number of such combined subprograms A1, A2, A3,... is not finite (any combination of a finite set of programs A1, A2,... An can be defeated by some program X).

Regarding your "human mind" speculation, the "human mind" is unrelated to the halting problem, i.e. there is no theorem or equation in that entire subject field that includes "human mind" (or human) in any form. You are either making things up or confusing philosophy with mathematics/computer science.

Regarding the halting problem, its only relevance for the computational perspective is that it implies fundamental undecidability of the endpoint of the computations carried out by the universe. In a way that justifies the need of its designer/creator to run the universe, imperfect as it may be, since that is the only way to find out the outcome of the computation (whatever the problem may be that it is trying to solve). This in turn provides a reason for the existence of evil, which in the computational perspective is inefficiency or wastefulness (dead ends, wasted branches, improper meshing or lack of harmonization between subroutines or components, etc.) of the computation by the universe. Hence, the designer could not have designed a perfect universe free of evil (in the above sense), since the only way, even in principle, to arrive at the solution (halting at the perfectly harmonized or evil-free endpoint) is to run it and wait to see what solution it comes up with. Therefore, if the halting problem has any relevance for the discussion, it only highlights the interesting ethical implication of the computational perspective, by shedding light on the origin of evil.
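To make the "restricted class" point concrete, here is a toy sketch, assuming a made-up instruction format (nothing here is a standard tool): for programs whose only control flow is forward jumps, a trivial decider exists, because such programs always halt.

def halts_restricted(program):
    """Return True if the program provably halts under the forward-jump-only
    restriction; raise ValueError if it falls outside the decidable class."""
    for pc, instr in enumerate(program):
        if instr[0] == "JMP" and instr[1] <= pc:
            raise ValueError(f"backward jump at {pc}: outside this decider's class")
    return True   # no backward jumps => control only moves forward => halts

straight_line = [("SET", "x", 1), ("ADD", "x", 2), ("JMP", 3), ("PRINT", "x")]
print(halts_restricted(straight_line))   # True

looping = [("SET", "x", 0), ("ADD", "x", 1), ("JMP", 1)]
# halts_restricted(looping) raises ValueError: this decider refuses to answer.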
Since you clearly can't understand the distinction between computation and design, you ascribe magical powers to computation that don't actually exist.
In the computational perspective, the brain is also a computer, hence that line of argument (pointing to the creativity limitations of the present digital computers) is a strawman.
Let's look at your example of a binary search. The hard part of searching for something is NOT executing the search itself. It's deciding what to search for. ... Your binary search example is terrible for other reasons. You haven't explained why the list was sorted in the first place, which implies planning and forethought (design) to know ahead of time the type of search you want to do.
You entirely missed the point of the binary search example (read more carefully the paragraphs right before and right after that example). That was a response to your invocation of Dembski's free lunch results. The binary search illustrates that the organization of the data and the algorithm searching it do have a dramatic effect on the efficiency of the search (showing exponential speedup in that example). Dembski's limitations apply only if you require that the search space is randomized (unstructured) and/or that the search is random trial and error (even on well-structured data). If his starting assumptions don't hold for the universe and its search algorithms, then his limitations on search efficiency don't apply. Since we actually do know that the universe has computed solutions such as life, in much shorter time than Dembski's guesstimates for the origin of life via random searches, this implies that Dembski's assumptions about the structure of the search space and the search algorithms used by the universe are falsified -- they contradict the empirical fact of the existence of life within the time prohibited by his results. Dembski's assumptions and results are only relevant for refuting the neo-Darwinian theory (random trial and error with natural selection as the explanation of biological evolution), not for a general computational theory of the universe. The latter has freedom to use arbitrary algorithms and initial organizations of the data.

The only open issue for such a computational theory is not whether it can work (it can), but how to minimize the amount of front loading, i.e. how to minimize the degree of initial structure in the data and how to simplify the initial computing elements. Some physicists, such as Stephen Wolfram, who are experimenting with that type of model for pregeometry (the underlying model that yields physical particles, fields and physical time-space), believe that the ultimate solution for the fundamental laws of the universe, which is still unknown, may end up being just a handful of simple rules of operation for a randomly connected network of nodes & links. Some progress toward such a model has already been made. For example, it has been known for decades that key fundamental equations of physics, such as the Schrodinger, Dirac and Maxwell equations, can be replicated via very simple automata. Similarly, a variety of other physical phenomena have been replicated via adaptable networks and cellular automata (a special, limited kind of network; see Wolfram's NKS book). The most interesting problems, though, such as replicating the Standard Model, Einstein's theory of gravity and computing the dozens of physical constants (without having to put them in by hand), are still projects in progress.

nightlight
January 16, 2014 at 07:33 PM PDT
nightlight,
Do you believe that a tinkertoy computer of sufficient size would exhibit independent intelligence and will, a sort of computational Pinocchio?
I'd be interested in your answer. Yes, I noted that you consider the entire universe a computational process, or perhaps more precisely a state transition system. Back to my question, though. -Q

Querius
January 16, 2014 at 04:49 PM PDT
Nightlight: You didn't answer my question. You clearly don't understand why a program writing a program (given software as we know it) would be impossible for any non trivial sized program. Computers parallel the general set of materialistic processes in that they have two mechanisms they are good at, and a third they are incapable of. Algorithms are equivalent to natural laws and brute force is equivalent to chance. The thing that computers can't do is design. Any case you name where a computer appears to emulate design is internally an example of brute force plus algorithms, without exception. The problem is that brute force has limits because computational power is finite. True intelligent agents can bypass these limits in ways we don't understand according to science. Since you clearly can't understand the distinction between computation and design, you ascribe magical powers to computation that don't actually exist. I think this is cognitive dissonance on your part, and no matter how much I were to try to explain it to you, I would fail, but I will try. As I predicted when asking the question, you claim that the problem is one of transition cost and technology, and not a fundamentally impenetrable barrier that computers can't design anything. Let's look at your example of a binary search. The hard part of searching for something is NOT executing the search itself. It's deciding what to search for. This is where your cognitive dissonance comes in. You are impressed by how the computer is so good at searching sorted lists that it doesn't even occur to you that you have utterly failed to consider the more important and difficult question of the purpose of the search. Your binary search example is terrible for other reasons. You haven't explained why the list was sorted in the first place, which implies planning and forethought (design) to know ahead of time the type of search you want to do. If you imagine a list of files, you might sort them by a variety of characteristics (file name, path, size, modified date, extension, etc), and binary search can only be done on the specific trait the elements are sorted by. Another problem is that initially sorting the data takes time that must be factored in (no free lunch). Once again you've regressed the source of the information one level, while still leaving it unexplained. You also need space to store the sorted elements. In the case of searching protein sequence space, the sorted elements would take hundreds of orders of magnitude greater space than the size of the universe. And what would you sort by? Alphabetically by nucleotide sequence? Clearly that's nonsense -- there exists no logical sort function for proteins. I know you were not specifically claiming that binary search is the actual algorithm used for protein evolution, but other types of search methods run into similar problems. Try suggesting another one and I'll pick that apart... Back to programming in general. The fundamental problem of writing software is not one of computation, but one of organization. There's no way you can keep a whole program in your mind at once, you must break it down into layers and design and follow conventions for how those layers interact. Brute force won't solve the problem. I suggest you research the "halting problem" to understand why. The halting problem is a formal mathematical proof that "no algorithm can exist to prove a program works". Not just that we haven't yet found the algorithm, but the algorithm can't exist period. 
Yet human designers with minds can construct solutions to the halting problem (for simple enough cases), which proves irrefutably that the intelligent mind is capable of things no mechanistic algorithm can do. Can you name a universal algorithm that explains how to organize any category of things? It's hard to even define organization. We humans intuitively understand organization, and given a set of things can choose a plan for how to organize them, based on our goals for how we expect them to be used in the future. But a computer can't.

When a programmer writes code, he doesn't just write code to solve the immediate problem. He must also factor in whether there are similar abstract problems that are useful to solve. It's the difference between solving 2+2 versus solving X+Y for any input values. Sometimes solving the general problem is so much harder or less efficient than the specific problem that the specific solution makes more sense. These decisions require abstract thought at many levels. Then there's maintainability. You have to think about how the code might need to change in the future, and how someone else can understand your code to be able to modify it. These decisions often involve making code behave in an intuitive manner, so someone can guess how it works by knowing how other programmers think, without actually having to do an exhaustive analysis (brute force can't do exhaustive analysis, so don't go there). Programmers who write non-intuitive code create terrible problems. If you look at code generated by genetic algorithms, it's terrible and wholly nonintuitive. It works fine because such code is "write once and never modify", but general software is not like that -- it must be modified. Otherwise you'd have to start over from scratch each time. Eventually you would reach a limit where the lack of organization and coherence would prevent the software from making further progress.

My final point is that software is irreducibly complex. Any significant software change requires many coordinated changes, any one of which missing causes the whole program to fail. Changes that fix one immediate problem can break something else in unexpected ways. Halting problem again. No algorithmic solution, no brute force solution. You need intuition and abstract thought to figure out what might break. It makes me wonder if all these "smart people" who think everything is reducible to algorithms have ever written a single non-trivial software program in their lives. There is a reason engineers are more likely to believe in ID than scientists -- we operate in the real world, not fantasy worlds.

#43: the claim that compilers produce better assembly than humans is an outright false statement. I work in assembly language at times as part of my job, and I've yet to see compiler-generated code I would describe as anything close to perfect, although only a minute fraction of code is time-critical enough for that to matter, so we use compilers to save time. More fundamentally, compilers can't apply optimizations that require abstract knowledge about the problem being solved, such as which inputs are more likely to be encountered than others, or other assumptions that can be made about the inputs.

NetResearchMan
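The cost of the up-front sort can be made concrete with a back-of-the-envelope count of comparisons; a rough sketch (order-of-magnitude only, counting comparisons and nothing else):

import math

n = 1_000_000
linear_scan   = n                   # ~ comparisons for one lookup in unsorted data
sort_cost     = n * math.log2(n)    # ~ comparisons for a comparison sort
binary_lookup = math.log2(n)        # ~ comparisons per lookup once sorted

print(f"one unsorted lookup : ~{linear_scan:,.0f}")
print(f"sorting first       : ~{sort_cost:,.0f}")
print(f"each sorted lookup  : ~{binary_lookup:,.0f}")

# Sorting pays off only after k lookups where k*n > n*log2(n) + k*log2(n),
# i.e. roughly k > log2(n) -- about 20 repeated lookups for n = 1,000,000.
k = sort_cost / (n - binary_lookup)
print(f"break-even at ~{k:.0f} repeated lookups")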
January 16, 2014 at 12:00 PM PDT
#42 -- Try compiling some source code into binary code manually, as compilers do, and then re-think your evaluation. That's even without the optimizations they do (which are better than what a human assembly programmer would produce). BTW, the human brain is a computer, too, so this whole line of attack on the computational perspective misses the point completely. The brain is not a digital CPU-style computer like the ones we presently use for our gadgets; it is a distributed, self-programming computer that runs anticipatory algorithms. At present we can simulate (on digital computers) that kind of computer via neural networks, but only at very small scales.

nightlight
January 16, 2014 at 10:27 AM PDT
Seriously, nightlight? A compiler constitutes "writing software"? C'mon.

Eric Anderson
January 16, 2014 at 08:32 AM PDT
#39 typo -- "the old C to C++ translators before native C++ compilers came along" should be: "the old C++ to C translators before native C++ compilers came along"

nightlight
January 16, 2014 at 05:47 AM PDT
@Querius #38 -- It doesn't seem we're using the same semantics for the terms "computation" and "algorithms". See post #39 above, which fleshes out the semantics I am using. In your semantics, a computational process is limited to the present CPU-based devices running conventional human-written programs. In my semantics, the universe itself is a computational process, containing a hierarchy of computing technologies, from physical particles and fields, through cellular biochemical networks, networks of these cells (organisms), humans and human technologies. Each computing technology in the hierarchy is used to build the next, larger-scale computing technology.

nightlight
January 16, 2014 at 05:00 AM PDT
@NetResearchGuy #27
You've gone a long way into metaphysical territory with this belief that there are hidden algorithms underpinning material reality, which gives birth to complexity. You seem to have, in my opinion, irrational exuberance regarding the power of software or algorithms to emulate or even have intelligence.
I almost feel flattered that someone would imagine that I came up with the computational perspective sketched in the earlier posts. Well, thanks, but there is actually a lot more notable history and background to that approach, as described in this post (which also covers in more detail other issues touched on below). There is no real controversy as to whether it is doable or whether it works; the only real issue is human inertia. Humans are creatures of habit and it takes a great deal of time and effort to get them off the well-trodden paths. As always, only a few visionaries, such as Turing, von Neumann, Zuse, Fredkin, Toffoli, Wolfram... see ahead how it will be done eventually, while the rest of the herd clings together down the familiar grooves as long as possible.

In fact our present natural laws modeling how the universe works are already in algorithmic form, just a special case of algorithms, limited to the kinds that the ancients could run on the primitive little computers of the past eras, such as a stick on sand, pen on paper, abacus, table of logarithms, little mechanical calculators, slide rule, etc. For example, when you work out the timing and trajectory of a ball dropped from a tower using pencil and paper, you are running an algorithm, the simple kind that can run on the "computers" Galileo had available: paper and pen powered by a human hand. The entire classical mathematical formalism, and the present natural laws expressed in it, as currently practiced and taught to kids, is already a special case of crude algorithms, those suitable for executing on the slow, forgetful, error-prone computers of past centuries. Even the conceptual framing of what it means to model and explain how the universe works, seen presently as little entities obeying slavishly the laws handed down, somehow, by the creator of the universe, is a kind of feudal anthropomorphism (or an even more ancient parent-child metaphor).

The computational perspective brings in, deliberately and systematically instead of accidentally and occasionally, the most general algorithms as models into the epistemological toolbox of science. The ontological counterpart of these more powerful algorithmic tools is the view of the universe and everything in it, including life, as a computational process. One might also characterize this transition (which, although inevitable, is presently in its early phases) as the updating of the tools and metaphors to the latest technology.
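As a concrete version of the pencil-and-paper point: the free-fall calculation Galileo could do by hand is the same computation whether run on paper or written out explicitly as a few lines of code (the tower height and step size below are illustrative choices):

g = 9.81          # m/s^2, standard gravity
height = 56.0     # m, roughly the Tower of Pisa (illustrative)

# Closed form: h = g*t^2/2  =>  t = sqrt(2h/g)
t_exact = (2 * height / g) ** 0.5

# The very same law stepped forward in small increments, as a crude simulation.
dt, t, h, v = 0.001, 0.0, height, 0.0
while h > 0:
    v += g * dt
    h -= v * dt
    t += dt

print(f"closed form : {t_exact:.2f} s")   # ~3.38 s
print(f"stepped     : {t:.2f} s")         # agrees to within the step size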
Your whole argument in this thread has been based on the premise that deterministic material processes could write software. So tell me why such material processes have not been demonstrated.
In some domains it has been demonstrated, such as compilers (generating machine language from high-level source), interpreters, compiler-compilers, the old C to C++ translators before native C++ compilers came along, etc. Universal computers can, in principle, generate any finite sequence of numbers, which in turn can encode ASCII characters, speech, images, movies, chess moves, android robot moves,... Hence, any finite sequence of actions, of any kind, generated by an animal or human can be generated algorithmically by computers. Of course, how much code is needed, at a minimum, to generate any particular sequence (whatever its semantics) is the algorithmic (Kolmogorov) complexity of the sequence.

As to why it is not done yet for all the tasks we imagine would be useful to have them do -- that is due to the limitations of our inventiveness and the economics of transition (the costs, within a usually short time horizon, of doing it the old way vs. the costs of figuring out and implementing automation). Eventually, everything that is useful will be done on computers (not necessarily on the kinds we have now). But that is a two-way process, with both sides, the tasks (the stuff that is useful to do) and the programs doing them, mutually harmonizing with each other. When some manufacturing is automated, the result is not some kind of human-look-alike robot simulating the motions of the human workers previously doing the "same" task. Instead, it is a much more streamlined and efficient process that doesn't look anything like how it was done before. When companies are automated, there is always change on both ends, the task end and the solution end. The work flows and logistics, operational rules and procedures are changed, rather than literally simulating the old ones. Or when voting got automated, we also changed the old task definition of the voting procedure (checking off our picks on a form with a pen) to touch-screen actions, i.e. we just dropped the paper forms and pen altogether, although before computers it seemed that the paper form was a necessary part of the task, part of its definition. The eventual harmonization toward the more efficient way changed both: it brought in the new tools and redefined the task.

Generally, the present computers and software are good at generating stuff that other computers need to do, such as a compiler that writes a compiler on one computer, which from then on compiles source code on other computers. Looking at the related larger-scale patterns, it may well be that the eventual more advanced level of harmonization of the universe may drop the humans, or the carbon-based computing technology (life), out of its "task" definition altogether (as we dropped the paper forms from the old task definition of the voting procedure) and go with the newer, more efficient computing technologies for the later stages (these need not be anything like the devices we now call computers). A little hint toward such a possibility is expressed succinctly by Fermi's paradox: "Where are they?" Or it may be that humans are a vital component of the future computing technology of the universe, just as physical particles and fields are (they are the computing technology built by the Planck scale networks, as in some pregeometry models).

There is also some room in between the two extremes above. For example, if we consider horses and carriages, even though they were phased out in favor of far more efficient transportation technologies, they still exist in much smaller numbers, mostly as tourist attractions. Hence, the old superseded technologies are often not wiped out completely, but survive in some tiny, highly specialized niches, as shadows of their former glory. I would imagine that for most creatures even just being an exhibit in a zoo beats complete extinction. Unfortunately, there is no shortcut for working out the eventual outcome, since the universe itself is already the most efficient way that the chief programmer of the universe knew how to compute the answer, and that program is still running (thankfully, I guess). At present, humans don't even know what question is being computed by the universe, let alone the answer.

nightlight
January 16, 2014 at 04:33 AM PDT
nightlight @ 3 speculated
For example, a computer running a program that plays chess, is a real teleological process, pursuing objectives in an intelligent manner, yet everything it does is fully reducible (explicable) in terms of physical events in the computer (charge distribution and electric pulses). No intervention in its operation is needed while it is playing a game to help the “material process” choose more intelligent moves (in fact programs nowadays play a lot better and smarter than any human).
Yikes! I don't think you understand the operation of computers and the software that runs on them. The process is completely mechanistic, no different in general principle than a lever. Computers have even been built from tinkertoys: http://www.rci.rutgers.edu/~cfs/472_html/Intro/TinkertoyComputer/TinkerToy.html Any "intelligence" is built into the machine. Do you believe that a tinkertoy computer of sufficient size would exhibit independent intelligence and will, a sort of computational Pinocchio? -Q

Querius
January 15, 2014 at 11:46 PM PDT
Nightlight: You've gone a long way into metaphysical territory with this belief that there are hidden algorithms underpinning material reality, which give birth to complexity. You seem to have, in my opinion, irrational exuberance regarding the power of software or algorithms to emulate or even have intelligence. So let me ask you a question. Can you tell me why we don't have software that writes software, in the general sense? I don't mean genetic algorithms (where the code resides in a narrowly constrained search space), but whole programs, starting from a blank page in a text editor. I.e. why has nobody written a program that can do my job as a software engineer? I mean, computers are so smart they can play chess better than humans and solve other problems humans can't; surely writing software, which is already "in their language", should be easy? Your whole argument in this thread has been based on the premise that deterministic material processes could write software. So tell me why such material processes have not been demonstrated. I'll give you a hint: not every problem is solvable by an algorithm. Your whole thesis is based on all problems being solvable and goals being reachable via mechanistic algorithm. The problem of chess clearly is solvable that way, but there are other things that are not.

NetResearchGuy
January 15, 2014 at 10:37 PM PDT
Eric Anderson #26, my concept of an organism is holistic. An organism is a whole which expresses itself in our material world. The organization is top-down. I've come to this view for several reasons:

1. Bottom-up explanations are clearly insufficient. For instance the Central Dogma (DNA makes RNA makes protein makes us) doesn't make sense. I can provide many arguments to support this claim. Also, homeostasis can only be understood from the level of the whole organism, top-down. Again I can provide several arguments in support. If you are interested, just let me know. BTW I'm not denying that some parts of the organism are machine-like; however, I'm denying that the organism as a whole is machine-like or can be explained as such.

2. A top-down explanation is in perfect harmony with how I experience myself and my behaviour. Right now I'm typing the words that I want to type. My arms, hands and fingers are subordinate to me. What is being typed is controlled from above, top-down.

3. When I look at e.g. a cat I see that all its parts are subordinate to, functional for, the whole of the cat. It wouldn't even make sense to speak of functionality if it were not for the whole. I've come to the conclusion that to regard the cat as a whole is the only way to make sense of it. This mysterious whole, this monad if you will, is an agent which is self-sustaining and self-organizing, which points to independence and freedom. However, true freedom presupposes consciousness and self-awareness, such as in humans, which I believe is the source of real teleology.

Box
January 15, 2014 at 02:47 PM PDT
Dembski's "No Free Lunch" results on random searches (via the dumbest trial-and-error algorithms) in a space with a random distribution of values are irrelevant for the actual universe, which is highly ordered and regular, with its laws finely tuned on the tip of an extremely sharp needle (with odds, even for major constants, reaching 1 in 10^hundreds, to say nothing of the laws themselves). They are even less relevant for the actual searchers and their algorithms. The searchers are adaptable networks, which are distributed self-programming computers running anticipatory algorithms and operating at all scales (from the Planck scale through lifeforms, up to galaxies and their clusters). Randomly searching an array a[n] needs O(n) tries, while binary search of a sorted array takes O(log(n)) tries, which is exponentially fewer.

The fitness landscape of the universe, including life, is a self-shaping fitness space, harmonizing itself on all scales, seeking to make the searches more efficient and to make networks more mutually predictable to each other. That's precisely what makes the laws of nature not only knowable (Einstein thought that was the most perplexing fact about the universe) but so unbelievably simple that one can fit all the fundamental equations of physics on a couple pages of a paperback. Wigner wrote an essay titled "Unreasonable Effectiveness of Mathematics" where he marvels at how mathematics, a result of playing with definitions and their arrangements (via rules of logic) in one's head and on paper with a sharpened stick of graphite, somehow meshes so incredibly well with what the universe does and how it does it, thousands or millions of light years away. It is astounding indeed, unless you realize that the universe is constantly reshaping itself on all scales and in all places to make such mutual predictability ever simpler and more efficient to find (discover).

You can notice related instances of the universal harmonization at any scale you care to look at. Even at the scale of human societies, as social systems, technologies and laws evolve, this kind of harmonization makes it increasingly easier to predict actions, rely on or cooperate with other humans (or machines, such as other cars on the road). For example, you and I, who don't know each other and are living hundreds or thousands of miles apart, thanks to present communication technology are mutually harmonizing our activity, talking on the same forum, same topic, discussing our subject back and forth. Just think about it for a moment this way -- there are two clusters of atoms, making up your and my body, thousands of miles apart, yet somehow the two clusters are harmonizing their motions in a subtle dance of mental judo, impenetrable beyond any odds to Dembski's simple-minded search algorithms and assumptions as to how the universe is ordered. And billions of other such clusters of atoms are doing the same. None of this harmonization existed even a few decades ago, to say nothing of past centuries, when these clusters of atoms would have been moving entirely independently of each other. An earlier post has addressed this objection in a bit more detail (also post2, post3, post4, post5). For the hyperlinked TOC of the related series of posts, organized by topics (which includes harmonization, network computations, anticipatory systems, etc.), see the second half of this post.

nightlight
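The O(n) vs O(log(n)) comparison above is easy to verify directly; a minimal sketch (array size and target are arbitrary):

def linear_probes(arr, target):
    """Scan left to right; return how many elements were examined."""
    for i, x in enumerate(arr):
        if x == target:
            return i + 1
    return len(arr)

def binary_probes(sorted_arr, target):
    """Binary search on sorted data; return how many probes were needed."""
    lo, hi, probes = 0, len(sorted_arr), 0
    while lo < hi:
        probes += 1
        mid = (lo + hi) // 2
        if sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return probes

n = 1_000_000
data = list(range(n))        # already sorted, so binary search applies
target = n - 1               # near-worst case for the linear scan
print(linear_probes(data, target))   # 1000000
print(binary_probes(data, target))   # 20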
January 15, 2014 at 12:38 PM PDT
Nightlight: There is a serious fallacy in your argument. First off, I want to point out that I am an experienced software engineer by trade, and I am an expert on algorithms. I've actually written AI, search algorithms, and Conway's Game of Life, and I'm extremely well versed in what computers can and can't do.

The fallacy is that, according to the Law of Conservation of Information, a neural network or other system can't generate information that exceeds the complexity of the rules and environment applied to it. To give a thermodynamic example, imagine a chunk of metal, and you want one side to be hot and one to be cold. You can create that condition by putting a heat source on one side, and a heat sink on the other. According to your theory, it's not the heat source and sink that caused the heat gradient; rather, the metal atoms are agents responding to environmental stimuli. The point is that the argument is a regress, and all you've done is move the source of the information to the environment. You then have to explain the origin of the information in the environment.

It's important to distinguish between randomness or patterns and specified complexity. The Game of Life can generate complex patterns, but it can never generate specified complexity. For example, the Game of Life will never generate text in its patterns. It can't, because the rule set doesn't contain enough information. You could make an infinitely more complex version of the Game of Life which has rules that are capable of generating letters or words, but clearly it would require incredibly more complex software containing far more information. Your argument would be that you don't need to make the software more complex, the environment could shape the cellular automata into words -- say, if the environment had blocked cells that initially formed the outline of words. Again, that's a regress -- who put the pattern in the environment?

In the context of evolution, you need to explain what conditions in the environment could convert a wolf-like creature into a whale. What "rewards" or "punishments" the environment inflicted on the wolf that are strong enough and directed enough to change its DNA. Be specific. You'll find if you do so, it will require a very complex series of unlikely environmental changes to make any of the transitional forms viable (which implicitly contain severe mal-adaptations to the start and end environments), to prevent those mal-adaptations from being selected away. And such a sequence of environmental changes represents "information", and has to be explained.

NetResearchGuy
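For reference, the entire rule set of Conway's Game of Life that this argument appeals to fits in a few lines; a minimal sketch (the grid-free set representation and the glider seed are arbitrary choices):

from collections import Counter

def step(live):
    """One Game of Life generation; live is a set of (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
print(sorted(gen))   # the same glider shape, shifted one cell diagonally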
January 15, 2014 at 10:26 AM PDT
@NetResearchGuy #27
Even if one accepts that such an infinite chain of regress can exist in principle, you need a theological explanation for the existence of the chain, as a unit. In other words, not a previous or first entry in the chain, but the existence of the chain itself.
You are stating the obvious, which is that any scientific theory needs a set of postulates, which are assumption taken for granted and which science doesn't explain, but from which the rest of the formalism (math and algorithms that generate scientifically valid, logically coherent statements) of the science is constructed. The only scientific theory without any postulates is an empty theory, starting with nothing, yielding nothing, which is a fine theory, too, as long as all you aspire to do is to become a Taoist or Zen monk or some such. But if you wish to do (or even talk about) natural science, then you need a theory with non-empty set of non-trivial postulates. The above response assumes a bit of unstated context, which is as follows. Since natural science aims to provide a formal or algorithmic model of the universe, its formal statements (deductions, numbers, statements... generated within its formalism) have counterparts in the real world. The real world counterpart of the postulates of natural science is what one can call "front loading" of the universe i.e. what needs to be given upfront to get the whole works started and keep them going. Hence this kind of scientific "front loading" is meaningful only relative to the given science (or its phase of evolution), not as some philosophical or theological absolute. (For brevity sake, in view of this tight correspondence, the terms "front loading" and "postulates" are treated as interchangeable below, with implicit understanding that the "front loading" refers to the universe, while the corresponding "postulates" refer to natural science modeling or describing the universe). Of course, there is no absolute prohibition against explanation for the postulates (or any given front loading). But the way that is done in science is that if you have science A with its postulates P(A), then A cannot explain P(A). But there can be another science B with its own postulates P(B) which can explain postulates P(A). As before, science B can't explain its own postulates P(B) and in order to explain these, you need some other science C with its own postulates P(C). The general objective is to find (for given domain of empirical facts) some science Z with the smallest set of postulates P(Z) i.e. with postulates making the weakest or least assumptions among all other postulate sets P(A), P(B),... that cover the same domain of empirical facts. (That's a restatement of the Occam's razor.) Note that the previous example with unlimited chain of computerized constructors of other computerized constructors, was not meant to be the most economical or practical way to provide the front loading (or postulates for science). It was contrived merely to serve as a simple counterexample to specific proposition of Dembski. In particular, the main problem is that its computers are our conventional digital computers which are meant to be programmed by humans (at some point in the chain). But there are far more realistic and more economical computational models, which can do the same job with a lot less front loading (e.g. they don't require humans to design or program them). As noted earlier, you can't avoid front loading (postulates) for any non-trivial science -- some assumptions must always be taken for granted and they can't be explained by that science. 
I will sketch below the most economical (regarding the front-loading) approach discovered so far -- the algorithmic formulation of natural science and the corresponding computational model of the universe (an earlier post provides a more general perspective and how that approach fits with the presently dominant formulations, which are only implicitly and accidentally algorithmic). The interesting discovery of recent decades is that there are other kinds of computers which don't require humans to program them; they program themselves. The abstract representation of such computers in mathematics and computer science is the "neural network." These are networks of "nodes" and "links" exposed to "punishments" and "rewards", where each node seeks to maximize its net (rewards) - (punishments) by changing the strengths of its links with other nodes. This works like an abstract trading game, where each trader (node) trades with some set of other traders (those that are linked to it), gaining or losing from those trades (punishments & rewards) and adjusting its trading volumes and trading partners depending on how it fared with them previously. The rules of trading, gain/loss evaluation and link adaptation can be very simple (simpler than chess). Even plain boolean networks (with all numbers having only two values, 0 or 1) can work this way (e.g. Conway's Game of Life). Note that all those terms in quotes (nodes, links, punishments, rewards, etc.) are mathematical abstractions (with particular mathematical properties defined, i.e. rules of the game) which are not bound to any particular implementation of the abstract properties. The purely abstract mathematical theory researches their properties and behaviors for various rules and network sizes.

The most interesting conclusion of this research is that such networks, when large enough, spontaneously develop collective behaviors of the nodes and links which operate like larger-scale algorithms that perform anticipatory computations -- they model the environment of the network and seek to optimize collective rewards - punishments through joint strategies which arise spontaneously, even though the rules of the game merely define how each node maximizes its own rewards - punishments for itself. Hence, it is as if, in that game of traders, the traders defined as purely selfish agents form alliances or cartels to maximize the total profit of the alliance, none of which was explicitly put into the rules of the game -- it just arises in large enough networks, even though each node is merely selfishly maximizing its own profits via some simple rules of the game. These spontaneous collective optimization algorithms (that no one programmed into the network) perform internal modeling of the network's environment, and this model includes a 'self-actor' (the network's model of itself). In order to maximize the net rewards - punishments for the network, these spontaneous collective algorithms run the internal model forward in model time for different initial actions of the self-actor, then evaluate the outcome for each tried action and choose the action for the actual network corresponding to the action of the self-actor that provided the best outcome in the model space.
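A toy rendering of the "selfish trader" network described above, purely for illustration: each node shifts link strength toward the partners whose trades have paid off. The payoff table, learning rate and node count are invented; this is a sketch of the idea, not a claim about any specific neural-network result.

import random

random.seed(0)
n_nodes = 4

# payoff[i][j]: average reward node i gets from trading with node j (hidden "truth")
payoff = [[ 0.0,  1.0, -0.5,  0.2],
          [ 1.0,  0.0,  0.3, -0.2],
          [-0.5,  0.3,  0.0,  0.8],
          [ 0.2, -0.2,  0.8,  0.0]]

# start with equal link strengths (no link to self)
strength = [[0.0 if i == j else 1.0 for j in range(n_nodes)] for i in range(n_nodes)]
rate = 0.1

for _ in range(2000):
    i = random.randrange(n_nodes)
    j = random.choices(range(n_nodes), weights=strength[i])[0]   # pick a trading partner
    reward = payoff[i][j] + random.gauss(0, 0.1)                 # noisy trade outcome
    strength[i][j] = max(0.01, strength[i][j] + rate * reward)   # adapt the link

for i in range(n_nodes):
    favourite = max(range(n_nodes), key=lambda j: strength[i][j])
    print(f"node {i} now trades mostly with node {favourite}")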
This look-ahead procedure works like a chess player pondering his next move: in his mental model of the chess board he makes different legal moves for his side, then makes legal responses from the opponent's side, then his side again, and so on, looking several moves ahead (depending on the time available and his mental capabilities). At the end of each branch the player evaluates the score, then picks the best move to make on the real chess board.

What is also interesting is that these anticipatory algorithms and their general properties form spontaneously in adaptive networks independently of the concrete implementation of the nodes, links, punishments and rewards, and rules of the game. The same kind of algorithm arises whether the network is a human brain with neurons as nodes and axons and dendrites as links, or a trading/economic network with human traders as nodes and their trading connections as links, or a cellular biochemical network with molecules as nodes and their chemical interactions and binding-site pathways as links, or the internet as a whole as well as its subnetworks (of computers, discussion forums, social networks, etc.).

These networks don't even need to be made of matter-energy to satisfy the rules of some abstract 'adaptable network' game and thus spontaneously develop such algorithms. For example, languages, natural or artificial (e.g. the formalisms of mathematics and the sciences), form networks of words, phrases, or sentences with various types of links between them (semantic, phonetic, grammatical, etc.). Unlike material networks, where nodes and links are made of matter-energy (and operate by the laws of matter-energy), these non-material networks have their own rules which are independent of natural laws. For them, the world of matter-energy is a transcendental substratum of their very existence, analogous to what the Kantian 'thing in itself' (or ultimate reality, or God) is to us. Yet, from the common mathematical theory of such networks, we know that these non-material instances or implementations, such as languages, also perform the same kind of internal modeling with look-ahead and what-if games in order to maximize their net rewards minus punishments (e.g. the usage or usefulness of the language). Other systems in abstract or non-material realms, such as sciences, cultures, fashions, arts, technologies, religions, etc., also form the same kinds of networks and optimize their punishments and rewards using the same kind of self-programmed anticipatory algorithms as, for example, your brain does when you play chess (mentally modeling the chess position and trying out moves in that model before making the "best" move, as far as you can figure out, on the real chess board).

Linguists have, of course, noticed the similarity between the evolution of languages and biological evolution, and use the same kinds of 'genetic distance' based techniques as biologists do to infer the evolutionary tree of languages and the timings of its branching points. Biologists, too, such as Dawkins, have noticed the similarity between genes and elements of culture, the memes. The view of the whole internet as a 'global brain' is also a common observation. Stock market traders see the market as an intelligent entity with a wisdom that exceeds that of individual traders. At present, though, these connections are merely seen and treated as unmotivated, one-off analogies specific to their particular realms.
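[Editor's note: the chess-player analogy amounts to a depth-limited look-ahead search. Below is a hedged Python sketch of that idea as a plain minimax routine; the function names (lookahead, moves, apply_move, score) and the toy number game used to exercise it are illustrative assumptions, not anyone's actual model of anticipatory networks.]

    def lookahead(state, depth, maximizing, moves, apply_move, score):
        """Simulate `depth` moves ahead in an internal model of the game,
        alternating between the player and the opponent, and return the
        best score reachable from `state`."""
        options = moves(state)
        if depth == 0 or not options:
            return score(state)
        results = []
        for m in options:
            nxt = apply_move(state, m)      # try the move in the model only
            results.append(lookahead(nxt, depth - 1, not maximizing,
                                     moves, apply_move, score))
        return max(results) if maximizing else min(results)

    # Toy usage: the "game" state is just a number, moves add or subtract 1,
    # and the score is the state itself.  The player looks three moves ahead.
    best = lookahead(0, 3, True,
                     moves=lambda s: [+1, -1],
                     apply_move=lambda s, m: s + m,
                     score=lambda s: s)
    print(best)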
But as the explicit, deliberate, and systematic algorithmic perspective eventually becomes the way we do and teach the sciences and see the universe (post1; see the second half of this post for links to highlights/TOC), what are presently seen as accidental analogies between unrelated realms will be understood as precise mappings (isomorphisms) between those realms, resting on mathematical theorems, not on inspired hunches.
nightlight
January 15, 2014, 05:45 AM PDT
For the anti-IDists: Refuting the "who designed the designer?" nonsense
Joe
January 15, 2014, 04:11 AM PDT
"Who Designed the Designer?" is irrelevant. Natural processes only exist in nature and therefor cannot account for its origins but that didn't make naturalism explode.Joe
January 15, 2014, 04:10 AM PDT
Earth to RDFish: No one rejects machine intelligence because it can be traced back to some designer. The whole point is that it can be traced back to some designer. Meyer writes about this on several occasions, including in "The Signature in the Cell".
Joe
January 15, 2014, 04:08 AM PDT
nightlight:
The point was to provide a simple explicit counterexample to Dembski’s proposition quoted — everything that chess computer program does is explicable and predictable in full detail from its physical properties
The chess program is designed and its actions are traced back to the designer. As for your robots, well, they too can be traced back to a conscious, intelligent agency. Look, nightlight, just admit that you don't know what you are talking about.
Joe
January 15, 2014, 04:06 AM PDT
NetResearchGuy: Well said. The "infinite regress" is a very old and worn materialist refrain. You do an excellent job of pointing out that the materialist is caught in his own web on that score.
Eric Anderson
January 14, 2014, 09:57 PM PDT
Nightlight / RDFish: There is a very simple counterargument that negates the infinite regress of teleology created by material causes. Even if one accepts that such an infinite chain of regress can exist in principle, you need a teleological explanation for the existence of the chain, as a unit. In other words, not for a previous or first entry in the chain, but for the existence of the chain itself. Your argument is equivalent to triumphantly claiming that since cells can reproduce, they must have always existed infinitely back in time, and it's silly to think about an explanation for their existence.
NetResearchGuy
January 14, 2014, 09:23 PM PDT
Box @19:
However, organisms are more than just that, more than fermions, bosons and more than information. Organisms are ontologically distinct from computers. Organisms are agents with their own teleology. God is not the only source of teleology surrounded by wind up toys.
I think I would agree with that assessment, at least with respect to some organisms, such as humans. I'm wondering, however: do you view the "teleology" as the cause or the effect? How would you define "teleology"?
Eric Anderson
January 14, 2014, 07:11 PM PDT