Uncommon Descent Serving The Intelligent Design Community

East of Durham: The Incredible Story of Human Evolution

Imagine if Galileo had built his telescope from parts that had been around for centuries, or if the Wright Brothers had built their airplane from parts that were just lying around. As silly as that sounds, this is precisely what evolutionists must conclude about how evolution works. Biology abounds with complexities which even evolutionists admit could not have evolved in a straightforward way. Instead, evolutionists must conclude that the various parts and components that comprise biology's complex structures had already evolved for some other purpose. Then, as luck would have it, those parts just happened to fit together to form a fantastic, new, incredible design. And this mythical process, which evolutionists credulously refer to as preadaptation, must have occurred over and over and over throughout evolutionary history. Some guys have all the luck.

Comments
That is false as consciousness is required to write the algorithm, set up the initial conditions, set the goal and provide the resources required.
That is the question posed by the origin of life. You seem to know the answer before the research is done. GP: You seem to go a bit further and suggest that genetic algorithms are incapable of creating complex structures, regardless of the nature of functional space. I'm impressed that you guys know so much about nature just from thinking about it. Makes all that tedious research seem superfluous. Petrushka
englishmaninistanbul: Well, thank you for the fun :) Until next time! gpuccio
Some IDists just think like you, that avoiding a direct connection to consciousness simplifies things. Well, in my view, avoiding a direct connection to consciousness is only a valid approach if you have something to replace it. And since nobody else has yet come forward with that something I am left to assume either that the right people are not monitoring this thread, or that it is impossible. Or maybe this humble layman really is on to something here. Snigger, chortle. Now, if I were a reductionist, I would say that consciousness is an emergent property of complexity. So, consciousness is a byproduct of dFSCI, and not the other way around. So, aliens can have generated dFSCI on earth, but what about their dFSCI? We are at the infinite regress problem again. True, we are at the infinite regress problem. However, as William Lane Craig so eloquently puts it, you don't need "an explanation of the explanation of the explanation" to have a workable starting point. (On the other hand, you do need that for a unified, intellectually satisfying position. Like yours.) Playing Devil's advocate again here: If I am a reductionist and I accept the existence of consciousness, but as an emergent property of complexity, I am still forced to accept the statement "de novo dFSCI has only ever been observed to be produced by conscious entities." If I'm also prepared, as a reductionist, to accept that chance and necessity could have produced dFSCI under specific conditions on this planet, why should it be such a leap for me to accept that it could have appeared elsewhere? And if it turns out that dFSCI is unlikely to have appeared through the forces of chance and necessity operating under the specific conditions of this planet, I really have no business trying to force the evidence when I have a form of the design inference that I cannot deny is empirically valid. Just like with the Mars machine. (It puts me in mind of Francis Crick and his proposal of directed panspermia. Regardless of the validity of his position, I respect his intellectual honesty for bravely following it wherever it led instead of hiding behind the consensus.) For those who do not accept consciousness, "dFSCI has only ever been observed to appear or be mediated as the specific function of entities" is a redraft I slipped in under the wire in my last post. I expect you raised an eyebrow at it but decided not to tackle it at the time. If any specific problems with it come to mind I would like to know. But... And yes, I am careful what I wish for (at least sometimes...) ...ever since 28.1.1.2.6 I am gradually coming to the point of saying that defending consciousness as an empirical fact is not only the best approach, it's the easiest. Ah well, so much for the humble layman's dreams of changing the world. :) At least I go away knowing I asked "the fatal question." We have probably not gone to the extremes in our discussion about consciousness theories, especially reductionist strong AI theory. There are many reasons to reject it as a valid scientific theory, but for the moment I will not go into them in detail (after all, we have to keep ourselves engaged in the next years...). [...] Reductionism is, IMO, a bad scientific, epistemological and philosophical approach, so I personally reject it as a valid position. I would certainly like to know more about all this, but I suspect it might be best to leave it to future discussions. "Debate fatigue" has finally set in. englishmaninistanbul
englishmaninistanbul: I have already expressed my personal idea of why the "consensus" is what it is. Reductionists just try to avoid the hard problem of consciousness, because they cannot solve it the way they would like to. Some IDists just think like you, that avoiding a direct connection to consciousness simplifies things. As I said, I don't agree with that position. So, I do have some idea of the why, and still I believe "that I am indeed the only one going in the right direction" (Your words. I don't really believe that I am the only one. Well, maybe there are two or three of us, after all :) ). And yes, I am careful what I wish for (at least sometimes...) This assumes that the complexity in computers is a true model of the complexity of the human brain. We still have next to no idea how the brain works, so this doesn't look like a very tenable position to me. We have probably not gone to the extremes in our discussion about consciousness theories, especially reductionist strong AI theory. There are many reasons to reject it as a valid scientific theory, but for the moment I will not go into them in detail (after all, we have to keep ourselves engaged in the next years...). I have only asked that the reductionist position (that consciousness is an emergent property of some configuration of matter) be not assumed as true. However, complexity is complexity, whatever the hardware where it is implemented. An algorithm performs the same calculations on a computer as on an abacus. Obviously, we don't know exactly the software implemented in the brain, but software is software anyway. For the moment, I will not go into more detail with this argument. A reductionist could say that it is possible that consciousness arises entirely from dFSCI, and you can't really argue it definitely doesn't. I can certainly argue. And I never argue "definitely". It's not one of my many bad habits. Arguments are always temporary, like all scientific knowledge. This is what I mean when I say that a really empirical approach to design should not rule out either reductionism or vitalism. I certainly don't rule out vitalism. Reductionism is, IMO, a bad scientific, epistemological and philosophical approach, so I personally reject it as a valid position. How can you be so sure about this? You asked for my position. I am not especially sure, but sure enough. Maybe I am self-deluded. Let's say one of our Mars rovers happens upon a machine. It's nothing complicated, just a vehicle of some sort with an engine that uses some sort of fuel, obviously designed to carry things. And certain aspects of it, weight, age, etc., establish beyond doubt that it simply cannot be of human origin. What would the headlines say? "EVIDENCE OF ALIEN LIFE AN ILLUSION, SAY REDUCTIONISTS"? I doubt it! You are right, I should have been more detailed. "No" was fine and quick, but some more specifications are due. The design inference can be done without entering into detailed discussions about what a designer is, but at some point someone, like you, will ask the fatal question. And, as you know, I have a fatal answer, and only one. And that answer implies the acceptance of consciousness as an empirical fact, and the requisite that a reductionist explanation of consciousness be not assumed as true. Now, if I were a reductionist, I would say that consciousness is an emergent property of complexity. So, consciousness is a byproduct of dFSCI, and not the other way around. So, aliens can have generated dFSCI on earth, but what about their dFSCI?
We are at the infinite regress problem again, and we are only playing Dawkins' gross game. As explained, there are only two ways to solve that problem of infinite regress: one is to assume that complexity (dFSCI) can come from a simple agent (a simple conscious "I"). That is, IMO, the only consistent solution for a complete ID theory. The second is to assume that dFSCI can come from non-conscious, non-complex systems. That is the reductionist position, and neo-Darwinism is its disappointing tool. I don't believe the two positions are compatible. So, I maintain my answer. No. Even if you go along with reductionism and say that consciousness could simply arise from the sum of our parts, it still doesn't negate the design inference. The design inference is only a part of a general design theory. However it happened, consciousness exists. It is a phenomenon of the universe. I fully agree. Just because the only beings we definitely know have it are humans, or at least beings that are confined to our planet, that doesn't mean it doesn't exist elsewhere. I agree. And the only entities we know of in the natural world that have function are life forms. That is reasonable. Therefore no reductionist would seriously argue that the machine on Mars was anything other than evidence of some sort of alien life, and yet he would not see that his philosophical position had been compromised. Correct. A reductionist can admit alien design. But then he will state that aliens evolved through neo-Darwinism. And the problem remains. gpuccio
I suppose I do ask a lot of questions. I hope you enjoy answering them as much as I enjoy asking them.
You say that many ID thinkers shy away from explicitly referring to consciousness; why should that be? I cannot answer for them.
Well, that is a true statement, and I certainly am not saying that we should follow the consensus or anything like that. But to ask why the consensus is the way it is, is obviously a sensible thing to do. In much the same way as you react when you see everybody walking in the opposite direction to you. You at least want to have some idea of why before coming to the conclusion that you are indeed the only one going in the right direction.
If it really is that watertight, I would say that it should feature more prominently in the ID paradigm and receive the vigorous defence it deserves. I am already defending it. You can join, if you like.
Be careful what you wish for ;).
Human dFSCI originates in human consciousness. There is no doubt that humans use a complex brain to express, and in part to elaborate, the dFSCI they create. But still, the fundamental functions in creating dFSCI (understanding of meaning, purpose) are all conscious representations, and they don’t exist in computers, however huge their complexity. So, in the empirical examples of human design, complexity contributes to the design activity, but complexity alone can never explain it, if it is not “used” by a conscious agent. Again, the possibility that a simple conscious agent, who needs not a physical brain to interact with matter, could well output complex functional information in matter, remains a perfectly acceptable model.
This assumes that the complexity in computers is a true model of the complexity of the human brain. We still have next to no idea how the brain works, so this doesn't look like a very tenable position to me. A reductionist could say that it is possible that consciousness arises entirely from dFSCI, and you can't really argue it definitely doesn't. This is what I mean when I say that a really empirical approach to design should not rule out either reductionism or vitalism.
Does ID work in a reductionist paradigm? No. Should it? No. Can it? No.
How can you be so sure about this? Let's say one of our Mars rovers happens upon a machine. It's nothing complicated, just a vehicle of some sort with an engine that uses some sort of fuel, obviously designed to carry things. And certain aspects of it, weight, age, etc., establish beyond doubt that it simply cannot be of human origin. What would the headlines say? "EVIDENCE OF ALIEN LIFE AN ILLUSION, SAY REDUCTIONISTS"? I doubt it! Even if you go along with reductionism and say that consciousness could simply arise from the sum of our parts, it still doesn't negate the design inference. However it happened, consciousness exists. It is a phenomenon of the universe. Just because the only beings we definitely know have it are humans, or at least beings that are confined to our planet, that doesn't mean it doesn't exist elsewhere. Whenever we happen upon design and we have no viable theory to explain how it could arise stepwise through forces of nature, we automatically assume that it was produced or mediated all in one go as the specific function of an entity. And the only entities we know of in the natural world that have function are life forms. Therefore no reductionist would seriously argue that the machine on Mars was anything other than evidence of some sort of alien life, and yet he would not see that his philosophical position had been compromised. As I'm sure you've gathered, I'm building a tower of bricks just to see how you knock it down. Because if it can be knocked down I'd really like to know how. englishmaninistanbul
englishmaninistanbul: Wow! So many questions! I will try not to repeat what I have already said. why is it proving so difficult to silence "consciousness is an illusion" rubbish, or are they all just being pig-headed? Maybe they are all just being pig-headed :) . In general, I cannot modify my ideas about reality only because most people think differently. In a way, I am accustomed to that. I am, definitely, a minority guy. You say that many ID thinkers shy away from explicitly referring to consciousness; why should that be? I cannot answer for them. Your approach to the subject of consciousness seems perfectly robust to me Thank you. To me too, otherwise I would change it. If it really is that watertight, I would say that it should feature more prominently in the ID paradigm and receive the vigorous defence it deserves. I am already defending it. You can join, if you like. But is it not true that all hitherto observed conscious agents are complex? No. That is an assumption. I would only say that observed conscious agents (humans) express their consciousness through a complex brain. Now, maybe consciousness is independent from the brain, even in humans, as many think (including me). Or maybe consciousness is just an emergent property of the brain, like reductionists believe. I do believe that there are many good arguments in favor of the first option, but for the moment let's say that the issue must be left at least "undecided". The statement that "all hitherto observed conscious agents are complex" is an implicit assumption of the reductionist view, so it cannot be accepted as "true". Let's say undecided. dFSCI has only been observed to come from dFSCI-containing entities Again, that is an assumption. Human dFSCI originates in human consciousness. There is no doubt that humans use a complex brain to express, and in part to elaborate, the dFSCI they create. But still, the fundamental functions in creating dFSCI (understanding of meaning, purpose) are all conscious representations, and they don't exist in computers, however huge their complexity. So, in the empirical examples of human design, complexity contributes to the design activity, but complexity alone can never explain it, if it is not "used" by a conscious agent. Again, the possibility that a simple conscious agent, who needs not a physical brain to interact with matter, could well output complex functional information in matter, remains a perfectly acceptable model. "Is a protein functional? Well, does it do anything useful to the organism, or does it trash it?" That's not really correct. For most proteins, we can define a specific biochemical function, measurable in the lab, independently of its utility to an organism. That is the immediate function, and we have to explain it. It would not be naturally selectable, but it could perfectly be intelligently selected. For example, an enzyme greatly accelerates a chemical reaction, be it useful to the organism or not. Gpuccio, does your definition of consciousness imply vitalism, or is consciousness a kind of black box? No. Consciousness and life are different concepts. Consciousness, as I have repeated ad nauseam, is an empirical fact, with specific form and aspects. Life is much more difficult to define. I am not sure of what could be called vitalism, or neovitalism. It does not seem a very trendy position. Still, I have already stated here, and I repeat now, that IMO mere biological information is not enough to explain life, whatever it is.
That life comes only from life is still perfectly true. Even with all the single parts already available, and all the information already there, nobody can make a living cell from non living parts. That's the most I can say. Consciousness is not a black box. It is very much open. We can directly see much of what happens in it, between inputs and outputs. So, not a black box, definitely. Does ID work in a reductionist paradigm? No. Should it? No. Can it? No. To the next time... :) gpuccio
gpuccio, in reply to 28.1.1.2.6 Thanks for yet another razor sharp analysis. So razor sharp in fact, that I'm pretty much pared to the bone. Shame, just when it was getting fun. With regards to point 1, I agree with you that the preferred approach is, of course, to tackle the objections head on and vigorously defend the validity of consciousness. However if you and I are so right about consciousness, why is it proving so difficult to silence "consciousness is an illusion" rubbish, or are they all just being pig-headed? You say that many ID thinkers shy away from explicitly referring to consciousness; why should that be? I come back to the point I've been harping on about all along. As a layman attempting to self-educate, when researching ID I come across endless references to dFSCI et al. which are used to argue for the design inference, very effectively it must be said, and yet precious little about the source of design itself. And by the source of design I'm not talking about God, I'm talking about a nuts and bolts definition of any given designer. Maybe I'm looking in the wrong places. But if my observation is borne out, surely this is not a good thing. Your approach to the subject of consciousness seems perfectly robust to me. In fact, I would say it's my biggest take-home from this entire debate. If it really is that watertight, I would say that it should feature more prominently in the ID paradigm and receive the vigorous defence it deserves. However if, for whatever reason, it is generally seen as or found to be problematic to defend, maybe a more general definition such as what I am attempting to formulate might be useful at least as a unifying basis. Now on to my point 2. Of course, I was writing in haste and forgot to include in a) both conscious agents and dFSCI-containing entities. You do my homework for me: “dFSCI is observed in material objects only when one of the following two conditions is true: a1) They are designed by a conscious agent a2) They are produced by some entity already containing dFSCI" You are also correct about the infinite regress if we do not hypothesize a conscious agent at the beginning, and that a conscious agent is not necessarily complex. But is it not true that all hitherto observed conscious agents are complex? Just thinking out loud here: So if I took only the a2 part of your definition I would be saying "dFSCI has only been observed to come from dFSCI-containing entities." This would be akin to "all life comes from pre-existing life", which leads us all the way back to first life, and nobody really has any clue about that, at least not in empirical terms. Still that doesn't stop people hypothesizing until the cows come home about prebiotic stews and soups and whatever. At least they refill the bread basket every time it runs out, I'll give 'em that. As for my b) definition, "Entities that perform a function", I just inserted the "F" from "dFSCI" with the intention of doing a bit more reading and then coming back. As applied to amino-acid sequences in proteins, I think it's a much easier concept to apply in that domain than outside of it. "Is a protein functional? Well, does it do anything useful to the organism, or does it trash it?" Easy. I admit I have no idea where to start in defining function with regard to any design executor. So unless you or someone else would like to help me with that I'm tapped out for the moment I'm afraid. 
Jon Garvey says: So a beaver, or a cellular system, could be construed to be exercising a design function, and either to be acting as a mere algorithm or as a purposeful agent, which in theory ought to be distinguishable. But if beavers or cells are teleological agents, one would not like to have to attribute the same consciousness to rodents and bacteria, or one might end up with a vitalist concept of design. I must admit this is a bit over my head as well. For all modern scientists' rubbishing of vitalism, at present it seems that any really empirical approach to design should not discount the possibility of either reductionism or vitalism being borne out in design agents, because we haven't got aaaaaanywhere neeeeear enough data. Gpuccio, does your definition of consciousness imply vitalism, or is consciousness a kind of black box? Does ID work in a reductionist paradigm? Should it? Can it? And Jon, what is a vitalist concept of design and how would attributing consciousness to rodents and/or bacteria end up with one? englishmaninistanbul
Jon: You raise serious issues, but I am afraid that most of them can only be debated at a philosophical level. About beavers, I have explicitly stated that any opinion on how and how much they are conscious, or intelligent, or purposeful, is at present a matter of individual choice, because we have no way to know really what happens in a beaver's mind. The same is true, obviously, for bacteria. The problem is not really relevant for ID, I believe, because ID is not about design in general, but about detectable design. And, as we know, the only design that is really detectable in the designed object is complex design, design which exhibits FSCI. Now, personally I do believe that FSCI, and specifically dFSCI, is observed only in human artifacts and in biological information. The most difficult scenario is that of animal intelligent behaviour: in many cases we can certainly observe FSCI (although usually not dFSCI); but, as I have tried to argue, the complex part is probably guided by hereditary information (probably in the genome), as suggested by two facts: a) The functional result is largely repetitive (same function, limited architectural variation). b) The functional behaviour is shared by all members of the species, and therefore likely inherited. If that is true, then the complex information needed must be in the genome (or equivalent), probably in digital form. That information is therefore of the same kind that describes proteins. It can be found and analyzed. And the designer is, I believe, the same designer as for the rest of biological information. Finally, while I agree that intent is important, I would not say that it "defines more closely that aspect of consciousness that applies especially to "design"". As I have tried to argue, at least two fundamental aspects of consciousness must act for design to emerge: cognition (understanding of meanings) and purpose (the feeling, the desire to implement those meanings in outer reality). What use would intent be without any understanding? And understanding without purpose would never translate into action. Again, consciousness is the only link between cognition and feeling, between meaning and intent. There is no conscious representation that is not both things at the same time. I would happily add a third aspect, that is free will. But I avoid that because it is IMO the only aspect of consciousness that is not completely empirically described, and requires a more philosophical, and not only scientific, approach. gpuccio
gpuccio I wouldn't disagree that intention implies consciousness, but maybe that it defines more closely that aspect of consciousness that applies especially to "design". It also enables one to separate the level at which design is expressed so as not to dilute the idea of consciousness, and even to address the criticism that "design" is too limited a concept to the fluidity of living systems. So I'm a little cautious to call beavers "conscious" and then have to qualify it heavily to say why beavers are not people. Even more so were mechanisms of purposeful "design" to be admitted in cellular processes once the ND paradigm goes to its allotted resting place. In that evolutionary model, "Star Trek", delegated crew members exercise purposeful intentions to switch on the hyperdrives and do whatever is necessary to anti-matter pods, and even the ship's computers perform functions, but it is Captain Picard whose "Make it so" is the origin of the course-change. In a sense he is the one significant conscious agent of the particular effect. So a beaver, or a cellular system, could be construed to be exercising a design function, and either to be acting as a mere algorithm or as a purposeful agent, which in theory ought to be distinguishable. But if beavers or cells are teleological agents, one would not like to have to attribute the same consciousness to rodents and bacteria, or one might end up with a vitalist concept of design. I don't believe beavers have a theory of self, despite the Narnia stories. The defence against that? Well, one would be to show that self-conscious agents, in your sense, are necessary to *initiate* design, even when they work through lesser teleological agents. In either case, teleology is the central concept. Jon Garvey
Jon: Thank you for your contribution. But do you really believe that intention means anything outside of consciousness? Intention and purpose are special feelings applied to a cognitive map. The conscious being perceives a cognitive representation and feels that its implementation is desirable. Therefore he wills to implement it. Nothing of that has any meaning, other than as a conscious experience. Intention is an aspect of conscious experiences. Meaning is another one. I find nothing metaphysical in consciousness, considered merely as an empirical fact. As for that, "matter" is probably a more metaphysical concept. We have been hypnotized by a generic, and often incorrect, use of some words, such as "natural", "metaphysical", and so on. Those words mean nothing, and yet they bring about heavy philosophical assumptions, simply by existing. Consciousness is empirical. It cannot be denied. Please note that in my scientific arguments I never use any specific theory about what consciousness is, or how it originated, or how it can be explained (or not explained). The only thing I ask is to admit that it exists, that nobody has ever explained it in terms of objective configurations of matter, and that it interacts with the material world. All those statements are, IMO, incontrovertible. Whatever EII's worries about beavers, we don't really need to know what happens in their consciousness to detail our ID theory. In my empirical definition of design, of dFSCI, and of the ID inference, I need the recognition of consciousness at two different points: 1) I need to define design as the process where conscious, meaningful and purposeful representations are outputted from a conscious agent (the designer) to a material system (the designed object). Well, we witness that every day, and in our own consciousness. We have meaningful, purposeful conscious representations all the time. And we output their form to material systems all the time. That's exactly what I am doing when I am writing this post. That's what you will do if you answer it. That's exactly what has been called "design" since the word was created. Who can deny that, or say that it is "metaphysical"? 2) In the definition of dFSCI, I need a conscious agent to recognize and define a function for the digital information, a function for which the dFSCI will be computed. Only a conscious agent can do that, because only a conscious agent understands what "function" means. (Of course, non-conscious entities can recognize specific functions that they have been programmed to recognize: but in no way do they understand what "function" means). So, the conscious agent recognizes a function in the digital information (if he can), and must explicitly define it, so that it may be objectively recognized and measured by other conscious agents. A human being is perfectly apt to do that. So, to sum it up: 1) Consciousness is necessary to define design. Intent is a good tool too, but it is only an aspect of consciousness. I always speak of meaningful purposeful conscious representations as the origin of design. That includes all the necessary components: consciousness (representations); meaning (the cognitive aspect); and intent (the feeling aspect). 2) A conscious agent is necessary to define the function for which dFSCI is measured. A human being is fine for that.
And, obviously, the simple recognition that human beings are conscious, intelligent and purposeful beings is necessary when we demonstrate the connection between dFSCI and conscious agents, using human artifacts and non designed things as the two groups where we check our dFSCI definition. gpuccio
In intelligent selection a function is purposefully searched for by intelligent engineers, and its appearance is measured, even at very low levels, where it would not significantly contribute, for the better or for the worse, to replication power...
All you need to do is demonstrate a way of knowing in advance what it would take to increase replication power, whatever that is. Where do sexual selection and female choice fit into your scheme, and how do you balance hitchhiking effects? Petrushka
gpuccio A quick interjection, that may either be a help or a hindrance in your interchange with EII. I see his point that "consciousness", whilst a completely transparent subjective reality, is difficult to apply to other entities (like beavers) reliably. But implicit in the concept, and at the heart of "design", is "intention" (aka goal-setting, teleology). With beavers, the key issue is whether they want to build, modify or repair a dam, or whether they merely have a set of algorithms to stick a log in wherever they see running water, gnaw on the river-facing side of a tree, etc. Petrushka's evolutionary algorithms have no intentions, but perform functions according to the original designer's intention. I don't think that's disputable, because they're not designed to *have* intentions. Central to the Neodarwinist thesis is *lack* of intention. So if, for example, cells were found (as per J Shapiro) to manipulate their biology towards specific goals, it would be profoundly non-Darwinian (or "heterodox" in Jerry Coyne's parlance) because the cells would then be "designers" in at least the same way that beavers are. The question would then be not so much whether they are conscious (putting vitalism on the table) but whether the "design intentions" are their own or those of *their* designer. Nevertheless, one would have shown that the *appearance* of design was not (as in Darwinism) the *illusion* of design, and the primary designer, having foresight and intention, would be a necessary corollary, for all the reasons you state. It may or may not be, therefore, that "intention" is a more scientifically verifiable, less metaphysical, basic starting point than "consciousness". Jon Garvey
englishmaninistanbul: I am pleased with your answer too. Thank you for your patience and goodwill. :) A couple of comments. 1. I still think that such an inference would serve to show very clearly that the way of reasoning that goes, "If I can prove consciousness is not a valid scientific concept, naturalistic explanations instantly become more plausible" is utterly wrong. Frankly, I don't understand why you worry about that. No one in the world can "prove consciousness is not a valid scientific concept". Let them try. And you will easily be able to destroy their arguments. Consciousness is beyond any doubt a valid scientific fact (not a concept): it exists. Are they denying that subjective experiences exist? Not even solipsists have gone so far: solipsists merely state that only their own subjective experiences exist (which, while being false, has at least some logic, because it stresses the difference between personal observation of consciousness, a fact, and inference of consciousness in others). But who can ever deny that subjective experiences exist? The best way, I would say the only way, to prove something wrong is to explain why it is wrong. Looking for complex indirect ways will not help. 2. Well, b) is not completely wrong, but certainly tricky. a) is good, but not precise. I will try to explain why. First of all, when we infer design we cannot distinguish, in many cases, between a "design executor" and a "designer". So, your definition must be valid for both cases. Moreover, design is not defined by dFSCI. In many cases design is simple. Design is defined, as I have said many times, by the conscious intent of the designer. A child can draw a very simple form representing a house. That's design. But it is not complex. dFSCI is always connected to design (an empirical observation), but the contrary is not true. I can accept a) only if it is formulated this way: "dFSCI is observed in material objects only when one of the following two conditions is true: a1) They are designed by a conscious agent a2) They are produced by some entity already containing dFSCI" However, you can easily see that a2 generates an infinite regress, if we don't hypothesize a conscious agent at the beginning. The reason for a1 is that we cannot assume, as materialists wrongly do, that a conscious agent must be complex, because we don't really know the nature of consciousness. It is true that humans, as conscious agents, express their consciousness through a complex brain, that certainly has a lot of dFSCI. But, unless one agrees with the reductionist theory of consciousness (as empirically unsupported and logically false as neo-Darwinism), one cannot assume that consciousness requires complexity. So, a simple conscious designer at the origin of all complex designed things is the best way to avoid the problem of infinite regress. That's why you cannot say that dFSCI is observed only as coming from entities that already contain dFSCI. In that way, you are assuming that all conscious designers already contain dFSCI. So, I would say that consciousness is always the first, necessary and sufficient, originator of dFSCI, but dFSCI can be "propagated" by non-conscious entities that already contain dFSCI. gpuccio
Thank you. I feel as if we're finally speaking the same language. I'm very pleased with this answer, it's exactly the sort of thing I'm looking for... Right, so "discrete environment-manipulating entity" won't work. My thoughts: 1) While it is true that defining "design executor" to include humans, beavers and printers is perhaps not the most useful of inferences, I still think that such an inference would serve to show very clearly that the way of reasoning that goes, "If I can prove consciousness is not a valid scientific concept, naturalistic explanations instantly become more plausible" is utterly wrong. 2) My current redrafts of the definition of a "design executor" as the only observed immediate producers of dFSCI: --a Entities containing dFSCI --b Entities that perform functions I have serious doubts that b is viable, but I don't have the time to elaborate right now, I will try to do so later on today. englishmaninistanbul
Petrushka: All forms of selection incorporate some feedback system. What is your point? In NS, the feedback system is simply the differential reproduction of those replicators in that environment. That's why it is called "natural". No conscious agent has defined the function, the function is not measured explicitly, and the feedback does not depend on any measurement of a specific function: better replicators replicate better, and that's all. In intelligent selection a function is purposefully searched for by intelligent engineers, and its appearance is measured, even at very low levels, where it would not significantly contribute, for the better or for the worse, to replication power, and an active feedback, intelligently depending on the measured function, is applied to the system, to expand the replicators bearing the new function, even at very low levels, and even if they do not replicate better for that. This cycle is repeated many times, and as we know from the experience of bottom-up protein engineering, the function can sometimes be found in reasonable times and with reasonable resources. gpuccio
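[Editor's note: to make the cycle described above concrete, here is a minimal Python sketch of an intelligent-selection loop. Everything in it is hypothetical and for illustration only: assay() stands in for the engineers' lab measurement of a purposefully defined function (here, arbitrarily, carrying the motif MKWV), and the truncation step models the experimenter's feedback, not differential reproduction.]

import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino-acid letters
TARGET_MOTIF = "MKWV"               # hypothetical function chosen in advance by the engineer

def assay(seq):
    # Stand-in for a lab assay: the longest partial match to the motif
    # found anywhere in seq, so even very weak function registers.
    best = 0
    for i in range(len(seq)):
        k = 0
        while (k < len(TARGET_MOTIF) and i + k < len(seq)
               and seq[i + k] == TARGET_MOTIF[k]):
            k += 1
        best = max(best, k)
    return best

def mutate(seq, rate=0.05):
    # Random variation: point mutations at a fixed per-site rate.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

# Initial population of random 30-mers.
pop = ["".join(random.choice(ALPHABET) for _ in range(30)) for _ in range(200)]

for generation in range(500):
    pop = [mutate(s) for s in pop]
    pop.sort(key=assay, reverse=True)
    # Intelligent selection: the engineer measures the defined function and
    # amplifies its best carriers, however weak the function still is and
    # whether or not they replicate any better on their own.
    pop = pop[:50] * 4
    if assay(pop[0]) == len(TARGET_MOTIF):
        print("full function reached at generation", generation)
        break

[Under natural selection the measurement and truncation lines would disappear: there would be no assay() and no sorting, only whatever differential reproduction the sequences happened to have, which is exactly the difference being pointed to here.]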
Joe: It seems simple, doesn't it? But Petrushka does not like this type of answer... gpuccio
englishmaninistanbul: First of all, thank you for a post that is more detailed and answerable than others. So answer it I will :) . I apologize for the imprecision about your statement: "I am most emphatically not attempting to define consciousness without referring to consciousness. I am attempting to define a designer without referring to consciousness." You are right, obviously. I was just too quick in writing, but my meaning remains the same, given my point that a designer can only be defined as a conscious agent. You say: I would like to point out that in that Wikipedia definition you presented the word "conscious" is conspicuous by its absence. True. But there are the following words: a plan or convention for the construction of an object or a system Do non-conscious systems originate plans and conventions? a specification of an object, manifested by an agent, intended to accomplish goals, Goals. Here the specification of intent is very clear. Intent is a conscious representation. a roadmap or a strategic approach for someone to achieve a unique expectation Expectation. The person designing is called a designer Please, note the word "person". which is also a term used for people who work professionally in one of the various design areas People. The fact that the word "consciousness" does not even appear is a demonstration that the association of the word with a conscious agent is taken for granted, not the opposite. The same happens in much ID literature. Now, let's go to your concept of "design executor". It is correct, but rather useless, indeed confounding. Let's see why. First of all, I find your specific definition very ambiguous. You define a design executor as a: "discrete environment-manipulating entity", and your statement is that: "dFSCI has only ever been observed to come from discrete environment-manipulating entities". To my perplexity about the meaning of that, you detailed: "To elaborate: I use the word "discrete" to say that after the inception of the entity in question its actions are internally caused, at least in part, i.e. partially or wholly independent of external stimuli. I'd have thought "environment-manipulating" was self-explanatory." I really don't understand the "discrete". Stretched, that could apply to any object. Less stretched, it can certainly be applied to radioactive substances, just to make an example. Are radioactive substances "design executors"? But really, that could be applied to anything. A quantum wave function evolves in time according to internal laws that certainly modify its actions. And wave functions apply to all reality. Let's go to "environment manipulation". What does it mean? You say it is "self-explanatory". It certainly is not for me. Maybe I am particularly dumb. I will consider two different possibilities: either "manipulating" implies consciousness, and is a synonym for designing, or it just means "changing". I looked at dictionary.com, and this is the result: "ma·nip·u·late [muh-nip-yuh-leyt], verb (used with object), -lat·ed, -lat·ing. 1. to manage or influence skillfully, especially in an unfair manner: to manipulate people's feelings. 2. to handle, manage, or use, especially with skill, in some process of treatment or performance: to manipulate a large tractor. 3. to adapt or change (accounts, figures, etc.) to suit one's purpose or advantage. 4. Medicine/Medical. to examine or treat by skillful use of the hands, as in palpation, reduction of dislocations, or changing the position of a fetus.
I would say that the definitions here support my view that "manipulation" is a form of design, and implies consciousness and intent. That would make all your reasoning wrong and useless. But let's consider the possibility that you intended the word as "changing, modifying", without any reference to intent. Then, a radioactive substance is certainly a manipulating agent. A stone too (at least, it bends the gravitational field, although very little). A wave function too (it can interact suddenly with the macrocosmic world, through the well-known wave function collapse). Every existing thing changes the environment. And every existing thing changes, both because of its inner states and because of its interaction with the environment. So, your definition, in this sense, is definitely too broad. So if you say: "dFSCI has only ever been observed to come from entities that change at least in part because of their inner states, and change the environment for a specific intent" You are again referring, in a very imprecise way, only to conscious designers, who are the only entities that can have an intent. On the other hand, if you say: "dFSCI has only ever been observed to come from entities that change at least in part because of their inner states, and that change the environment" You are just saying that dFSCI comes only from something that exists, which seems rather trivial. Then you say: My current definition of a "design executor" as a "discrete environment-manipulating entity" basically treats said executor as a black box. There wasn't FSCI before, some thing–a human, an animal, a robot, whatever–came along and, presto, it made some FSCI. But that makes no sense. I can accept that you treat the executor as a "black box" at some level of the reasoning, but how can you ignore the fundamental question: is there dFSCI in that black box? When you say: "There wasn't FSCI before", that is true only if there was not dFSCI in the black box. Let's go to the example of a computer printing Hamlet. You say: the computer is a black box. When it prints Hamlet (which is dFSCI without any doubt), lo, here dFSCI appears. But it is not true. That specific dFSCI was already there, in the memory of the computer, as a file that some human had loaded. Now, according to your definitions, the computer is the executor of design. The man who loaded the file is, I suppose, a proxy executor. And the man who lent the file to that man is a proxy-proxy executor. Of what utility is all this? The only correct answer is: who wrote Hamlet? There, truly, we can say that Hamlet did not exist before, and it started to exist after it was written by Shakespeare. There we see the whole miracle of dFSCI, with all its complexity and beauty. From the author's conscious representations, from his creativity and intent. I gave you also the example of an actor playing Hamlet. There is no doubt that the actor is a designer, because he adds all kinds of specified components to the drama that are not really in the written form. He decides how to move, the intonations of the voice, the times, facial expressions, and so on. That is certainly FSCI. But it is not the dFSCI that we find in the written drama. That, the actor takes wholly from the paper. The actor is a co-designer of the played version of Hamlet, of that specific version, but Shakespeare remains the only designer of the original drama.
IMO, that is also the position of beavers: they are certainly co-designers of the dam: they adapt the original algorithm to specific situations, and probably modify some aspects creatively. It's difficult to compute their original contributions in terms of information, obviously, because we know too little of this particular system. But the basic algorithm to build dams, I believe, is written somewhere, in their genome or elsewhere. And they are not the designers of that information. Moreover, in the specific case of beavers, there is really no design or design executor inference for them: we know that they build dams, because we observe the process of dam building. No inference is needed. But it is legitimate to ask about the origin of the information for dam building, if it is written in their genome. There, a design inference is necessary, and it does not imply beavers at all as agents (unless someone believes that beavers have intelligently written their own genome). For all these reasons I believe that your "design executor inference" is not useful. I hope you don't take offence at that, it's what I really believe, and I have tried to detail why. gpuccio
As often said, algorithms based on random variation and intelligent selection are designed.
You keep harping on "intelligent" selection. Kindly provide a formal mathematical proof that intelligence or consciousness is required for a feedback system to work, and that you are not just making it up. Petrushka
Petrushka, Do you not see the contradiction between your statements?
My point is that electronics are increasingly being designed by computers, with humans setting the "targets." ... Consciousness is not required to solve complex problems.
Computers "designing" require targets. Targets in turn require consciousness at some point. Who invented the concept of the motherboard? Who decided that for some specific problem a computer with a motherboard would be a good solution? Consciousness, consciousness. Having computers that design motherboards is also a target. Who programmed computers to design motherboards? What you call evolution is a mass of intelligent activity assisted by intelligently designed components. As soon as you realize that any and all observed instances of design always involve setting a target, you'll realize what intelligence can do that evolution can't. The target, by definition, is abstract. It does not exist until it is implemented. It can only exist as an abstract idea in the conscious mind that envisions it. Without it no one writes evolutionary algorithms, and without it they have nothing to do. You are trying to separate intelligence from design, but not succeeding. ScottAndrews2
Petrushka: I haven't seen anything in the field of AI that is likely to lead to a conscious machine, so I find that particular debate to be without value. That's what I call a sincere observation. Only, the observation is not without value. In epistemological terms, it can be expressed as follows: The theory that AI can lead to a conscious machine is, at present, completely unsupported by empirical data. Which, in science, is not an observation without value. gpuccio
Petrushka:
If proteins are ever designed, it will be done with software using evolutionary algorithms.
We agree. I say that is how the original proteins were designed.
Consciousness is not required to solve complex problems.
That is false as consciousness is required to write the algorithm, set up the initial conditions, set the goal and provide the resources required. Joe
Petrushka: As often said, algorithms based on random variation and intelligent selection are designed. As you say yourself, "humans set the targets". IOWs, humans define the functions to be achieved. And humans also set the rules, the algorithms, the machines, and anything else. Your only point seems to be that RV is used by humans in their design. We know that. Machines do the computing, not the inventing. If they invent at all, it's because we write what and how to invent in their code. If they by chance did invent something really new, they would not recognize it, unless we have written in their code how to recognize it. Machines cannot set targets, because they have no targets, except for those that we write in their code. All "evolutionary algorithms" you refer to are not neo-Darwinian algorithms. They are designed algorithms, based on RV and intelligent selection. You may repeat this point n times, you will get n times the same answer from me. gpuccio
OK, fair enough, I respect your candidness. ...your attempt at defining consciousness without referring to consciousness... I am most emphatically not attempting to define consciousness without referring to consciousness. I am attempting to define a designer without referring to consciousness. You say that a designer is necessarily conscious. If we are talking about the ultimate origin of FSCI then I happen to agree with that statement (although I would like to point out that in that Wikipedia definition you presented the word "conscious" is conspicuous by its absence). However I had hoped that by now it would be clear that by "designer" I mean the observed immediate origin of FSCI. Now that I put it in so many words, I can see why you are so adamantly telling me that I'm thinking in circles. We're talking at cross purposes, and that's not really your fault. For simplicity, I'm going to stick to your definition of a designer as the conscious originator of design, and for the purposes of this discussion I am going to use the expression "design executor" to refer to my "observed immediate originators" of design. My current definition of a "design executor" as a "discrete environment-manipulating entity" basically treats said executor as a black box. There wasn't FSCI before, some thing--a human, an animal, a robot, whatever--came along and, presto, it made some FSCI. Maybe the design executor is conscious and is executing its own design, maybe it isn't and is executing the design of the designer that made it, it doesn't matter. It is the observed, immediate originator of the FSCI, and therefore lends support to the inference of the existence of a design executor whenever FSCI is encountered. Similar to my "apples come from plants" story above, I am not denying the existence of conscious designers, I am trying to formulate a definition that includes both conscious designers and their proxies. This would, in my mind, lend even more weight to the argument that the design inference (although I suppose that would be a "design executor inference" now) is much more plausible than naturalistic explanations of the origins of information and therefore life. For example: beavers. If we ask the question "Are beavers designers?" the answer revolves around the question "Are beavers conscious?", and the answer to that question is unclear at present. So if we are searching for demonstrations of the reliability of the design inference, at the moment beavers leave us empty handed in terms of what we can dogmatically assert. However if we ask the question "Are beavers design executors?" the answer is a resounding "yes." Coz they make dams. Beavers now become a shining demonstration of the "design executor inference." Now we come back to the question, is this inference as useful as I think it might be? englishmaninistanbul
What do you mean? Do computer circuit boards emerge from natural physical systems without any human design?
Computer motherboards are being designed by evolutionary algorithms because they do a better job than humans. This technology is just a few years old. Machines will be doing our inventing and product development long before there is any serious discussion of their consciousness. I haven't seen anything in the field of AI that is likely to lead to a conscious machine, so I find that particular debate to be without value. Petrushka
That's just silly. That's like saying that my screwdriver and my drill hung the shelves in my closet, and I just indicated a preference. You know, because humans can't make perfect 5/16" holes in wood.
I gave the illustration of computer motherboards, not simple tools. My point is that electronics are increasingly being designed by computers, with humans setting the "targets." This is a trend in its infancy. Eventually computers will develop the targets based on market trends. If proteins are ever designed, it will be done with software using evolutionary algorithms. Consciousness is not required to solve complex problems. Petrushka
englishmaninistanbul: Ah, just for completeness. I realize now that you also object to my word "cheating" (I had missed that part at a quick read). Well, I admit that word is a little bit stronger. But again it is targeted at the reasoning, not necessarily at the intentions of the reasoner. A wrong argument, one that uses words and logic improperly, is certainly cheating those who listen to it. The proposer of a wrong argument must take responsibility for that, especially when the errors in his reasoning are explicitly shown. Whatever his intimate intentions were. And please remember that my tone has become "increasingly accusatory", as you say, only after repeated attempts from you to stick to a wrong representation of things, not by credible arguments, but only by rephrasing the same things in a slightly different way. After all, the obstinate proposer of a wrong reasoning is cheating at least with one person: himself. gpuccio
Petrushka: My approach to consciousness, as you should know, is empirical. Consciousness in ourselves is a fact, directly perceived by each of us. Consciousness in other humans is an inference by analogy, vastly shared by almost all. Consciousness in animals is a weaker inference by analogy (the analogy is less obvious). Anyone is free to draw the line where he likes. Inferences can subjectively be accepted or rejected. As for me, I would answer yes to all your examples. But it's just a personal opinion. Frankly, I don't understand the other statement: "There are already manufactured objects that are too complex to be designed by humans. Computer circuit boards are an example." What do you mean? Do computer circuit boards emerge from natural physical systems without any human design? gpuccio
englishmaninistanbul: my tone is in no way accusatory about you, but certainly about some of your statements. Your sincerity and motivations are your problem, not mine. I have said nothing about them, nor will I, unless you behave in explicitly incorrect ways, which you haven't done. Let's see. The first phrase of mine you quote is: "All these approaches are simply intellectually wrong." Where these approaches include, as clear from the context, compatibilism, AI reductionism and your attempt at defining consciousness without referring to consciousness. The very obvious point is that I see in all of those approaches the same mistake: the desire to get rid of something that is essential to the concept we are trying to define. That kind of approach, IMO, is intellectually wrong. Including yours. IOWs, it is a cognitive error. The same type of cognitive error in all three cases. Including yours. That is in no way a statement about your motivations, or sincerity, or how you see yourself. It is simply a statement that, IMO, you are reasoning in a wrong way. If you can't take that kind of statement in a discussion, why do you take part in intellectual confrontation at all? On the other hand, you seem to be disturbed by the fact that, while believing that your approach is wrong: "on the other you invite me to elaborate on my suggestions and attempt to prove them empirically valid." What's wrong with that? I believe that your approach is wrong, but I am no final authority. I would be happy to show in more detail why your approach is wrong, but I cannot do that if you don't develop a more complete and detailed theory from it. So, I invite you to do so. Is that an offense of some kind? Perhaps the approach I am suggesting is flawed, but at least from my perspective I do not believe I am guilty of intellectual dishonesty. I believe that it is flawed, and I am happy for you that your perspective of yourself is reassuring. In particular, I am most certainly not playing the compatibilist game. I never said that. You have said nothing about compatibilism. I just expressed my idea that the same type of cognitive error can be found in compatibilism, in AI reductionism, and in your reasoning. That's completely different. I'm sure, quite sure, that you will attack this comparison and say that I've got it all wrong. I do think you have got it all wrong, but I don't want to attack you any more. You could take offense. The point is this is what I see myself as trying to do. I do not see that I am guilty of intellectual dishonesty. Whoever said you are guilty of intellectual dishonesty? I just said that you are guilty of intellectual error. Are dishonesty and error the same thing for you? How would you define them? I wish you well too. gpuccio
There are already manufactured objects that are too complex to be designed by humans. Computer circuit boards are an example.
That's just silly. That's like saying that my screwdriver and my drill hung the shelves in my closet, and I just indicated a preference. You know, because humans can't make perfect 5/16" holes in wood. You're attempting to lean on your "evolution as the designer" concept, pretending that it's been established. But it hasn't. Show me any circuit board and the company that manufactured it can trace it back to the people who designed it. ScottAndrews2
You have robustly analysed all of my statements and ruthlessly pruned them of mistakes and misexpressions, which was what I wanted in the first place, so I thank you. However I am saddened at the increasingly accusatory tone of your comments. On the one hand you tell me that the approach I am suggesting is "intellectually wrong" and "cheating", and on the other you invite me to elaborate on my suggestions and attempt to prove them empirically valid. Perhaps the approach I am suggesting is flawed, but at least from my perspective I do not believe I am guilty of intellectual dishonesty. In particular, I am most certainly not playing the compatibilist game. Let me explain why by way of an example. Let's say we have a theory that goes "All apples come from trees" and it's a perfectly empirical and robust statement. Our opponents, who want to argue that apples could come from anywhere, start by debating the meaning of the word "tree." There are two ways to refute their objections: Define trees in an empirically unassailable manner or, if that turns out to be problematic, use another definition that says "All apples come from plants of some kind" to demonstrate that such objections are irrelevant. I am doing the second. I'm sure, quite sure, that you will attack this comparison and say that I've got it all wrong. That's not the point. The point is this is what I see myself as trying to do. I do not see that I am guilty of intellectual dishonesty. At present I have nothing meaningful to add to the discussion either. So thank you, it was a pleasure to meet you, and I wish you well. englishmaninistanbul
Where do you draw the line of consciousness? Are apes conscious? Dogs? Cats? Rats? Birds? Insects? Are acephalic humans conscious? What's your objective criterion? There are already manufactured objects that are too complex to be designed by humans. Computer circuit boards are an example. This seems to be a trend. I would be willing to bet that within 50 years, most products will be designed by humans only in the sense that humans will comprise the focus groups and consumer preference panels. Petrushka
englishmaninistanbul: For the last time. The words "design" and "designer" have no meaning outside a conscious agent. A designer is a conscious agent that outputs consciously represented forms to a material system. Design is the process of doing that. That describes perfectly drawing, painting, sculpture, architecture, writing, programming, and so on, exactly those activities for which humans have created the word "designer". Here is the Wikipedia definition for "design": "Design as a noun informally refers to a plan or convention for the construction of an object or a system (as in architectural blueprints, engineering drawing, business process, circuit diagrams and sewing patterns) while “to design” (verb) refers to making this plan.[1] No generally-accepted definition of “design” exists,[2] and the term has different connotations in different fields (see design disciplines below). However, one can also design by directly constructing an object (as in pottery, engineering, management, cowboy coding and graphic design). More formally, design has been defined as follows. (noun) a specification of an object, manifested by an agent, intended to accomplish goals, in a particular environment, using a set of primitive components, satisfying a set of requirements, subject to constraints; (verb, transitive) to create a design, in an environment (where the designer operates)[3] Another definition for design is a roadmap or a strategic approach for someone to achieve a unique expectation. It defines the specifications, plans, parameters, costs, activities, processes and how and what to do within legal, political, social, environmental, safety and economic constraints in achieving that objective.[4] The person designing is called a designer, which is also a term used for people who work professionally in one of the various design areas, usually also specifying which area is being dealt with (such as a fashion designer, concept designer or web designer). A designer’s sequence of activities is called a design process.[5]" Is that clear? It's as simple as that. There can be no definition of designer that does not imply consciousness, because the meaning of the word designer is "a conscious agent that gives form to things". Still, you insist: "I still think that a definition that does without consciousness might be useful." It's just the opposite. Such a definition would be simply false, manipulative and harmful. Like compatibilists trying to define free will without free will. Like AI reductionists trying to define consciousness without consciousness. All these approaches are simply intellectually wrong. A word means what it means. Redefining it so that it seems to mean something else, while appearing to retain its meaning, is simply cheating. So, I don't follow you. If you want to do that, do as you like, express your final results clearly, and then I will comment on them. You say: "Gpuccio has done a very good job of defining a designer based on 'consciousness'." Well, it was not difficult indeed. That's what the word means. Just looking in a dictionary, you could have done the same. "however he notes that the ID movement does not seem to have come to anything definite yet on this question." That's only your interpretation. I have said that other IDists speak of design without explicitly making the connection to consciousness, but that speaking of design implies that connection.
And I have also stated very clearly that what you call "the ID movement" is indeed a field of scientific reflection where different approaches are present. However, as far as I know, no relevant IDist has ever said that design and designer mean anything different from what they mean. That's all, I hope. Now, either you bring new interesting arguments to the discussion, or I will consider it closed for me. I don't think I have anything else useful to say on this point. gpuccio
One of the oldest chestnuts is "ID tells us nothing about the designer", and while it is true that we don't need to identify the specific designer for any given designed object, we do need a clear, empirically-founded, bare minimum definition of what does and does not constitute a "designer", otherwise you only have half a theory. Gpuccio has done a very good job of defining a designer based on "consciousness", however he notes that the ID movement does not seem to have come to anything definite yet on this question. I still think that a definition that does without consciousness might be useful. Whether that is right or not, I do think that the whole question deserves to be made clearer. I'd be very interested to hear your comments on 26.5.1.1.6, by the way. A small footnote: I am a translator by profession, so my working life revolves around the definitions of words. Maybe that would explain why I am so preoccupied with them. englishmaninistanbul
I'm good at splitting hairs, and I enjoy the process of refining words to capture a precise meaning. But I get the impression that much of this is over my head. I don't mean that in any bad, anti-intellectual way. And yet I always suspect that if someone cannot recognize design in simpler terms, such as when observing the transfer of DNA information or the metamorphosis of butterflies, then more precise definitions of "consciousness" or "designer" aren't going to do much good. These things themselves seem intended to make the case for design, and do so eloquently. Calling attention to that evidence is often beneficial when one has overlooked it. But if one can see it and reject it, I don't know what further elaboration accomplishes. Not to go off on a tangent, but I'm always astounded at how the details of the natural world are revealed to us. How much knowledge and technology did we have to accumulate in order to perceive the greater knowledge and technology underlying life? How many thousands of years did it take us to comprehend the tiny points of light in the sky, look beyond them, and realize how vast the observable universe is? If our eyes are open we never cease to be amazed by the incomprehensible wisdom, intelligence, and power behind creation, because the more we advance the more we discover to humble us. Every advance we make gives us a glimpse into something even greater, and gives us reason to feel awe. And for what was all this developed? To process data faster or to accelerate some particle? Apparently just so that we could enjoy our lives - family, friends, food, fun, and work, and even share in the same pleasurable activity of using our own intelligence to design, create, improve, and do good for others and ourselves. And so that we could appreciate the gift. All of this was apparent before the electron microscope or the Hubble telescope. Now we can see the same things with better focus and in more detail. It might as well be written across the sky. ScottAndrews2
SA: Maybe these are word games, but I have a feeling they need to be played. Thank you for your input. gpuccio: FAN-TASTIC! I love detail. True, I am looking for a concise way of defining a designer, but you can't do reductio ad absurdum without the details!
In my view, the empirical definition of consciousness is the only way to a completely empirical definition of design, of CSI, FSCI and dFSCI, and of the final design inference. That's what I believe. Do others in ID believe the same? Well, I think that some fundamental ID thinkers probably would not explicitly make the connection to consciousness, probably because they think it would imply a philosophical stand.
If they don't explicitly make the connection to consciousness, how do they define a designer? Would you care to elaborate, or at least give me a link to a page that does? Right from the beginning I've been looking for ID's definition of a designer. With your help, I've come to see that there is one that takes consciousness as a starting point, and I'm trying to take some small, baby steps towards devising a definition that doesn't require consciousness (with some feedback from ScottAndrews2 and yourself). So there are others? What are they? Surely it would be a good thing if the ID movement could either come up with a single definition or clearly define each alternative. One or the other.
If you start from the statement “dFSCI has only ever been observed to come from discrete environment-manipulating entities”, all instances of design mediated by the animal kingdom become demonstrations of that proposition and not merely inferences that base themselves on it. That’s not my position. I do believe that such a position would lead to many logical contradictions. So I reject it.
Again, I would love to know exactly what logical contradictions they would be.
But if you want to elaborate on it, and prove it empirically valid, I will listen.
Thank you for keeping an open mind. Or maybe now it is you playing the optimist! :) To elaborate: I use the word "discrete" to say that after the inception of the entity in question its actions are internally caused, at least in part, i.e. partially or wholly independent of external stimuli. I'd have thought "environment-manipulating" was self-explanatory. That's it.
Now I’m not saying the first definition is faulty as such, both seem to me to be equally viable definitions in and of themselves. I don’t agree. As I have said, I am strongly skeptical about your second definition, and would never use it.
Too right, neither would I! This is just my current draft, all I have at the moment is a kind of faith that a better draft is possible. I would be very happy if someone could help me with it. Scott? Anyone?
I am not aware that anyone else, except you, is so concerned that animals should be included in the set that demonstrates the inference. But I could be wrong.
Hah yeah, I'm asking that question myself. I just look at a beaver dam, and I instinctively feel that it should be part of what justifies the design inference instead of being an inference in itself. But maybe I am alone in that, judging by the fact that it's only me you and Scott still sticking around this thread. If there are any observers lurking I'd really like to know what they think. That's right, I'm talking about you! englishmaninistanbul
englishmaninistanbul: And here are my comments (certainly not corrections, because obviously I have no special authority). "But are animals consciously intelligent agents? Debatable in each and every case. So if we take consciousness as a starting point for defining designers, only human beings qualify in an empirically unassailable way." More or less. The consciousness of higher animals is an inference that many would agree with, even if it is certainly weaker than the inference for human beings (the "analogy" is weaker). "Any kind of inference, design or otherwise, depends on past examples that demonstrate the reliability of the inference." Correct. "So if we take consciousness as a starting point for defining designers, only human beings qualify in an empirically unassailable way." I agree. There is another reason for that. It's not only the fact that the inference of consciousness is weaker (although personally I accept it). The most important point is that the representations in humans are accessible, both in ourselves (directly) and in others (through language). That is not true for animals. Why is that so important? Because, according to my definition of design, the crucial point in a design process is that conscious representations are purposefully outputted to a material system. That is the whole point of design. Something that exists first as a subjective experience is then outputted, as form, to a material system. That's the true meaning of the words designer and design. Now, in humans we can really observe the whole process of design: we can witness the existence of the subjective representations (again, both in ourselves and in others), we can observe the design implementation, and we can observe and analyze the designed object. That's why I rely only on humans to define design and demonstrate the relationship between dFSCI and design. But that does not exclude other, non-human designers. As we have said, the crucial point is the causal connection between subjective representations and the final output. Any conscious being can qualify, if we can demonstrate the subjective representations. Therefore, I would not use the animals to demonstrate the reliability of the design inference. For that, humans qualify better. "'dFSCI has only ever been observed to come from consciously intelligent agents.' The demonstrations of that proposition are many, but all of them come from human beings." True. But please consider that the important point is the connection between conscious representations and the designed object. In that sense, what we observe in humans is potentially valid for any conscious being. "Assuming beaver dams contain dFSCI, are beavers designers? Your answer is 'Either they are, or whatever designed their genome is.'" I would like to qualify that better. In a sense, there is no doubt that beavers are designers, if we assume that they are conscious (as I do). In building the dam, they are certainly guided by conscious representations, so in that sense they are designers. But a problem remains. Let's assume that the dam exhibits FSCI, and is therefore an object for which we can infer design. In that case, we believe that the dam is designed, and we need not know who the designer is to do that. OK? Then, we ask ourselves: who is the designer of the dam? In a sense, as I said, the beaver is. The beaver certainly contributes to the design of the dam. What is, then, the difference with human design? The difference is: beavers only build dams.
If they design, they always design the same type of object, even if with remarkable individual adaptations, and always for the same function. IOWs, they create no new functional specifications. Moreover, there is reason to believe that what they do is mainly inherited, because not only do beavers build only dams, but all beavers build dams. That's only a more refined way of saying that their behaviour has all the formal characteristics of what we call (both in animals and in humans) an "instinctive" behaviour. That is not true of human design. Although many components of human behaviour are certainly instinctive (and the need to design could well be considered as such), the important fact is that humans create new, individual specifications, conceive functions that have never been observed before, and support those functions with original FSCI. So, I insist: is the beaver the designer? I would say: it is certainly a co-designer of the dam, because its conscious representations certainly contribute to each individual output. But still, as the beaver's behaviour is largely repetitive, the functional specification is always the same, and the functional complexity is similar, it is reasonable to believe that the functional structure of dams is largely based on pre-existing information, very likely to be found in the beaver's genome. So, the beaver can be a co-designer of the dam, and yet it is probably not the conscious originator of the specification and of most of the functional information implied by the building of the dam. The designer of the beaver's genome is the true designer of those things. Sorry to be so analytical when you seem to prefer quick definitions: but if you ask analytical questions, I have to answer them in detail. "If we do not yet have enough data to point to a designer (on a consciousness-based definition), that would mean that beaver dams do not demonstrate design, they are shunted to the category of instances where we can merely infer design." Absolutely correct. "This doesn't feel right." Why? It feels perfectly right to me. "Beaver dams never turn up through chance and necessity, they are obviously a product of design." That is absolutely correct, provided that we assume that we have computed the FSCI in dams, and found it to be present. As already said, we infer design in dams if and because they exhibit FSCI. We do not observe directly the whole design process of dams, unless we have access to the conscious representations of the beaver. If we could, we could better judge whether those conscious representations are sufficient to explain the dam, or not. "And beavers, if they are not designers, are at least design proxies. They are 'the [immediate] providers of specified information required to implement a function', to hijack Scott Andrews's tabled definition." Correct. And so? "However I suspect that saying that 'dFSCI (digital functionally specified complex information) has only ever been observed to come from providers of specified information required to implement a function' might seem rather circular." Correct. That's why I would never say such a silly thing. I detest circular reasoning. "It all hinges on the word 'provider', which I would tentatively describe as a 'discrete environment-manipulating entity.'" No. It all hinges on the concept of consciousness. If you renounce the connection between conscious representations and the designed object, IMO you cannot define design in any reasonable way. You have shown yourself how such an attempt leads to circularity.
"If you start from the statement 'dFSCI has only ever been observed to come from discrete environment-manipulating entities', all instances of design mediated by the animal kingdom become demonstrations of that proposition and not merely inferences that base themselves on it." That's not my position. I do believe that such a position would lead to many logical contradictions. So I reject it. But if you want to elaborate on it, and prove it empirically valid, I will listen. "Now I'm not saying the first definition is faulty as such, both seem to me to be equally viable definitions in and of themselves." I don't agree. As I have said, I am strongly skeptical about your second definition, and would never use it. "But which one does ID work from? Can it work from both? Does it work from both?" The answer is very simple. ID is not a dogmatic theory. Many people have different approaches. Maybe some work better than others. I have clearly stated my approach. I take responsibility for it. In my view, the empirical definition of consciousness is the only way to a completely empirical definition of design, of CSI, FSCI and dFSCI, and of the final design inference. That's what I believe. Do others in ID believe the same? Well, I think that some fundamental ID thinkers probably would not explicitly make the connection to consciousness, probably because they think it would imply a philosophical stand. I don't agree with that. Again, if one uses words like "designer" and "choice", IMO he is implying consciousness. Those words cannot even be defined outside of consciousness. And consciousness is a completely empirical fact, provided that we don't superimpose on it our personal theories about what it is or means. Finally, I would like to recall just a couple of important points: a) The design inference does not require explicit knowledge of who the designer is. b) As far as I know, all ID theorists refer to human design to demonstrate the connection between design and CSI, or one of its subsets. I am not aware that anyone else, except you, is so concerned that animals should be included in the set that demonstrates the inference. But I could be wrong. gpuccio
EMII,
dFSCI has only ever been observed to come from designers (demonstrations), therefore whenever we observe dFSCI we are justified in assuming the existence of a designer even when unable to identify it (inferences).
I think the noun is less important. We could replace "designer" with "thing" and say that dFSCI has only been observed to come from things, therefore from dFSCI we can infer the existence of a thing. It's the adjective "intelligent" that matters. What if we took out "designer" and "intelligent" and made the noun "intelligence"? (Even though these are word games in a sense, I really enjoy it.) ScottAndrews2
gpuccio: I get the feeling I am trying your patience with my repeated attempts to express myself. I would like to assure you that I am not being deliberately difficult, and that your replies have all been very instructive, and I greatly appreciate them. Thank you for taking the time to correct many of my inaccurate statements. Despite the failings in my arguments so far I still feel I have a point that has been misunderstood, or rather, one that I have not yet succeeded in expressing correctly. So I would like to try to put it another way. Any kind of inference, design or otherwise, depends on past examples that demonstrate the reliability of the inference. "dFSCI has only ever been observed to come from designers (demonstrations), therefore whenever we observe dFSCI we are justified in assuming the existence of a designer even when unable to identify it (inferences)." So where do demonstrations stop and inferences begin? To answer that question we need to define the word "designer." Your definition starts with consciousness. This is an empirically proven reality, and I thank you for the powerful way that you argued that. "I am conscious, and every moment of my existence gives me reason to infer that all other human beings are conscious." Nobody can seriously argue with that either. But are animals consciously intelligent agents? Debatable in each and every case. So if we take consciousness as a starting point for defining designers, only human beings qualify in an empirically unassailable way. "dFSCI has only ever been observed to come from consciously intelligent agents." The demonstrations of that proposition are many, but all of them come from human beings. Assuming beaver dams contain dFSCI, are beavers designers? Your answer is "Either they are, or whatever designed their genome is." If we do not yet have enough data to point to a designer (on a consciousness-based definition), that would mean that beaver dams do not demonstrate design, they are shunted to the category of instances where we can merely infer design. This doesn't feel right. Beaver dams never turn up through chance and necessity, they are obviously a product of design. And beavers, if they are not designers, are at least design proxies. They are "the [immediate] providers of specified information required to implement a function", to hijack Scott Andrews's tabled definition. However I suspect that saying that "dFSCI (digital functionally specified complex information) has only ever been observed to come from providers of specified information required to implement a function" might seem rather circular. It all hinges on the word "provider", which I would tentatively describe as a "discrete environment-manipulating entity." If you start from the statement "dFSCI has only ever been observed to come from discrete environment-manipulating entities", all instances of design mediated by the animal kingdom become demonstrations of that proposition and not merely inferences that base themselves on it. Now I'm not saying the first definition is faulty as such, both seem to me to be equally viable definitions in and of themselves. But which one does ID work from? Can it work from both? Does it work from both? I submit my thoughts in good faith and await your comments and corrections. englishmaninistanbul
Scott: Very well said. I think that is exactly what I have tried to say in my posts here. It is simple, after all, if one just stops a moment to understand. Functional information can be identified in the final designed object. That is enough to infer intelligent design, but it does not tell us when the information was inputted (the time of the design process), or who inputted it, or how (the identity of the designer and the modalities of implementation). It is perfectly true that those aspects are not necessary to infer design. However, those aspects are certainly, in principle, amenable to scientific inquiry. The time of the design process is the time when functional information appears for the first time in a material system. If that time can be known, then the time of the design process is known. The modalities of implementation can be known in principle, either directly (by observing the design process), or more often indirectly, by scientific inference. For instance, I have argued many times that biological design by direct writing (guided variation) and biological design by RV + intelligent selection are two valid possibilities, but they will have different natural histories, and so can in principle be distinguished by a detailed knowledge of the natural history of biological beings, of genomes and of proteomes. The identity of the designer is still another independent issue. The case of the beavers is an excellent model of how, after having inferred design, we can still be in doubt about the identity of the designer. Both the beaver itself and the designer of the beaver's genome are valid candidates. The problem is not trivial, and can be solved on an empirical basis, as I have said (for instance, by identifying the specific information sufficient to build dams in the beaver's genome). But in no way do those problems compromise the design inference for dams, provided that it is supported by a correct evaluation of the presence of FSCI in the final object. gpuccio
I remember seeing a tiered beaver dam when I was living on a farm in New Hampshire (recovering from graduate school). The brooklet it was exploiting was quite small, and so the beavers had actually constructed three dams, neatly spaced. Quite fascinating. Doubt they thought of it themselves. allanius
englishmaninistanbul: Once again I miss the logic of some of your remarks. You say: "So now we have a way of scientifically describing designed objects, so that nobody can justifiably claim on scientific grounds that design is or could be an illusion." And that's true. "But now we come to the question of the designer, and sometimes the objection is raised that 'consciousness is an illusion' or 'free will is an illusion.' And I had the feeling that a similar rigorous scientific definition for designers is lacking." And I already answered you that free will is not necessary to define a conscious designer, and that consciousness is an empirical fact that cannot be denied by anyone. Please explain, if you don't agree, how consciousness could ever "be an illusion". Obviously, an illusion is something that can take place only in a consciousness (it is essentially a representation that does not correspond to reality). So, to state that consciousness is an illusion is mere semiotic nonsense. Therefore, your "feeling that a similar rigorous scientific definition for designers is lacking" is simply a wrong feeling. "Is a beaver dam a designed object?" If we can measure its functional information with reasonable approximation, and if it is high enough in relation to an appropriate threshold, we can reliably infer design for it. There is no problem in the concepts and methods. There can be, obviously, some practical difficulties in the individual measurements, as always happens in all sciences. "But if we accept that a beaver dam has dFSCI, is the beaver a designer?" I have explicitly answered that too. If the functional information derives from conscious representations in the beaver, then the beaver is the designer. If, instead, the functional information, or at least most of it, derives from automatic behaviours, coded for instance in the beaver's genome, then the designer of the beaver's genome is also the designer of the dam. It is very simple: the designer is the conscious first originator of the functional information we observe in the final object. The only "problem" here is that we do not have enough data: first of all, we know very little about conscious processes in beavers (and that can be a very difficult point to improve). But we also lack any understanding of whether and how "dam building" is based upon genomic information in the beaver. That can certainly be understood in the future, as research goes on. Research is already being done to understand the genomic basis of instinctive behaviour in animals. The only problem I see is your final, strange statement: "It seems it should be a much easier question to answer than it is at present…" Why? I really don't understand why you think it should be easy at all. We have a good theory, good definitions and good tools. But the solution of individual problems depends critically on other things, especially on existing data. Sometimes, solutions just require time to be found, even with the best available methodology. Quantum mechanics is a very powerful scientific theory, and yet most simple systems in nature cannot be realistically analyzed by QM. That is true even of Newtonian mechanics, in many cases. Even in mathematics we have many potentially treatable problems that have not been solved up to now. Does that mean that mathematics is not a good discipline?
So, your statement that deciding whether the beaver is the designer of the dam "should be a much easier question to answer than it is at present…" is simply nonsense, and suggesting that the "problem" you see is due to faulty definitions in the theory is simply wrong. gpuccio
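To make the measurement procedure described above concrete, here is a minimal sketch, in Python, of a dFSCI-style computation as described: estimate the ratio of functional sequences (target space) to possible sequences (search space), take -log2 of it to get functional bits, and compare with a threshold. Every number below, including the 150-bit threshold, is an assumption chosen purely for illustration, not a measured value.

    import math

    def functional_bits(target_space, search_space):
        # Functional information in bits: -log2 of the fraction of the
        # search space that performs the function.
        return -math.log2(target_space / search_space)

    # Invented example: a 120-residue protein (20^120 possible sequences),
    # of which at most 10^40 are assumed to perform the function.
    bits = functional_bits(1e40, 20.0 ** 120)
    THRESHOLD = 150  # assumed threshold in bits, for illustration only

    print(round(bits, 1))  # ~385.8 bits for these assumed numbers
    print("infer design" if bits > THRESHOLD else "no design inference")

The hard scientific work, of course, lies in justifying the two space estimates; the arithmetic itself is trivial once they are in hand.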
GD: I see your:
I am working from the principles and actual practice of information theory (although I haven’t touched on thermo here). My objection is that you are not . . . [& Re my: "Signals are intelligently created, noise is based on — in general — random, stochastic natural processes, e.g. Johnson, flicker, shot, sky etc."] This is not how the terms are defined and used in information theory.
I must beg to disagree, once we focus on the significance of the signal-to-noise ratio and the fact that we already know from massive experience that signals exist and have certain objective characteristics, which we can then use to measure signal power with sufficient accuracy; and the same holds for noise. In particular, we know that thermal agitation creates Johnson noise, a statistical thermodynamic result; that shot noise comes from the stochastic nature of currents in semiconductors; that sky noise comes from various natural processes giving rise to a random radio background; and more, much more. We can therefore even assign noise factor/figure values to equipment, or a noise temperature value, as can be seen from LNBs for satellite dish receivers. The last reflects the open bridge to statistical thermodynamics considerations. Clipping Wikipedia on noise, as a convenient source speaking against known ideological interest:
Electronic noise [1] is a random fluctuation in an electrical signal, a characteristic of all electronic circuits. Noise generated by electronic devices varies greatly, as it can be produced by several different effects. Thermal noise is unavoidable at non-zero temperature (see fluctuation-dissipation theorem), while other types depend mostly on device type (such as shot noise,[1][2] which needs steep potential barrier) or manufacturing quality and semiconductor defects, such as conductance fluctuations, including 1/f noise. In communication systems, the noise is an error or undesired random disturbance of a useful information signal, introduced before or after the detector and decoder. The noise is a summation of unwanted or disturbing energy from natural and sometimes man-made sources. Noise is, however, typically distinguished from interference, (e.g. cross-talk, deliberate jamming or other unwanted electromagnetic interference from specific transmitters), for example in the signal-to-noise ratio (SNR), signal-to-interference ratio (SIR) and signal-to-noise plus interference ratio (SNIR) measures. Noise is also typically distinguished from distortion, which is an unwanted alteration of the signal waveform, for example in the signal-to-noise and distortion ratio (SINAD). In a carrier-modulated passband analog communication system, a certain carrier-to-noise ratio (CNR) at the radio receiver input would result in a certain signal-to-noise ratio in the detected message signal. In a digital communications system, a certain Eb/N0 (normalized signal-to-noise ratio) would result in a certain bit error rate (BER).
Notice a key, telling contrast: the noise is an error or undesired random disturbance of a useful information signal, i.e. it is quite obvious that informational signals are intelligently applied and functional, while noise is naturally occurring and degrades or even undermines function if it gets out of hand. In addition, we can see that there will be observable, contrasting characteristics, and a relevant way to come up with statistical models of noise that will show that noise imitating signals is maximally implausible, and with high reliability, practically unobservable. We can therefore characterise the statistics of signals, and those of noise, and with high certainty know that in a given situation the noise power level is x dB and the signal power level is X dB, so that our signal-to-noise ratio can be estimated off appropriate volts-squared values, etc. And we can use 'scopes to SEE the patterns of signal and noise, as the eye diagram -- your irrelevant distractor notwithstanding -- shows. As in, how open is the eye? Something as simple as snow on a TV set vs a clear picture is a similar case. AmHD gives a useful summary of what signals, by contrast, are, one that is instantly recognisable to anyone who has had to work with signals in an electronics or telecommunications context:
sig·nal, n. 1. a. An indicator, such as a gesture or colored light, that serves as a means of communication. See Synonyms at gesture. b. A message communicated by such means. 2. Something that incites action: The peace treaty was the signal for celebration. 3. Electronics An impulse or a fluctuating electric quantity, such as voltage, current, or electric field strength, whose variations represent coded information. 4. The sound, image, or message transmitted or received in telegraphy, telephony, radio, television, or radar . . .
I would modify that slightly, to make more room for analogue signals, i.e. the variation represents coded or modulated or baseband, analogue information. So, I am quite correct based on longstanding praxis that the whole context of discussing how much information is passed per symbol, on average, on a - SUM [pi*log pi] measure, i.e. H, is that we already know how to distinguish signal and noise in general, and can measure the power levels of noise and signal. Now, there is of course a wider usage of H, which ties back to thermodynamics, going back to implications of Maxwell's demon and Szilard's analysis, which was extended by Brillouin. Jaynes and others have carried this forward to our time, amidst a debate over the link between information and thermodynamics, which now seems to be settling down in favour of the reality of the link. As my always linked note shows, I have long used Robertson's summary of it: that a distribution of possibilities and a reduction in uncertainties is associated with information. In particular, the lack of information about microstates of matter leads to a situation where we have to work at gross level, and so limits the work that can be extracted from heat, etc. This allows a bridge to be built from Shannon's H metric to statistical measures of entropy. It turns out that Shannon's assignment of the term "entropy" to the metric that had an analogous mathematical form has substantial support. I now clip from my notes on the subject, including a cite from Wikipedia on the current state of play:
Further to this, we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate. >>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) . . . we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles, i.e. the Avogadro number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states as would give us a richer harvest of work, etc.)
So, what is happening here is that Shannon's H-metric is conceptually connected to thermodynamic entropy, the latter being a measure of the degree of freedom or uncertainty remaining at microscopic level once we have specified the gross, lab-level observable macrostate. In short, the missing info that would have to be supplied -- and would require a certain quantity of work to do so -- to in principle know the microstate. Going beyond, and back to the key matter, the definition of Shannon's H metric and its use exists in a context; it is not isolated from the considerations already given. In particular, we routinely know the difference between intelligently applied signals and naturally occurring noise, and that it is maximally implausible for noise to mimic signals, though it is logically and physically possible in principle. As I discussed in Appendix 8 of the same note, a pile of rocks on a hill on the border between England and Wales could -- as a logical and physical possibility -- fall down the hill and spontaneously form the pattern of glyphs: "Welcome to Wales." However, the number of configurations available for an avalanche, and the high contingency, is such that with maximum confidence we can rest assured that this is operationally implausible, i.e. scientifically unobservable on chance plus necessity. If we ever go to the railway line on the border and see "Welcome to Wales" spelled out in rocks, we can with maximal assurance infer that this was intentionally put there as a signal, by an intelligence using the glyphs of the Roman alphabet as modified for English and in effect using stones as pixels. Or, equivalently, we can infer that such an observation is operationally inconsistent with the reasonably observable results of blind chance and mechanical necessity acting on stones on a hillside. All of this ties right back to the case of an empirical observation E, from a narrow zone or island of function T, in a much wider sea of possible configurations, W. Which is precisely the design inference in action, on a concrete illustration that shows both the issue of complex specified information and that of functionally specific, complex organisation per the Wicken wiring diagram, and associated information. Indeed, of digitally coded functionally specific complex information. Of course, H can be used to look at distributions and patterns in general [the above applied it to thermodynamics!], but that has nothing to do with its telecomms and information theory context, that of our routine ability to distinguish and measure the difference between noise and signal. And, on characteristic signs and the relevant statistics of sampling of a space of configurations, we can and do routinely recognise that we may reliably distinguish information from noise on characteristic patterns, so much so that we use S/N as a key measure in evaluating something as basic as theoretical channel capacity. Put in other words, the inference to design on observable characteristics and background knowledge and experimental technique is an integral part of information and telecomms theory and praxis. As matters of commonplace fact. GEM of TKI kairosfocus
KF:
Sorry, but your above response shows the root problem: you are cyclically repeating objections, instead of working forward from first principles and actual praxis of information theory and thermodynamics.
First, as far as I can remember this is the first time I've raised these objections to you. I've raised them to other people, and you've probably had them raised to you before, but that does nothing to indicate they're wrong in any way. Second, I am working from the principles and actual practice of information theory (although I haven't touched on thermo here). My objection is that you are not.
Signals are intelligently created, noise is based on — in general — random, stochastic natural processes, e.g. Johnson, flicker, shot, sky etc. There is also of course cross-talk that is an unwanted effect of an intelligent signal, and there are ground loops with mains pickup etc.
This is not how the terms are defined and used in information theory.
So, of course the metric H itself does not “distinguish” signal from noise; it is rooted in a situation where we already know — and can measure — the difference and use H in a knowledgeable context. In short, we know what noise looks like, and what signal looks like, and we can measure power levels of both. The eye diagram is a classic example of that, as I pointed out already. So is the good old “grass growing on the signal.”
What a signal looks like depends entirely on the encoding -- the eye diagram page you linked gives several examples of different encodings, and some encodings won't look like an eye pattern at all. What noise looks like also depends on the particular noise source. The only way to distinguish a particular signal from a particular type of noise is to start with some assumptions about each of them.
The very fact that we routinely mark the observable distinction to the point where there is an embedding of the ratio in the bandwidth expression that was one of Shannon’s main targets in his analysis, is revealing.
Shannon was considering a specific case: a band-limited channel (an assumption about the limits placed on the signal) with Gaussian white noise (an assumption about the noise). Make different assumptions, and you'll get different results.
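For concreteness, a minimal sketch of that specific case, the Shannon-Hartley capacity of a band-limited channel with additive white Gaussian noise; the bandwidth and SNR figures below are invented purely for illustration:

    import math

    def shannon_capacity(bandwidth_hz, snr_linear):
        # Capacity in bits/s of a band-limited AWGN channel:
        # C = B * log2(1 + S/N), with S/N as a linear power ratio.
        return bandwidth_hz * math.log2(1 + snr_linear)

    # A 3 kHz telephone-like channel at 30 dB SNR
    # (linear power ratio 10^(30/10) = 1000)
    snr = 10 ** (30 / 10)
    print(round(shannon_capacity(3000, snr)))  # ~29902 bits/s

Change the noise model (non-Gaussian, non-white) or the band-limiting assumption, and this closed-form expression no longer applies; that is exactly the point about assumptions.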
PS: I see another recycled objection that I will pause on. As has already been pointed out in the onward linked and in earlier discussions that have obviously been strawmannised in the usual fever swamp sites, the first default is that a situation is best explained on necessity, and/or chance; in which case S = 0 by default. In short, the explanatory filter STARTS from the assumption that chance and/or necessity are the default explanations. It is when there is a positive, objective reason to infer specificity, that we assign S = 1. As I gave above, a few days ago, here at UD I found out the hard way that if a picture caption has a square bracket, the post vanishes. The function disappears over a cliff, splash. The same occurs with protein chains, and the same occurs in program code, etc etc etc. With car parts, you had better get the specifically right part, or it will not work; and of course the information in the shape etc of such parts can be reduced to clusters of linked strings, as say would occur in a part drawing file. This is not hard to figure out — save to those whose intent is to throw up any and all objections in order to dismiss what would otherwise make all too much sense.
...I'm honestly not sure what this has to do with anything I wrote. If it's a response to the question I asked about the dummy variable S, I was just asking for clarification, and this hasn't clarified it at all for me. Gordon Davisson
GD: Sorry, but your above response shows the root problem: you are cyclically repeating objections, instead of working forward from first principles and actual praxis of information theory and thermodynamics. Signals are intelligently created, noise is based on -- in general -- random, stochastic natural processes, e.g. Johnson, flicker, shot, sky etc. There is also of course cross-talk that is an unwanted effect of an intelligent signal, and there are ground loops with mains pickup etc. So, of course the metric H itself does not "distinguish" signal from noise; it is rooted in a situation where we already know -- and can measure -- the difference and use H in a knowledgeable context. In short, we know what noise looks like, and what signal looks like, and we can measure power levels of both. The eye diagram is a classic example of that, as I pointed out already. So is the good old "grass growing on the signal." H, being the average info per symbol, in effect, is then fed into the channel capacity. It is that capacity that is set by signal-to-noise ratio -- as already and separately identified -- and bandwidth. The very fact that we routinely mark the observable distinction, to the point where there is an embedding of the ratio in the bandwidth expression that was one of Shannon's main targets in his analysis, is revealing. GEM of TKI PS: I see another recycled objection that I will pause on. As has already been pointed out in the onward linked and in earlier discussions that have obviously been strawmannised in the usual fever swamp sites, the first default is that a situation is best explained on necessity, and/or chance; in which case S = 0 by default. In short, the explanatory filter STARTS from the assumption that chance and/or necessity are the default explanations. It is when there is a positive, objective reason to infer specificity that we assign S = 1. As I gave above, a few days ago, here at UD I found out the hard way that if a picture caption has a square bracket, the post vanishes. The function disappears over a cliff, splash. The same occurs with protein chains, and the same occurs in program code, etc. With car parts, you had better get exactly the right part, or it will not work; and of course the information in the shape etc. of such parts can be reduced to clusters of linked strings, as would occur in a part drawing file. This is not hard to figure out -- save to those whose intent is to throw up any and all objections in order to dismiss what would otherwise make all too much sense. kairosfocus
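As a minimal sketch of the routine measurement being appealed to here: estimate signal and noise powers as mean-square values of sampled waveforms and form the ratio in dB. The waveforms below are synthetic stand-ins, not real captures:

    import math
    import random

    def power(samples):
        # Mean-square power of a sampled waveform (arbitrary units).
        return sum(x * x for x in samples) / len(samples)

    def snr_db(signal, noise):
        # Signal-to-noise ratio in dB: 10 * log10(Ps / Pn).
        return 10 * math.log10(power(signal) / power(noise))

    # Synthetic stand-ins: a unit-amplitude sine as "signal",
    # small Gaussian samples as "noise".
    signal = [math.sin(2 * math.pi * k / 100) for k in range(1000)]
    noise = [random.gauss(0, 0.05) for _ in range(1000)]
    print(round(snr_db(signal, noise), 1))  # ~23 dB for these choices

Note that this presupposes we can observe the signal and the noise separately (or know enough about each to separate them), which is precisely the point under dispute in this exchange.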
KF:
Pardon, but information obviously embraces a spectrum of interconnected meanings.
I'd agree with that, but with the qualification that even though they're interconnected, they diverge quite a bit from each other.
A good place to start is how Shannon’s metric of average info per symbol, H, is too often twisted into a caricature that is held to imply that it has nothing to do with information as an intelligent product.
I'll disagree with that, although with a qualification: H doesn't distinguish between information from intelligent sources and information from unintelligent sources, so in that sense it has "nothing to do with information as an intelligent product". But information from intelligent sources contributes to H, so in that sense they do have something to do with each other. Essentially, H = (intelligent-origin information) + (unintelligent-origin information). So H counts information from intelligent sources, but doesn't distinguish it from information from unintelligent sources.
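To make that concrete, a minimal sketch of the H computation; it shows that H is computed purely from symbol frequencies, so an intelligently composed text and a random shuffle of the same characters score identically (the pangram is just a convenient example):

    import math
    import random
    from collections import Counter

    def shannon_H(symbols):
        # Average information per symbol, H = -sum(p_i * log2(p_i)),
        # in bits/symbol; it depends only on the frequency distribution.
        counts = Counter(symbols)
        n = len(symbols)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    text = "the quick brown fox jumps over the lazy dog"
    shuffled = "".join(random.sample(text, len(text)))
    print(round(shannon_H(text), 2))               # ~4.39 bits/symbol
    print(shannon_H(text) == shannon_H(shuffled))  # True: same frequencies, same H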
To see that this is plainly wrong, let us simply move the analysis forward until we come to the step where Shannon puts H to work in assessing the carrying capacity [C] of a band-limited [B] Gaussian white noise channel:
C = B log2(1 + S/N) . . . Eqn 1
See that ratio S/N? It is the ratio of signal power to noise power (usually quoted in logarithmic form, as dB). That is, it is premised on the insight that we can and routinely do recognise objectively and distinguish signals and noise, and can quantitatively measure the power levels involved in each, separately. (In a digital system, the Eye Diagram/’scope display is a useful point of reference on this.)
This is irrelevant, since the signal/noise distinction has nothing to do with whether the information is from intelligent sources or not. To see why this is, consider some examples of noise from intelligent sources. First, some noise from ID sources: if you look at the noise a modern radio communication system has to exclude, a lot of it is due to other radios using the same (or nearby) frequencies (or leaking radiation at unintended frequencies, etc). Generally, this is limited by FCC (and similar bodies') regulations that limit who's allowed to transmit at what frequencies and power, but in less-strictly-regulated frequency bands interference is common. Different Wi-Fi networks, for example, will interfere with each other (and with cordless telephones, and bluetooth devices, and...) if they're too close and using the same frequencies. You can also get interference from ID-but-not-meaningful sources; for instance, microwave ovens tend to leak radiation in the 2.4 GHz band used by 802.11b, g, and n. One can see similar things in non-radio contexts as well: for electronic signals, crosstalk (leakage of signals between nearby wires) can be a significant problem. As with radio interference, this is generally dealt with by a combination of designing the system to limit the amount of crosstalk, and designing the receivers to ignore crosstalk they do receive. Second, some signals from non-ID sources: radio astronomy comes to mind as an example where the receiver (the radio telescope) is designed to receive a signal from non-intelligent sources (e.g. stars), and exclude noise from terrestrial sources (radios, etc). In fact, the same is true of all other types of astronomical telescopes as well, and for that matter even normal cameras (depending on what you're taking a picture of). So then what is the signal vs. noise distinction? IMHO, it's really a distinction between the information you want (signal) and the information you don't want (noise); as such, it's not a distinction that originates within information theory, but a distinction imposed on it from the outside. If I tune my radio to station A, it's supposed to play whatever station A is sending, and any interference from stations B, C, D, etc is defined as noise. If I change the station to B, then B changes from noise to signal and A from signal to noise. Information theory can help with designing the radio receiver to obey these whims of mine, and evaluate its success in doing so, but it cannot tell me what those whims should be. That's the practical situation; what about the theoretical side of information theory? It is, if anything, even further from what you described. If anything, information theory seems to go out of its way to ignore the distinction between ID and non-ID information. Certainly, it does not limit information production to intelligent sources. As far as statistical information theory is concerned, the defining characteristic of an information source is that it is not completely predictable; whether its unpredictability is a result of intelligent choice or simple randomness does not matter. From the introduction to Shannon's original paper:
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.
And from section 2, The Discrete Source of Information:
We can think of a discrete source as generating the message, symbol by symbol. It will choose successive symbols according to certain probabilities depending, in general, on preceding choices as well as the particular symbols in question. A physical system, or a mathematical model of a system which produces such a sequence of symbols governed by a set of probabilities, is known as a stochastic process.[3] We may consider a discrete source, therefore, to be represented by a stochastic process. Conversely, any stochastic process which produces a discrete sequence of symbols chosen from a finite set may be considered a discrete source. This will include such cases as: 1. Natural written languages such as English, German, Chinese. [these are ID sources -GD] 2. Continuous information sources that have been rendered discrete by some quantizing process. For example, the quantized speech from a PCM transmitter, or a quantized television signal. [might or might not be ID, depending on the continuous source -GD] 3. Mathematical cases where we merely define abstractly a stochastic process which generates a sequence of symbols. [followed by examples] [i.e. an idealized random source, which is not ID -GD]
In my opinion, there are two primary reasons that information theory ignores intelligence and meaning: first, because it doesn't really matter to the communication system what (if anything) the messages it's passing mean or where they come from (see the first Shannon quote); and second, because we have extensive mathematical tools for describing and analyzing random processes, but no analogously powerful tools for dealing with intelligence and meaning; ignoring the distinction allows the theory to apply the random-based tools to situations where they don't strictly apply. I think of statistical information theory's treatment of intelligence in information sources as being a bit like theistic evolution's treatment of God: yeah, we know it's there, but the theory works best if we pretend it doesn't.

BTW, another argument I sometimes see made from Shannon's theory is that because the theory treats transmitters, receivers, codes, etc as intelligently designed (and I made the same assumption in at least one place above), these things must therefore be intelligently designed. This is wrong; Shannon assumed this because he was interested in analyzing intelligently designed communications systems, not because that was the only type possible. Since that was an assumption of the theory, it can't also be a conclusion without making the argument circular.

Finally, I haven't had a chance to reply on your ID Foundations 11 posting, so let me throw in a couple of quick requests for clarification here. First, in the equation "Chi = – log2(2^398 * D2 * p)", are D2 and p the same as Dembski's φ_S(T) and P(T|H), respectively? Second, can you clarify what you mean when you define "a dummy variable for specificity, S, where S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T"? Is it simply that S=1 when E is in T, and S=0 otherwise? Gordon Davisson
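As a concrete illustration of Gordon Davisson's point that Shannon's H tracks symbol statistics rather than meaning or origin, here is a minimal Python sketch (a made-up example of mine, not drawn from Shannon's paper or from any comment above). It estimates H = -sum p_i*log2(p_i) from observed symbol frequencies, and a flat-random string comes out higher than redundant, meaningful English text:

    import math
    import random
    import string
    from collections import Counter

    def shannon_entropy(text):
        # estimate H = -sum p_i * log2(p_i) from symbol frequencies in the sample
        counts = Counter(text)
        n = len(text)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    english = "the quick brown fox jumps over the lazy dog " * 40
    random_text = "".join(random.choice(string.ascii_lowercase + " ")
                          for _ in range(len(english)))

    # the meaningful (designed) text scores LOWER, because it is redundant;
    # the random string approaches the 27-symbol maximum, log2(27) ~ 4.75 bits
    print(shannon_entropy(english))
    print(shannon_entropy(random_text))

The measure is blind to which string was composed by a mind: it responds only to the frequency distribution of the symbols.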
Oops -- in Eqn 1 the S/N ratio is not in logged (dB) form, but in actual (linear) values. kairosfocus
I wrote my previous reply in haste, so now it is time to repent at leisure. It seems that a lot of the great leaps forward in certain fields of science start with a scientist who grasps something genuinely new. His self-confident opponents are working from an established paradigm with an extensive toolbox of arguments. They are the orthodoxy, and science is a game they think they have played and won. So our scientist's next task is to rigorously translate that insight into the language of science to show that his idea is indeed superior to what went before. All the way back in 14, I mentioned how Stephen Meyer describes how he set out to find a scientific way of describing and justifying the intuitive design inference. You need that to counter the "design is an illusion" objection, which usually finds its source in materialism. Materialism claims science as its own turf, so "beating them at their own game" means doing good science, but also defining things empirically in an unassailable way, or "in words of one syllable", so that the truth will out. (A poor choice of expression to be sure.) So now we have a way of scientifically describing designed objects, so that nobody can justifiably claim on scientific grounds that design is or could be an illusion. But now we come to the question of the designer, and sometimes the objection is raised that "consciousness is an illusion" or "free will is an illusion." And I had the feeling that a similar rigorous scientific definition for designers is lacking. Is a beaver dam a designed object? We have dFSCI for that, if only we had enough time to ponder the question. But if we accept that a beaver dam has dFSCI, is the beaver a designer? It seems it should be a much easier question to answer than it is at present... Maybe I'm Don Quixote tilting at windmills. Still, from underneath the twisted pile of what was my shining armour, I would suggest that it would be helpful if the salient points from our discussion could be crystallized and made available on the comment policy page or somewhere else for easy reference. englishmaninistanbul
That's an interesting idea. I can see how it might go through a number of iterations as each definition allows for or excludes something unintentionally. It could also prove impossible given enough hair-splitting. If "intelligence" is clearly defined enough, then perhaps the rest isn't really so difficult. I don't know that the word "designer" is important. The adjective is - it just needs a noun to go with it. What's important is the intelligence. The noun could be just about anything as long as that attribute can be applied to it. Function is central to Intelligent Design. I think it follows that the intelligent agent, or designer, is the entity that originates the implementation of a function. Too wordy? The intelligent designer provides the specified information required to implement a function. That would cover the function of arranging words to convey meaning, or the function of building dams to benefit both beavers and the ecology. In the case of the dams, having determined that they perform a function (for the beavers and/or the ecosystem) and that this function requires specified information (how to gather materials and assemble a dam, where, and when) then intelligent cause is inferred. That leaves open the question of whether the beavers possess it, but the determination does not hinge on the answer. If they do not, the information must have been provided to them. It is an observed reality that intelligent agents can produce unintelligent agents that follow intelligently formulated instructions, even if the instructions are so complex and the inputs so varied that the agents appear intelligent. So how's that: The intelligent designer provides the specified information required to implement a function. ScottAndrews2
englishmaninistanbul: It is not my purpose, and I hope not the purpose of any serious IDist, to "beat materialists at their own game". Our only purpose, I believe, is to do good science. We can happily leave the materialist game to materialists. And defining a concept in "words of one syllable" has never been, as far as I know, an epistemological requirement of good science. Finally, I believe you are really an optimist. Even if we could do what you ask (which we cannot do, and do not want to), materialists would argue just the same. I am afraid that you have not understood what "their own game" really is. gpuccio
englishmaninistanbul: I still don't agree with defining intelligence apart from consciousness. Consciousness is the primary condition. Intelligence has no meaning outside conscious cognition. If beavers are designers, that's because they are consciously intelligent. The computer is not intelligent. It is a depository of intelligent choices made by humans. Even if it "manipulates" the environment, it does so only because it has been programmed to do so, not because it understands, or represents, or cognizes, or has purposes. Computers, or other non-conscious, designed systems, are only passive executors of conscious plans devised by conscious beings. Beavers are conscious, and that's why the discussion about them is more difficult, as I said from the beginning, and as proven by the interventions here. But the original source of dFSCI is a conscious, meaningful, purposeful representation. The first beginning of dFSCI is the conception of a function. A machine cannot do that, unless passively, automatically executing instructions that have been written into it, without any awareness of their meaning or purpose. Therefore, I insist: only conscious intelligent agents have ever been observed to produce de novo dFSCI. Designed machines can output dFSCI according to the information that has already been inputted into them by their designers. As Abel emphasizes, functional information is only the result of choice determinism, of the application of choice to configurable switches, something that neither necessity nor chance contingency can ever realize. Machines work by necessity, or sometimes chance and necessity. They have no choice determinism. gpuccio
Oh absolutely. I'm just saying that when you infer design, you imply the existence of a designer, and although we all know intuitively what the word "designer" means in a general sense, people with a materialist world view might insist that that concept is undefinable scientifically and rubbish your entire theory. If you can find a way to define the concept of "designer" in words of one syllable that even they can't argue with then you have a chance of beating them at their own game. englishmaninistanbul
Thank you for that link kf. I had seen that page before, if only I'd read it more carefully when I first saw it. I would just like to quote one section because I think it epitomizes what we've been discussing up to this point:
There plainly are other cases of FSCO/I that point to non-human intelligent designers, albeit these are of limited [non-verbal] forms. Where this gets interesting is when we bring to bear the Eng Derek Smith Cybernetic Model of an intelligent, environment-manipulating entity: In this model, an autonomous entity interacts with the environment through a sensor suite and through an effector array, with associated proprioception of internal state that allows it to orient itself in its environment, and act towards goals. The key feature is the two-tier control process, with Level I being an in-the-loop Input/Output [I/O] controller. But, the Level II controller is different. While it interacts with the loop indeed, it is supervisory for the loop. That allows for projection of planned alternatives, decision, reflection on success/failure, adaptation, and more. That is not all, it opens the door to different control implementations, on different “technologies.” For instance, it could be a software entity, with programmed loops that allow an envisioned degree of adaptation to circumstances as it navigates and tacks towards impressed goals. That sort of limited autonomy could indeed be simply hard wired or even uploaded as an operating system for a robot or a limited designer.
This is the sort of thing I've been looking for, in the sense of an empirically warranted way of defining an "intelligence." I don't see how even a determinist can deny the validity of this. Humans, beavers and other animals, and even certain postulated computer programs do in fact qualify as "intelligent, environment-manipulating entities." Only intelligent, environment-manipulating entities have ever been observed to produce dFSCI. Personally, I think this is a more lucid position to hold than saying that "only intelligent agents have ever been observed to produce dFSCI", because it's much harder to come at it saying it's anthropocentric or open to interpretation or philosophically grounded or what have you. englishmaninistanbul
The example of the beaver demonstrates the variety of ways in which intelligent agency can be demonstrated. One could imagine a dam and build it with his own hands, or one could design a new "machine" or specialize an existing one for building dams. Either way the dam itself ultimately has the same designer. But the second case adds the design of the machines and has a purpose, not only for a single dam, but for a pattern of dam-building. It takes into account more than the immediate benefit of a single dam. But it shows the need to separate design from implementation and from replication. If someone writes an autobiography, how much difference does it make whether they typed, dictated, wrote by hand, or described events to a ghost writer? Or if someone designs an exercise machine, does it make a difference whether they assemble it with their own hands or specify a bunch of parts with assembly instructions for someone else to manufacture and someone else to assemble? Or what if several people collaborate? In both cases the implementation or even the method of design could be impossible to discern from the finished product. Can we still infer design in the case of the autobiography and the exercise machine based upon their specified complexity without identifying a designer or the implementation? I think we can and routinely do. ScottAndrews2
EII: Pardon, but all that is needed is an objective basis for recognising that intelligences exist and act. That has long since been trivially answered, as we are such, and arguably the likes of beavers are such too (observe the sketch maps). I here notice how such beavers adapt their dam-building to the circumstances of stream flow, which is quite a design feat. I do not know how beavers were born as dam builders, on empirical observational grounds, but I can see what they are doing and recognise the intelligence implied. GEM of TKI kairosfocus
GD: Pardon, but information obviously embraces a spectrum of interconnected meanings. A good place to start is how Shannon's metric of average info per symbol, H, is too often twisted into a caricature that is held to imply that it has nothing to do with information as an intelligent product. To see that this is plainly wrong, let us simply move the analysis forward until we come to the step where Shannon puts H to work in assessing the carrying capacity [C] of a band-limited [B] Gaussian white noise channel:
C = B*log2(1 + S/N) . . . Eqn 1
See that ratio S/N? It is the ratio of signal power to noise power (commonly quoted in dB, i.e. on a log scale). That is, the formula is premised on the insight that we can and routinely do recognise and objectively distinguish signals from noise, and can quantitatively measure the power levels involved in each, separately. (In a digital system, the Eye Diagram/'scope display is a useful point of reference on this.) Now, information theory is unquestionably a scientific endeavour, and BTW, one closely connected to thermodynamics. So, we have here a relevant context in which AN INFERENCE TO DESIGN IS FOUNDATIONAL TO THE SCIENTIFIC AND TECHNOLOGICAL PRAXIS. That is why in the note that is always linked from my handle, and which I have linked above, Section A, I noted:
To quantify the . . . definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the "Shannon sense" - never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver: I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1 This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that: I total = Ii + Ij . . . Eqn 2
In sum, we cannot properly drive a wedge between information in the Shannon sense and the issue of an intelligent signal and distinguishing that from noise. So also, the inference to design is foundational to information theory.

Of course, the metric used to quantify information has the odd property that, since it addresses redundancy in real codes that leads to a distribution of symbols that as a rule will not be flat-random, a flat-random string of symbols, which has no correlations between symbols, will have a maximal value of the Hartley or Shannon metrics for strings of a given length. This oddity has been turned into a grand metaphysical story, but in reality it is simply a consequence of how we have chosen to measure quantity of information in a string of symbols.

Going further, we can see a further relevant feature of information: it is typically used to carry out a purposeful function. That may be linguistic, as in posts in this thread. It may be prescriptive/algorithmic, as in object code for a computer. In either case, it intelligently restricts us to a narrow zone of relevant function (T) within a sea of possible configurations for a string of the given length (W); i.e. strings of symbols are confined by rules of vocabulary and the relevant syntax and grammar, then also the semiotics of meaning. Once an observed functional string [E] -- function being something that can often be directly observed or even measured -- gets to be of sufficient length that we observe its confinement to a narrow and sufficiently isolated zone of meaningful function, T, in a sufficiently wide space of configs, W, we have very good reason on the results of sampling theory to infer that the best explanation of that string E is design.

You may wish to work through my reasoning on that here; I will simply sum up for the solar system scale and cosmological scale:

Solar system (500 bits): Chi_500 = I*S - 500, bits beyond

Observed Cosmos (1,000 bits): Chi_1,000 = I*S - 1,000

In each case the premise is that a blind search of the config space -- inherently a high contingency setting -- driven by chance and/or necessity without intelligent guidance will be so overwhelmed by the scope of possibilities that, to near certainty, the sample will tend to pick up what is typical, not what is atypical. That is, the issue is operational implausibility on sampling theory: some haystacks are simply too large to expect to find a needle in, on accessible blind search resources.

Of course, in relevant contexts such as the origin of life or of major body plans, one may assert, assume or imply that the cosmos programs the search and cuts down the space dramatically. That is tantamount to an unacknowledged inference to design of the cosmos as a system that will develop life in target sites, then elaborate that life into complex and intelligent forms.

At this stage, of course, I have very little confidence that this will persuade those committed, ahead of any evidence, to a priori evolutionary materialism that they need to reconsider their thinking in light of the unwelcome evidence they have locked out, as I have pointed out here on.
All that such a likely response shows is the circle of question-begging, deeply ideologised philosophical thought that has donned a lab coat and seeks to inappropriately redefine science in its materialistic interests -- regardless of the damage done to science when, in such hands, it ceases to be an evidence-led search for the empirically anchored truth about our world and its roots. But more reasonable onlookers will be able to see what is really going on for themselves. GEM of TKI kairosfocus
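For readers who want to see the numbers, here is a small Python sketch of the two formulas in kairosfocus's comment: the Shannon-Hartley capacity of Eqn 1, and the Chi_500 threshold metric quoted above. The bandwidth, SNR and bit figures are illustrative assumptions of mine, not taken from any measured system:

    import math

    def channel_capacity(bandwidth_hz, snr):
        # Shannon-Hartley, Eqn 1 above: C = B * log2(1 + S/N),
        # with S/N as an actual (linear) power ratio, not dB
        return bandwidth_hz * math.log2(1 + snr)

    # e.g. a 3 kHz voice channel at 30 dB SNR (S/N = 10**(30/10) = 1000)
    print(channel_capacity(3000, 1000))   # about 29,900 bits/s

    def chi_500(i_bits, s):
        # the solar-system metric quoted above: Chi_500 = I*S - 500, where
        # S = 1 if the observed string is judged functionally specific, else 0
        return i_bits * s - 500

    print(chi_500(250, 1))   # -250: below the 500-bit threshold
    print(chi_500(750, 1))   # +250 "bits beyond" the threshold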
Oops it seems you already did. Any being who has conscious subjective experiences which bear for him the connotations of meaning and purpose, and outputs those representations as forms in a material system. Can it be streamlined any more than that? englishmaninistanbul
Oops. Read 26.2 first to understand the previous post (26.1.1) englishmaninistanbul
And we are another step closer... Do you agree with the statement that ID does not posit supernatural causes? If that is the case, ID must be able to work within a materialist/determinist paradigm. By "materialistically acceptable" what I meant was that somebody working from the hypothesis of causal determinism should not be able to reject a definition of a design agent out of hand. Otherwise are we saying that ID incorporates an a priori denial of materialism? I hope you didn't think I meant to sniff at your definition of free will, your comments on that subject and indeed all your comments in this discussion so far seem very reasonable. My criticism derives entirely from my trying to play Devil's advocate. Would you care to frame an empirical definition of a conscious designer? You seem to be better at it than me. englishmaninistanbul
englishmaninistanbul: My ideas: 1) Wrong. There is no epistemological need to have a "materialistically acceptable" definition of a design agent. Materialism is a philosophy, not a requirement for science. What we need is an empirical definition of a designer. My definition is fully empirical. You can avoid the "free will" part, if you prefer, and just define a designer as any being who has conscious subjective experiences which bear for him the connotations of meaning and purpose, and outputs those representations as forms in a material system. That definition is completely empirical, and based on the facts of subjective representations. There is no need for it to be "materialistically acceptable". If materialists are reasonable, and want to do science, they will accept it because it is empirical. If materialists are dogmatic, and stick to their ideology in spite of empirical reasons, they will reject it. That's fine by me. 2) We don't need it. See point 1. 3) I agree. Free will is a philosophical position. And so is materialistic determinism (if affirmed as a universal principle). But, as I said, the concept of free will is not necessary for the empirical definition of a conscious designer, although it is very useful for a more general philosophical theory of design. I apologize if I have included it in my definition when I answered you; I just thought you could be interested in the concept. gpuccio
Gordon: I fully accept your edict. Indeed, I think I have always tried to obey it. But please, consider that here we are in a blog, and we cannot always expect absolute terminological rigor from all. :) Well, at least on the two points you have mentioned, I would say that we have succeeded in "agreeing strongly without getting disagreeable". Which, after all, should not be difficult :) . gpuccio
Thank you everybody who has commented already in threads 14 and 20! Part of what I love about this site is that anybody with a genuine question can ask it and get respectful, well thought out answers. I want to open a new thread. 20 is basically a continuation of 14, however some of us are still commenting in 14 while 20 has digressed somewhat. I would just like to reiterate the specific topic I raised at the beginning so we can get a clear idea of what everybody thinks. If that's OK with you. After all this discussion, I am left with three opinions, which I am willing to change given persuasive arguments:

1) It is important to have a clear-cut, materialistically acceptable definition of a design agent. If you want to win people over from an opposing position it isn't good enough to have a well defined effect without an equally well-defined cause. It doesn't have to be a narrow definition, it can be inclusive; it just has to have a well-defined limit that can be tested by reductio ad absurdum. The answer to the "Are beavers design agents?" question that goes "Either the beaver is, or whatever made the beaver is" is perhaps correct on a certain level, but that approach leaves you with human beings as the only hitherto observed design agents, which makes things much harder for the ID camp than they need to be. A more general definition that regards as irrelevant the distinction between the design originator and its proxy, and is therefore flexible enough and yet rigorous enough to include beavers etc, might be an easier starting point to defend and also help to clarify the debate, redirecting it towards more salient issues.

2) We do not yet have a clear-cut, materialistically acceptable definition of a design agent. We have such definitions for designed objects themselves in the form of dFSCI etc, which is a very clearly defined and applicable concept. However the debate sparked by my question "Are beavers design agents?" merely served to demonstrate that as yet we have no such clear definition of design agents, at least not one that doesn't fall back on "free will."

3) Free will is not a scientifically defensible premise. I happen to agree with it, but until you can find anything solid against the determinist hypothesis, a scientific definition of a design agent that stipulates free will is a non-starter.

Again, these are merely my current thoughts; I hope to be convinced otherwise. englishmaninistanbul
Thanks Eric. When I say "disrupt the operation of known physical and chemical laws" I am trying to say that an intelligent agent, with no direct external causal factor, intercedes in his environment. For example, a falling object should normally keep falling until it hits the ground, due to the operation of a known law: gravity. But if I catch that falling object, thus preventing it from hitting the ground, I have disrupted the operation of that law, and the cause can be traced back to a discrete, autonomous agent: me. I don't pretend to have digested the rest of your post yet and I have to go to work now, so I'll have to come back later. englishmaninistanbul
I see you insist on the same issue.
You wish to trivialize my question, but regardless of your sarcasm, it remains a rather important unanswered question. There's a large part of the ID community that asserts that humans have mastered biological design because we have observed the physical implementation of the code, and because we have the means to transfer genes from one organism to another. That's equivalent to noticing that the Voynich manuscript is made up of "words" separated by spaces. It doesn't provide us with the means of translation. And in my hypothetical case, knowing how DNA words are delineated doesn't help us translate them into function. In the absence of such a Rosetta Stone I'm going to assert that ID cannot rest on the analogy with human design, because humans can't design complex biological molecules. No one ever has. The closest we can come is to set up industrial evolution machines that crank out billions of candidates and sieve them. Chemistry is faster than computation. Petrushka
We can understand the functional space of proteins well enough, and have enough computing power, that we can simulate the folding and activity of the sequence, and understand the single anomaly that prevents its functionality.
Ah, it would be nice to have an actual theory of design. What if it turns out that protein folding is mathematically complex, that is to say it diverges rapidly based on small differences in parameters? At any rate, would you agree that we do not currently have any theory of folding that would make computation practical? At the moment it would seem that the fastest way to solve the problem would be to use chemistry. c2 and c3 are ruled out because we haven't designed any realistically long sequences from scratch, and there aren't enough particles in the universe to store a database of functional sequences. My follow-up question is: given a functional sequence, how do you compute the number of functional bits? How can you be sure that some parts of it are mutable without changing function? Do you have a theory for determining whether the entire sequence is critical? Petrushka
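One published attempt at Petrushka's question is the "functional bits" (Fits) measure of Durston, Chiu, Abel and Trevors (2007), which estimates functional information from an alignment of known functional variants: sites that tolerate many substitutions contribute little, while invariant sites contribute up to log2(20) bits each. A toy Python sketch follows (the alignment is hypothetical, invented for illustration); note that it addresses the mutability worry only to the extent that tolerable variants have actually been observed, so sparse sampling inflates the estimate:

    import math
    from collections import Counter

    def fits(alignment):
        # Durston-style functional bits: at each aligned site, take the drop
        # from the maximal per-site uncertainty for amino acids, log2(20),
        # to the entropy observed across known functional variants
        n_seqs = len(alignment)
        total = 0.0
        for site in zip(*alignment):
            counts = Counter(site)
            h_site = -sum((c / n_seqs) * math.log2(c / n_seqs)
                          for c in counts.values())
            total += math.log2(20) - h_site
        return total

    # four hypothetical functional variants of a 5-residue stretch:
    # sites 0, 3, 4 are invariant; sites 1 and 2 each tolerate one substitution
    variants = ["MKTAY", "MKSAY", "MKTAY", "MRTAY"]
    print(round(fits(variants), 2))   # ~20 functional bits for this toy case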
Gaah, link fail. My mention of the Bennett et al article should've linked to http://www.scientificamerican.com/article.cfm?id=chain-letters-and-evoluti. Gordon Davisson
gpuccio:
Gordon: I want to thank you again for your reply, and from the heart. I deeply appreciate your contribution, and it was a pleasure for me to read it.
Thank you as well; I greatly appreciate your civility (in the face of my recalcitrance) as well. I'm glad we can disagree so strongly without getting disagreeable. Your replies raise some issues that deserve serious discussion, but unfortunately I haven't had much time to write... so I'll reply on a couple of minor, easy topics instead:
[From my summary of the status of various parts of evolutionary theory:] - Common ancestry (all — or at least many — organisms are related to each other): very strong for the “many” version, much weaker for “all”.[...] I must say that I am a little disappointed that you conflate here ID with the negation of common descent, as many do. I would have expected maybe some more explicit distinction, given the high level and pertinence of your discussion.
Actually, I agree completely with you here, and I didn't mean to imply otherwise. I included it in my list because it's an important part of evolutionary theory, some ID supporters dispute it, and it (together with the facts that mutation and selection actually occur) implies that mutation and selection have participated in shaping modern organisms (whether that participation was significant or not is another question). My favorite example to show the compatibility of ID and common ancestry is an article in Scientific American from 2003: "Chain Letters and Evolutionary Histories", by Charles H. Bennett, Ming Li, and Bin Ma. They use the example of variants of a chain letter (created & at least sometimes modified via ID) to show how phylogenetic reconstruction works, and the result is that yes, the chain letters show common ancestry, and the standard techniques for analyzing their history work fine. And on a completely different topic:
Let's keep it simple. When we speak of information in ID, we are considering what Abel would call "semiotic information". IOWs, we are interested in information that conveys a meaning.
I'll disagree here, because I see a number of different kinds (/definitions/whatever) of information used/discussed in ID contexts. Dembski, for example, looks at CSI and active information, neither of which match what Abel's talking about (at least as far as I understand it). I also see other kinds of information thrown in, usually without realizing that they are different kinds. For example, I see people claiming the importance of information in physics supports ID; but the information that shows up in physics is Shannon-information (and its quantum equivalents), which have nothing to do with conveying meaning. Confusing different meanings of "information" is rather a pet peeve of mine, so I offer an edict (for whatever it may be worth): speak of dFSCI, or semiotic information, CSI, or Shannon-entropy, etc as much as you want, but do not use the word "information" without qualifying what sense you're using it in. Ironically, the word "information" by itself has so many possible meanings that it is effectively meaningless. Gordon Davisson
Joe: I bow to your experience. But you don't really have to convince me. I said from the start that the complex and intelligent behaviour of animals is really a big mystery. Still, I think there is some value in the reasonings I have proposed. But I am really happy to leave the issue quite open. :) gpuccio
Petrushka: Ah, I forgot. So that would rule out Paley’s watch and nearly every object in the universe that is not a computer program or a genome. Well, let's say that it would rule out those things from my personal evaluation, obviously because of my limited ability. Others can certainly do that work better than me. gpuccio
Petrushka: I see you insist on the same issue. But I will answer you just the same. How much dFSCI does the non-functional version have? This is easy. dFSCI cannot be evaluated if we do not recognize a function. That does not mean that the sequence, being a close approximation of a functional sequence, has no functional information. It has potential functional information. So there are two possibilities:

a) We don't know that the sequence is almost a functional sequence. In that case we cannot recognize any function, and we cannot affirm any dFSCI (dFSCI has meaning only for a specific, defined function). That does not mean that the sequence has no functional information. It just means that we cannot recognize any function, so for our analysis it is a negative. One of the many false negatives.

b) If, instead, in some way (see later) we are aware that the sequence is a close approximation of a functional sequence, we can still define a function. We can, for example, define: "any sequence that is no more than one aminoacid distant from a functional sequence with this function, and can therefore be a close precursor of this protein." That is a valid functional definition, and we can measure dFSCI for that definition.

c) There are at least three ways that we can be aware of the true functional potential of the protein.

c1) We can understand the functional space of proteins well enough, and have enough computing power, that we can simulate the folding and activity of the sequence, and understand the single anomaly that prevents its functionality.

c2) We can already know the functional sequence, and compare the two sequences.

c3) We are those who created the non-functional sequence from the functional one.

In all three cases, we can define an appropriate functional condition for the apparently non-functional sequence. But I want to state again that, if we cannot define a function and measure dFSCI for the "non-functional" sequence, and then we can measure dFSCI for the functional one, in no way does that mean that the sequence has "gained" a lot of functional information by the change of one aminoacid, as I have already argued in my post #18. gpuccio
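gpuccio's definition (b) is easy to make operational in code. A toy Python sketch, assuming we already possess a list of sequences known to perform the function (the sequences below are invented for illustration): a candidate qualifies if it sits within one substitution of a known functional sequence. This implements only the recognition step; evaluating the actual dFSCI would still require estimating the size of the functional target, which this check does not attempt:

    def hamming(a, b):
        # number of positions at which two equal-length sequences differ
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    def is_close_precursor(candidate, known_functional, max_distance=1):
        # definition (b): the candidate counts as a potential precursor if it
        # is within max_distance substitutions of a known functional sequence
        return any(hamming(candidate, f) <= max_distance
                   for f in known_functional)

    known = ["MKTAYIAKQR"]                          # hypothetical functional protein
    print(is_close_precursor("MKTAYIAKQW", known))  # True: one substitution away
    print(is_close_precursor("MKAAYQAKQW", known))  # False: three substitutions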
That is very, very interesting. But does it mean that they possess their own intelligence, or are they acting on programming/instinct? Ultimately I can't claim to know the answer. But it still strikes me as incredibly advanced programming, simply because they do one thing very well but not much of anything else. What do they do if removed from an environment where they can build dams? Do they exhibit the ability to analyze other environments or plan solutions to other problems? I'm not trying to limit them by using the term "programming." Perhaps if we understood instinct better we might find that the analogy doesn't fit at all. If that's what it is, it's far more advanced than anything that runs on a processor. They are using actual brains. They apparently are able to exercise foresight with regard to very specific activities. ScottAndrews2
Like I said, some in the field consider it (consciousness) an emergent property, but not all, and although Searle may have identified himself with that philosophical position, others who are strong AI researchers do not. To quote Wikipedia on Strong AI:
There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI which play a major role in science fiction and the ethics of artificial intelligence: consciousness: To have subjective experience and thought. self-awareness: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts. sentience: The ability to "feel" perceptions or emotions subjectively. sapience: The capacity for wisdom. These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the rights of animals. Also, Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity.[13] It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine, or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.
Take it from someone working in this field, the role of consciousness in strong AI is not clear and a 'strong AI' in the sense that it is used by researchers in this field does not necessarily require consciousness. GCUGreyArea
gpuccio, ScottAndrews, et al- RE beavers as designing agencies- A little background- I deal with those not-so-little flat-tailed engineers on a daily basis. They have a habit of damming up a culvert down the street, which if left unchecked could cause serious flooding. There are many streams and creeks with beaver dams in my general area. These animals inspect and repair their work. They know when there is an issue and they set about fixing it. They cut and fell trees with the accuracy of a top-notch logger. They know which way the tree needs to fall and they make sure it falls in that direction. I challenge any of you to actually get out and watch what these animals do. Granted it will take some time but it should clear up some of your misconceptions- these guys work out problems and with what they have to work with they are pretty impressive. Joe
gpuccio, I think you mentioned this earlier - the information content of the programming for building dams is perhaps even better to look at. As with an orbital web, I try to imagine, having a machine capable of processing sensory inputs, movement, and manipulation of objects, how much programming would it take for it to analyze a space and construct a web or a dam? The latter is interesting because it involves both the gathering of materials and cooperation between multiple beavers. Also interesting are the numerous benefits beaver dams provide to the ecosystem rather than just to themselves. This is a pattern that repeats throughout nature and appears to exhibit foresight. Bacteria decomposing plant material have no interest beyond the immediate act of consuming something. And yet they are the reason why leaves can fall from trees without piling up indefinitely. If changes to individual organisms are explained in terms of benefit to themselves, how does one explain the incomprehensibly interwoven relation between living things in an ecosystem? Individual entities attempting to survive the best they can could never possess the foresight to contribute to an ecology which would facilitate their reproduction for countless generations. Granted, there is no specific "perfect" ecological balance. But how is any initial balance achieved? If one creature consumes excessive resources until none are left it eliminates both itself and more efficient variations equally. How, then, could any living thing evolve to participate in an ecology rather than competing to exhaust all resources until everything died? (I wasn't starting with that thought - I just went off on a tangent.) ScottAndrews2
GCUGreyArea: I found the following on Wikipedia: Searle identified a philosophical position he calls "strong AI": The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[19] The definition hinges on the distinction between simulating a mind and actually having a mind. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind." That would seem in line with my use of the term. gpuccio
Ok, as I also pointed out, you will find plenty of people working towards strong AI who do believe consciousness is an emergent property, but there are others who don't and the field of strong AI research encompasses both views - and causes many heated debates amongst my peers! GCUGreyArea
GCUGreyArea: As you have seen, I have chosen for the moment to qualify explicitly the term. I appreciate your suggestion, however. I think that I have found the term "strong AI theory" used as I used it, and contrasted to "weak AI theory" (maybe in Penrose), but I could be wrong. I will check that as soon as possible, and if really my use is incorrect, I will change it. Anyway, thank you for your input. gpuccio
...the measurement of functional information in analogic objects, while certainly possible (as KF often correctly reminds us), is more difficult, at least for me.
So that would rule out Paley's watch and nearly every object in the universe that is not a computer program or a genome. I'm just thinking out loud here, but I spend more time than I should thinking about Douglas Axe's experiment. Suppose you have a gene comprised of, say 1000 base pairs. Suppose it is mutated by a change in one base pair, so that it is completely non-functional, even lethal. How much dFSCI does the non-functional version have? Suppose you are given the sequence for this gene in its non-functional version. Can you propose any way of determining how close it is to functionality, in the absence of any close relatives? Suppose you were given a million such strings, and all but one are completely random. Could you tell which one is just one mutation away from being functional? Petrushka
2. My personal claim is that strong AI theory (intended as the statement that consciousness can arise as an emergent property of some specific configuration of objective matter) is logically and empirically unsupported, and should be rejected, or at least considered as what it is: a bizarre theory, neither interesting nor useful.
As I have pointed out previously, you should not be referring to Strong AI theory as the statement that consciousness can arise as an emergent property of some specific configuration of objective matter. Strong AI is a research goal that defines an area of scientific investigation; not all the researchers who believe it is achievable also believe that it is directly about consciousness, or that a system that would qualify as exhibiting 'strong AI' needs consciousness. Perhaps a more accurate term, if you want to use it, would be 'the emergent consciousness hypothesis'. Of course you can choose to qualify your use of the term 'strong AI theory' in the way you have, but you would be misrepresenting what the term Strong AI actually refers to. It does not refer to a theory of consciousness. GCUGreyArea
Joe: I have nothing against establishing that the dam is designed, be it by counterflow or any other argument. I cannot do that myself, because I have always concentrated on the evaluation of dFSCI in biological information. I am not discussing "ultimate causes" here, but designers. Maybe the designer of some specific thing is an "ultimate cause" (I suppose he is, if he is God). Maybe not. If I design a post such as this, I don't usually consider myself an "ultimate cause": just the cause of the post. The problem is: the designer is the agent who inputs for the first time the necessary functional information. He is the agent who "configures switches", to use Abel's terminology. He is not the person, or object, that copies or executes the information. The designer is the original source of that information. A computer printing Hamlet is not the designer of Hamlet. And neither is the actor who plays the role (even if he can be considered the designer of the specific features of that interpretation). So, if the beaver's consciousness is not the source of the information to build the dam, as seems likely, and it is only an "actor" playing a part, even if with personal features, then the beaver is not the designer of the dam, even if it is certainly its proximate cause. I hope I have expressed my thought clearly. gpuccio
gpuccio, I would think we could establish that the dam exhibits counterflow and go from there. And in the end we may be able to only determine and study the proximate, rather than ultimate cause(s). Joe
I think a more interesting question would be, is there a purely objective way to determine whether such artifacts as beaver dams, honeycombs, nests, and such are designed. Without having any information at all about their history or any information at all about possible makers?
Gee whiz, golly- where have I heard that before?
Intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?
Guess who said that (and also answered it) Joe
ScottAndrews2: If we can ascertain that the dam has CSI (I really don't know, see my answers to Eric and Petrushka), then we can infer that it is designed. The problem remains: is the designer the beaver, or the designer of beavers? To answer that, we need further information. gpuccio
Petrushka: I don't know if CSI is exhibited by dams, honeycombs and so on, because, as I have said many times, the measurement of functional information in analogic objects, while certainly possible (as KF often correctly reminds us), is more difficult, at least for me. That's why I stick in the discussion to digital information. In these cases, as I have suggested, we could measure the digital information in the genome (if it is in the genome) that allows the realization of those objects. That would be simpler, and certainly specific. Unfortunately, we have at present no idea of how those "instinctive" functions are implemented in the genome (or wherever else). So, we have to be patient, and stick to our "simpler" protein gene model. Whether the animal is conscious or not is relevant, but what is really relevant is whether the animal is the conscious agent that originates the functional information. If the functional information is in the genome, then the animal, conscious or not, is not the designer, but the dam, or whatever else, is certainly designed. gpuccio
Eugene: 1. You are right, the fact that we have no idea of what "instinct" really is makes the discussion difficult (see also my answer to Eric). But it is true that such instincts, whatever they are, seem to be transmitted within the individual species, and always implement the same type of functional output (with some variability that can be attributed to the environment, but also to some individual intervention of the animal). Beavers build dams, bees build hives, butterflies migrate, and so on. It seems that beavers cannot learn other types of machine building. In the same way, the behaviour seems to be mainly inherited, even if certainly some stimulation from the environment and active learning is implied.

Whether intelligence necessarily means consciousness depends on how we define the word. For me, it is better to define intelligence as the cognitive meaningful representations of a conscious being. Obviously, those representations can be "outputted" to material systems, like a computer, a book, and so on. Then we can say that there is "intelligence" in the computer or the book, but that only means that intelligent representations have outputted a form to a non-conscious material system. So yes, in my terminology only conscious beings can be intelligent.

Collective intelligence is a deeply interesting point, but I would say that at present it is a complete mystery, even more than individual instinct, especially if you consider that huge collective intelligence seems to operate, for instance, in bacteria. I would really leave that part alone for the moment, because I don't think we have any data to even propose an answer. Anyway, I am pretty sure that in all those manifestations, the role of the initial input of information from the original designer of life is overwhelming.

2. My personal claim is that strong AI theory (intended as the statement that consciousness can arise as an emergent property of some specific configuration of objective matter) is logically and empirically unsupported, and should be rejected, or at least considered as what it is: a bizarre theory, neither interesting nor useful. When I discuss science, I am never interested in proving that something is "impossible" in order to refute it. It is often impossible to prove that something is impossible. That does not make that something real. In science, we reject theories because they don't explain what they should explain, or because they are logically inconsistent, or because they are really unsupported by facts. Luckily, more or less, all of those conditions are satisfied both for neo-darwinism and for strong AI theory (intended in the sense that I have specified). gpuccio
Technically I agree. What if we were observers who saw beavers building dams but knew nothing else? The dams are the subject of the examination, not the beavers. ScottAndrews2
Eric: I would answer yes to all your questions. The important point is that we have no idea of how animal instinct is implemented. That would allow a more specific discussion, also about the complexity of the output. IOWs, we could measure not so much the functional complexity of the dam (which is probably more difficult to evaluate), but the functional complexity of the genetic (or other) information necessary to have the beaver build a dam, and that would be easier, especially if that information is in digital form (which is likely). gpuccio
Again, thank you all. After some thought, I would like to take another step towards crystallizing exactly what I'm getting at. (Even I'm working this out as I go along.) Someone with a purely materialistic worldview will tell you that free will is an illusion, that when we eventually come to an understanding of the inner workings of the brain, we will discover that all our thoughts and actions are produced by mere chemical and electrical reactions interacting like so many wheels and cogs. Even someone who believes in God, if he also believes in fate, might actually find that such a hypothesis fits in quite well with his worldview. Since nobody actually knows precisely how the brain works, and therefore can produce no empirical evidence to prove or disprove such a viewpoint, we are left with philosophy and religion. Perhaps consciousness is something that can be regarded as an undisputed fact, because we all experience it. However I fear that "free will" falls short of that ideal. And if free will is disputable, you have no scientifically watertight way of arguing that there is any fundamental difference between a human, a beaver and my theoretical computer program. As far as I can see. I suppose you could say that my previous, ham-fisted attempt at defining an "intelligent agent" was an attempt to bypass this problem. englishmaninistanbul
GP, 1. It would be extremely interesting to clarify what constitutes "instinct" in terms of information. I hazard a guess that designing intelligence can be attributed to animals, although 'simple' animals have more collective intelligence than individual (such as termites and bees, vs. wolves and apes). But as we go up the complexity ladder, once collective intelligence is observed, it is consistently present. E.g. humans, apart from high individual intelligence, also have powerful collective intelligence (e.g. brainstorms). I don't think that intelligence necessarily means consciousness, but I may be wrong. 2. Indeed not. The blind watchmaker videoclip on YouTube caused me trouble until I realised that the algorithm there used the available, albeit piecemeal, information about the broken watch, which 'by magic' assembled itself in a number of generations (and even improved upon itself, if I remember rightly). So the information was essentially already there. While it can be impressive, it does not refute the claim that strong AI is impossible. Eugene S
Are beavers design agents?
I think a more interesting question would be: is there a purely objective way to determine whether such artifacts as beaver dams, honeycombs, nests, and such are designed, without having any information at all about their history or any information at all about possible makers? The ability to distinguish design, lacking all information about history, seems to be an essential feature of all ID paradigms. If animal products are considered to be designed, then it is irrelevant whether the animal is conscious or not. The products are obviously purposeful, and sometimes quite complex. Many of them cannot easily be replicated by humans. Petrushka
gpuccio: I agree that the question of animal behavior and instinct is difficult. If beaver dams meet minimal criteria for design detection (although we could perhaps debate the point), would you then say that beaver dams were designed, but not by the beavers? In other words, would we look to the beaver's instinct and conclude that whoever programmed that instinct is the real designer of the dams? I realize we don't have a great handle on what instinct is or how it is controlled, but on the reasonable assumption that it arises from the beaver's physical makeup -- receiving signals from the environment, initiation, feedback and control systems within the organism -- would it be reasonable to say that whoever set up the system and the programming is the real designer? If so, then in that sense perhaps we are suggesting that animal instinct is really another form of programming, similar to the computer (#2 in your response)? Eric Anderson
gpuccio: I think you've outlined some very thoughtful and helpful ideas in terms of defining what an intelligent agent is. Thanks. Eric Anderson
Thanks, englishmaninistanbul:
Leaving that aside, as I understand it the logic goes, “If DNA does in fact have a designer, it is logical to assume that that designer would not have left whole chunks of genome with no function.” Playing Devil’s advocate, I still have trouble seeing why that assumption is any different to the “no designer worth his salt” argument that is so often wielded against ID, which, it is effectively argued, is fundamentally philosophical in nature and therefore scientifically moot.
Two points:

First, yes, there is an aspect of expectation with regard to designed things. Designed things can break down or degrade; they can also be imperfect or strange or unusual or quirky from the outset. But in our uniform and repeated experience, things designed are heavily characterized by function, not junk. We also have no experience with any highly complex integrated system that happens to function just wonderfully while peeking out from a sea of detritus, but that is what the junk DNA proponents would have us believe. It goes against experience. So yes, there is an expectation from what we know about other designed things. This is not the only reason to suspect little junk, however. In the case of junk DNA there are also other reasons to suspect function, as noted in my last comment (particularly point 2) at the end of this thread: https://uncommondescent.com/junk-dna/vidthe-debate-that-never-was-craig-vs-dawkins-junk-dna-does-show-up-though/

Second, the ID view is not analogous to the philosophically-based "bad design" argument. No ID proponent is arguing that design has to meet some artificial expectation of "perfect" design. Further, machines break and degrade. Bad design, in the limited sense of non-functional design, is an empirical issue that can be studied. "Bad design" in the sense of arguing that there should be "perfect design", or "no designer worth his salt" comments, are non-scientific philosophical judgments. ID acknowledges the possibility of imperfect design. ID acknowledges the possibility of breakage, etc. ID doesn't try to get into the designer's head and decide whether the designer is "perfect" or even argue that there is such a thing as "perfect" design. Our uniform and repeated experience suggests that it is logical to assume an overall modicum of function, and it is perfectly valid to infer from that experience that things designed will exhibit overall function. This inference is not parallel to the typical bad design argument, which tries to get into the mind of some putative creator and say what the creator would or would not do. Furthermore, the bad design argument in biology has a very bad history, as it has almost universally turned out to be wrong as we learn more.
An intelligent agent is a discrete entity that autonomically acts to disrupt the operation of known chemical and physical laws.
I'll defer to gpuccio or Upright Biped for a moment on the definition, but just want to mention that there is no reason to suggest that a designer would "disrupt" the operation of known chemical and physical laws. When an aerospace engineer designs and builds an airplane to soar above the Earth, he doesn't disrupt the law of gravity. He uses it, in conjunction with other known principles of aerodynamics and attributes of thrust, lift, weight, etc., to achieve a purpose. I don't mean to be pedantic on this point, but I've seen lots of people (including Denton early on, unfortunately) fall into the trap of thinking that design is somehow unpalatable in certain situations because it would be an act in violation of the "laws of nature," so I just want to make sure your definition isn't heading that direction. Eric Anderson
Scott, I agree that when YOU designed something, YOU are the designing agency, even if you design it to perform some task and it performs it. And yes, I can see that can be extrapolated to say that beavers are not the designers of their dams and lodges; whatever designed them is. And I can see that would also apply to humans- we are just programs run amok. Hey, that way no one claims responsibility- but I digress. Say you are in a new world- a new land, new continent- you are an explorer. Your land/ world/ continent doesn't have beavers nor any dam-building animals besides humans. You are trudging along in this new place and you come across a stream in a river bed. Knowing something about rivers and streams, because they exist where you live, you go upstream just because you are curious as to why a stream is in a river bed- is the source drying up? Maybe there is a tribe up there. So you go about a mile and there it is- a huge dam holding back a lake of water. The dam is a crude construct made of mud and wood- branches and tree trunks, all of which end in a nice shaven point. You look around and see tree stumps that also have nice, neat shaven points. Where you live there are dams that hold back rivers too. Some are made up of stacked logs plastered with mud. Do you A) suspect there was some agency that built the dam? or B) suspect some non-agency didit? Joe
ScottAndrews2: I see that you had already answered in the same line of thought as my post, although with remarkably greater synthesis. :) I agree with your points. gpuccio
englishmaninistanbul: Good questions.

1) The first one is more difficult. Animal behaviour often generates complex outputs that are obviously designed. The problem is: is the individual animal the designer? I see it more or less this way. Personally, I have no doubts that those animals are conscious, although probably in a rather different way than humans. So, they could be designers. But it is also true, as far as I can understand, that complex animal behaviours and outputs are repetitive. Indeed, they are generally attributed to "instinct", whatever that may mean. It is true that animals, in general (there may be exceptions), are not capable of generating truly new dFSCI, implementing truly new functions. Certainly, they usually lack structured language. They don't design new types of machines. That has always been recognized as one of the fundamental differences between animals and humans. That's also the reason why humans have a detectable history, where new dFSCI accumulates, is transmitted from generation to generation, and is constantly added to. IOWs, animals IMO have conscious representations, but I am not so sure that they really have meaningful complex cognitions and truly free purposeful actions. If they are mainly guided by instinct (whatever it is), then the complexity should be found in the basis for that "instinct". Now, we don't really know what that basis is. It is reasonable to hypothesize that it derives from the genome, or from other epigenetic information. In that case, the animal would not really be the designer, even if it is conscious.

I will be more clear. We humans have in our body a specific algorithm to increase the affinity of the antibody response after the primary immune response. It is an amazingly intelligent algorithm, a wonderful example of protein engineering. And it probably outputs new dFSCI (the more specific antibodies), even if that new information is derived from preexisting information: the genetic algorithm, plus the specific information about the external epitope stored in antigen presenting cells. Now, would we say that the individual human is the designer of new dFSCI because his body is producing a new, specific antibody? Not really. As we have seen, it is not that the individual human consciously generates that output. Information already present in him, plus information derived from the environment, accomplishes the task. In this case, the judgement is easy, because the individual is in no way conscious of what is happening. But the same reasoning can be applied also to partially conscious behaviours. Even movement of the body is in itself a complex function (very complex indeed), and we are certainly conscious of the fact that we move, and partially of how we move. But the huge amount of information that allows us to move was not created by us.

In all these cases, the output of complex function can be outstanding, but there is a common feature: the function is repetitive. Our amazing immune algorithm can very efficiently engineer more specific antibodies, but it can do only that. Algorithms, indeed, cannot IMO create new dFSCI, except in the measure of the information they already have, plus any information from the environment that they are directly or indirectly programmed to process.
The identification of new functions, and the creation of complex information to implement them, seems to be a prerogative of conscious intelligent beings, with at least the human level of meaningful maps of reality, and of purposeful, free actions. So, my humble opinion is that no, beavers are not designers in the sense I have defined. But the point is not easy, and I am ready to discuss it.

2) The second one, instead, is easy. The computer is not conscious. It is not a designer. Whatever functional complex output it can generate is the result of the functional complexity in it, designed by its designer, plus any information derived from the environment that the designed system is prepared to process. That, as I have tried to say in the previous point, creates fundamental constraints on what the system can do. For instance, any computerized system can never truly "recognize" a new function, because it cannot understand what "function" means. It is not aware of purposes, so it certainly cannot recognize them, unless it has been objectively programmed to recognize some pattern as a "purpose". But the main point is: a computer is not conscious, and never will be. It has no subjective representation. Therefore, it can never be a designer, in the sense I have defined, because my definition of design is: "A process where a conscious intelligent purposeful being outputs, directly or indirectly (maybe through a computer), specific conscious intelligent purposeful representations to a material system." gpuccio
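The point above about the immune algorithm can be made concrete with a toy sketch (Python; illustrative only, not anyone's actual model, with positional matching as a crude stand-in for binding affinity). Note that everything the loop "creates" is already implicit in its two inputs: the fixed selection procedure (the designed algorithm) and the epitope string (the information taken from the environment):

    import random

    ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acid letters

    def affinity(antibody, epitope):
        # Crude stand-in for binding affinity: count of matching positions.
        return sum(a == e for a, e in zip(antibody, epitope))

    def mature(epitope, pop_size=50, generations=200, seed=0):
        rng = random.Random(seed)
        n = len(epitope)
        # "Primary response": a repertoire of random sequences.
        pop = ["".join(rng.choice(ALPHABET) for _ in range(n))
               for _ in range(pop_size)]
        for _ in range(generations):
            # Somatic hypermutation: one random substitution per clone.
            mutants = []
            for ab in pop:
                i = rng.randrange(n)
                mutants.append(ab[:i] + rng.choice(ALPHABET) + ab[i + 1:])
            # Selection: keep the clones that "bind" the epitope best.
            pop = sorted(pop + mutants,
                         key=lambda ab: affinity(ab, epitope),
                         reverse=True)[:pop_size]
        return pop[0]

    # Converges toward the epitope-matching sequence. Remove either input
    # (the scoring rule or the epitope) and nothing functional emerges.
    print(mature("MKTAYIAKQR"))

Whether such loops can scale up to genuinely new functions, rather than refining the one they were built around, is exactly the point under dispute here.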
Joe, Your last question is the one that matters, and the answer is yes. I wrote a simple program that creates huge, very difficult mazes. The initial random lines it draws create its environment, and then it responds to that environment by filling in the rest. I am the designing agency, not it. ScottAndrews2
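Scott's program isn't shown, so here is a guess at the general kind of thing he describes: a standard randomized depth-first maze carver (Python). The random choices are the program's "environment"; every other step is the programmer's fixed, deterministic response to them:

    import random

    def carve_maze(w, h, seed=None):
        rng = random.Random(seed)
        step = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}
        opposite = {"N": "S", "S": "N", "E": "W", "W": "E"}
        # Every cell starts with all four walls intact.
        walls = {(x, y): set("NSEW") for x in range(w) for y in range(h)}
        visited, stack = {(0, 0)}, [(0, 0)]
        while stack:
            x, y = stack[-1]
            # Unvisited neighbours still inside the grid.
            options = [d for d in "NSEW"
                       if (x + step[d][0], y + step[d][1]) in walls
                       and (x + step[d][0], y + step[d][1]) not in visited]
            if not options:
                stack.pop()              # dead end: backtrack
                continue
            d = rng.choice(options)      # the only "creative" input
            nx, ny = x + step[d][0], y + step[d][1]
            walls[(x, y)].discard(d)                # knock the wall down...
            walls[(nx, ny)].discard(opposite[d])    # ...from both sides
            visited.add((nx, ny))
            stack.append((nx, ny))
        return walls   # a perfect maze: one path between any two cells

    maze = carve_maze(40, 40, seed=1)

However intricate the resulting mazes look, the maze-ness traces back to the loop someone wrote, not to the random numbers it consumed.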
Are beavers design agents? I don’t think so.
Could beaver dams and lodges exist without them? Beaver dams and lodges can be traced back to the activity of the beavers.
They do the same thing over and over without knowing why.
And you talked to the beavers to make that determination?
If they were truly design agents, one would expect them to design something new once in a while.
If what you are currently designing works for your purposes, there isn't any reason to change.
The “programming” they enact which enables them to analyze a site, collect materials, and build what they do is impressive, but there is no evidence that at one time beavers did not do it and then used their own intelligence to formulate the idea.
Is that a requirement to be a designing agency? Joe
Are beavers design agents? I don't think so. They do the same thing over and over without knowing why. Their work benefits others unbeknownst to them. If they were truly design agents, one would expect them to design something new once in a while. The "programming" they enact which enables them to analyze a site, collect materials, and build what they do is impressive, but there is no evidence that at one time beavers did not do it and then used their own intelligence to formulate the idea. Am I splitting hairs? You could call them agents of design, even designers, but not intelligent designers. The same is true of the computer program example. Regardless of its behavior, its seemingly intelligent output is a direct effect of someone's intelligent input. ScottAndrews2
GD, Okay, I continue to wait. Meanwhile GP has said many useful things on the linked information theory stuff. And I previously linked on the issues and how they are connected. I simply highlight here that isolated, identifiable, unrepresentative zones T in a large config space W -- 10^150 to 10^300 configurations being the lower bound on "large" -- will be such that the best explanation of an observed case E coming from T will be design, e.g. posts in this thread. Sampling theory will tell us why -- note, no need to work out detailed probabilities; we are talking about searching large spaces on chance plus blind necessity. A small sample -- the size of one straw to a cubical haystack 3 1/2 light days across is typical -- will with overwhelming confidence come up with straw, even if a whole solar system lurks within. And this is very closely related to the grounding of statistical thermodynamics and the second law, which points out why the spontaneous direction of change is towards the statistically overwhelmingly dominant clusters of microstates. I add that T can be defined on function. Just yesterday I discovered a WP bug: if you put a square bracket in a caption for a picture, the post will break and vanish. That is a cliff of a boundary for the island of function! KF kairosfocus
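The arithmetic behind KF's haystack picture is easy to check. A sketch, assuming (for illustration only) a 500-bit configuration space and an absurdly generous sampler: every atom of the solar system, roughly 10^57 of them, each testing one configuration every 10^-14 seconds for 10^17 seconds:

    from math import log10

    space = 2 ** 500                      # ~3.3e150 configurations
    samples = 10**57 * 10**14 * 10**17    # = 10^88 configurations examined

    print(f"fraction sampled: about 10^{log10(samples / space):.0f}")
    # -> about 10^-63: a vanishingly thin slice of the space, which is why
    #    such a sample reliably comes up "straw" (the dominant bulk) rather
    #    than any small, isolated zone T.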
Yes beavers, ants, termites, bees- are all designing agents. As for your computer it could be traced back to YOU and you are a designing agent. Joe
Thank you very much! I finally have that definition I've been looking for (and much better than my attempt): Design is produced by the agents who have all these three things: a) Conscious representations (consciousness) b) Meaningful cognitions (cognitive maps of reality, intelligence) c) Purposeful actions (originated by free will). I think it might be helpful in countering the "design is an illusion," "consciousness is undefinable," "free will is an illusion," "if you say there's a designer, who designed the designer?" sort of objections if such a definition were standardized and given a term like "design agent." Whether or not this is in fact needed would probably also be a subject for fruitful discussion. :) I'm afraid I've only just been able to catch up with everything else that's been going on in this thread since my last post, and I'm still mulling it over. But before the iron goes cold, I'd like to strike it with two questions: 1) Are beavers design agents? 2) If I build a computer that can identify the objects around it, work out what would happen if it did what, and has a randomized "purpose determiner" to decide what it will do next, would that qualify as a design agent? englishmaninistanbul
englishmaninistanbul: I find now the time to comment a little on your post #14, which touches some fundamental issues. You say: Just having a little think to myself, it occurs to me that part of the trouble is in defining "consciousness" and "agency."

Correct. I will start with "consciousness". I have defended for a long time here a purely empirical definition of consciousness. According to it, consciousness is the simple fact that we have subjective experiences. Why is it a fact? Because each one of us perceives his own subjective experiences, and knows that they exist. All other "facts", about outer reality, science, and so on, come after that. They happen in the subjective experience of each one of us. So, consciousness is "the mother of all facts".

What about consciousness in others? Is it a fact? No. It is an inference by analogy. We see other people, we see that they are similar to us (they have a similar body, behave like us, speak like us, share with us information about their own conscious experiences). On that basis, by analogy, we are convinced that they too have subjective experiences similar to ours. But we don't perceive those experiences directly. We infer them. It is, however, a universally shared inference, on which we build our whole map of reality. (Solipsism is an exception, but it does not really change the general scenario.)

So, consciousness is part of reality, an empirical fact that cannot be denied. Must we "explain" consciousness? I don't think so. Do we "explain" what matter or energy are? What a force is? No. They are parts of our map of reality, and we "explain" other things by them. The same is true of consciousness. We cannot "explain" it as the result of some objective configuration of objective matter. No such theory could ever explain "the hard problem of consciousness": why subjective experiences exist. So, what can we do about consciousness? Many good things: a) Accept it as a fundamental part of reality; b) Describe it, its formal properties, its rules and laws; c) Investigate the relationship between consciousness (subjective experiences) and matter (including body, brain, and all objective entities).

Some thoughts about b). Our consciousness has some distinctive formal properties, which have generated the emergence of words and concepts that describe those properties, and that can never be defined or understood without any reference to the subjective experiences that have generated the concepts. IOWs, those concepts and words merely "describe" the form of subjective experiences, and in no way "explain" it on a purely objective basis.

The first, fundamental property of subjective experiences is that they are modifications related to a perceiving "I". While every representation changes, the I is perceived as the same. That would require some more specifications, because of the wrong use that is often made of the word "I", but for the moment this will be enough.

The second, fundamental concept is cognition. And the main expression of cognition is the concept of "meaning". That is the cognitive aspect of consciousness, and it maps reality, giving it meanings. Its first expression is the concept of judgement, of "true" and "false".

The third, fundamental concept is free will. And the main expression of free will is the concept of "choice". That is the moral and active aspect of consciousness, and it outputs cognitive maps to reality, attributing purposes to that interaction.
Its first expression is the concept of feeling and moral polarization, of "pleasure" and "pain", of "good" and "evil".

When we speak of "intelligent conscious agents" as the originators of design, we are probably not complete. Indeed, design is produced by the agents who have all these three things: a) Conscious representations (consciousness); b) Meaningful cognitions (cognitive maps of reality, intelligence); c) Purposeful actions (originated by free will).

So, how can we be reasonably sure that another agent is conscious, and has all those properties? Again, we easily infer that for other human beings. We infer it from their appearance, but especially from their behaviour and language, from what they do, from how they interact with us. We don't use definitions for that. We just infer from our experience.

ID theory has brought a new angle to all that. Having identified in design an activity absolutely specific to conscious intelligent purposeful agents (design can be defined as the projection into a material form of conscious representations that are meaningful and have purpose), ID has found a formal property that can be recognized in designed objects, or at least in many of them: CSI, with its subsets like dFSCI. That is very interesting. Not only is it the foundation for inferring design in biological information, which is a very important issue without doubt. But also, it gives us for the first time a reliable marker for consciousness, beyond our intuitive inferences by analogy: CSI is the product of conscious intelligent purposeful agents, and only of them. That is something, indeed. For the first time, we find that consciousness is not only an ornament to reality, but that it can do something that no non conscious system can do: generate huge amounts of CSI.

Flew confirms that he accepts there must be "an intelligence", and then reasons that the intelligence must be "omnipotent", but that we're not entitled to infer anything else in a religious sense. When questioned on whether that intelligence is eternal, he says "you can't really separate the eternity from the omnipotence." The interviewer asks if this must be a "personal force or being", and among other things Flew says "He's got to be conscious if he's going to be an agent." And so on.

Well, I find Flew's answers very reasonable and sincere.

Intelligent design does not speak to the nature of designers any more than Darwin's theory speaks to the origin of matter.

I don't think that is a good way to put it. Two different things are both true: a) For design detection, it is not necessary to know details of the designer or of its nature. b) From the analysis of designed things, many things can often be inferred about the designer. But not always, and not all.

Regarding predictions, I invite you to read my post 15.1 here. I don't believe that the prediction of function for non coding DNA is the best prediction of ID theory, but it has some value. That seems confirmed by how many classical darwinists stick to the position that most non coding DNA has no function. I do believe that many functions will be proven for most non coding DNA. It is not a question of "elegance". It is simply the most reasonable assumption, from a design perspective. Nobody writes code to use only 1.5% of it.

Then you quote, about the issue that "The Designer Must be Complex and Thus Could Never Have Existed": This is obviously a philosophical argument, not a scientific argument, and the main thrust is at theists.
So I will let a theist answer this question.

Well, as the "answer" was indeed taken from a post of mine, I quote it here, for mere narcissism: "Many materialists seem to think (Dawkins included) that a hypothetical divine designer should by definition be complex. That's not true, or at least it's not true for most concepts of God which have been entertained for centuries by most thinkers and philosophers. God, in the measure that He is thought of as an explanation of complexity, is usually conceived as simple. That concept is inherent in the important notion of transcendence. A transcendent cause is a simple fundamental reality which can explain the phenomenal complexity we observe in reality. So, Darwinists are perfectly free not to believe God exists, but I cannot understand why they have to argue that, if God exists, He must be complex. If God exists, He is simple, He is transcendent, He is not the sum of parts, He is rather the creator of parts, of complexity, of external reality. So, if God exists, and He is the designer of reality, there is a very simple explanation for the designed complexity we observe."

I must add that for ID there is no necessity that the designer be God. Aliens have always been a valid alternative, as Dawkins himself admits. Aliens could be the designers of life on earth. The design detection would be wholly satisfied by such a scenario. It is true, however, that if aliens have a physical body, and express themselves through a complex physical brain (or equivalent), and are in some way "biologically complex", then the question "who designed the designer" is legitimate. We should ask ourselves how the complex information in alien bodies (if empirically confirmed) originated. I was just stating that, instead, if God or any other non physical agent is the designer, there is no need for that god or non physical agent to be "complex". Indeed, what characterizes consciousness in humans is exactly a "simple" perceiving I. Most conceptions of a god hold that he is simple in essence, although his creation is very complex.

It is true: these are philosophical arguments. But the original objection ("The Designer Must be Complex and Thus Could Never Have Existed") is also philosophical. Affirming that the designer must be complex is the same as assuming that it must have a physical body and brain. That is a philosophical assumption, and has nothing to do with science. Usually, I don't use many philosophical arguments in my ID discussions. But if I have to answer a philosophical question, I feel that I am allowed to use philosophical answers. That's all. gpuccio
I would have to disagree that everything is missing. Many fossil lineages have smaller morphological differences than those separating breeds of dogs.
That does not address anything I said. What was the incremental genetic change? If it's as small as you say, what distinguishes it from the background variation between nearly every specimen of everything? Why was it selected? How do a series of them add up to something novel? You could call those questions "gaps," but what does that leave? What is between the gaps, and what good is it if it doesn't include a single account, real or hypothetical, of anything evolving, described in terms of actual variations or mechanisms? How can the 'cornerstone of biology' have no concrete examples in biology beyond bacterial loss-of-function adaptations and fish that change color? "Gaps" is a marketing word. It sells the idea that there is a continuum of knowledge with missing pieces. A gap is what you have when you are missing a tooth, not when you have one or none. ScottAndrews2
Petrushka: I appreciate your appreciation. And I appreciate your openness to empirical verification, which, as already said, is something I fully agree with. About OOL, I will make a personal consideration: I have no strong evidence at present, but my favourite model is that life originated, in the span of a few hundred million years between the time when earth became compatible with life and the time when the first division into archaea and bacteria occurred, pretty much as we know it today: with an already complex cell, more or less what we now call LUCA. Personally, I don't believe in a gradual origin of life. I am ready to change my mind if and when real data support a different scenario. So, IMO, it was neither DNA first, nor protein first, nor RNA first, nor metabolism first. It was DNA, protein, RNA and metabolism all together, as we see it today. A big, rather sudden informational leap from inanimate matter, the result of a huge design effort. Nothing like that happened for a long time after that, at least up to the origin of eukaryotes, and then later with the origin of multicellular macroscopic beings, with the Ediacaran and Cambrian explosions. In my view, design is implemented both in "punctuated" forms and in more gradual forms. Speciation often allows a more gradual information implementation, while fundamental transitions in the biological plan seem to happen rather suddenly. But again, this is only a suggestion, entirely guided by what we at present know. As I said, I am ready to change my mind according to new evidence. gpuccio
With ID and Creation no intervention is required wrt biology after the initial set-up.
Well, there are so many divergent opinions among ID proponents that it is difficult to discuss this. Petrushka
I have to admit that every time a gap in the continuum is filled, two new gaps are created, so there will never be a gapless continuum. I would have to disagree that everything is missing. Many fossil lineages have smaller morphological differences than those separating breeds of dogs. That is true even of "punctuated" species. Since most dog breeds are only a few hundred years old, we know that morphological changes can occur very quickly. Petrushka
Again with the twisting in the wind, proving my point that you simply cannot even speak of the evidence against you. If someone provides blatantly observable physical evidence against your worldview, then they are told they must produce either a God or a beaker of proto-life (with instructions) in order to be taken seriously. This isn't a demand of science or empiricism; it's a defense for a defeated ideology. But yet, for you to swallow your own narrative whole, one is told they only need to see the results of the process (i.e.: we are here, so we must have evolved from an unguided chance event in chemical history). Again, what mechanism is causally adequate to explain an observable immaterial property being instantiated into a physical object? Upright BiPed
With ID and Creation no intervention is required wrt biology after the initial set-up. As for the consilience of evidence, well, seeing that ID is evidenced in fields such as physics, astronomy, cosmology, biology and chemistry, I would say THAT constitutes a consilience of evidence. BTW you are allowed to believe and imagine anything you want about evolution. However sooner or later, for science anyway, you will need something we can actually test. Joe
Actually, angels as well as angles. Petrushka
No one has ever observed Pluto making a complete orbit of the sun. It’s all just extrapolation.
It's a good extrapolation, a warranted one. That does not make "extrapolation" a magic word that grants credibility to anything and everything. Yours is wishful, not warranted. To explain nothing and call the unexplained portion a "gap" would be quite generous. A gap is something missing from a continuum. The continuum is inferred from all the stuff that isn't missing. Here, everything is missing. Your comparison to Newton is also unwarranted.
We believe whales and bats and amphibians and birds and such evolved incrementally because we keep finding gap fillers, and because technology has given us genomics, which provides a parallel method of looking for nested hierarchies.
My fingers are bleeding from typing this so many times. The increments of evolution are individual genetic changes. The proposed mechanisms of evolution are selection and drift and the flavor of the month. You cannot identify any of these from taxonomies, hierarchies, or fossils. You are taking what is observed, namely the diversity, and adding a post-hoc explanation. Even then your post-hoc explanation omits - guess what - the incremental changes and the specific selection of each. That's "evidence" for college students with glazed-over eyes who've read it so many times that they never stop for ten seconds to think about it - which is all it takes to see what's wrong with it, once you realize that you're allowed to. ScottAndrews2
Information is neither matter nor energy yet interacts with both. Joe
Of course I'm interested. I follow origin of life research. Saying I don't know isn't the same as not being interested. You seem to know. You could save everyone a great deal of time and trouble simply by publishing the time and place where life was breathed into inanimate matter. Perhaps a detailed description of the first replicator using the genetic code. Petrushka
No one has ever observed Pluto making a complete orbit of the sun. It's all just extrapolation. Newton extrapolated the laws governing the motions of the planets from the motions of cannonballs. It's how science works. Prior to Newton, cannonballs and planets were not considered to engage in regular motion. They required angles to push them along. Since Newton, science has defaulted to regularity rather than to intervention. It leaves a lot of unexplained gaps, but the history of science is the history of filling gaps. We believe whales and bats and amphibians and birds and such evolved incrementally because we keep finding gap fillers, and because technology has given us genomics, which provides a parallel method of looking for nested hierarchies. It is not any particular line of evidence that provides proof, but the consilience of many separate, independent lines of evidence that strengthens the case. I'm not aware of any example in the history of science where an intervention hypothesis has been confirmed. Petrushka
Since Newton, science assumes that an observed regular process is the best explanation for the history of anything within its scope.
Thank you, that's a point for me.
One difference is that no one has ever seen a human or any other intelligent entity design a biological molecule.
No one has seen anything, least of all the invention of innovative new features by iterative variation and selection. But where you're quick to point out the one difference, the difference between fish changing color and fish developing lungs, walking on land, and then changing their minds and evolving into whales sails right past you. You'll draw lines, erase others, and believe what no one has seen, as long as it's the one you've picked. Your selective skepticism is transparent. As UB has repeatedly explained, life depends on semiotic information, which is an unmistakable, undeniable mark of intelligent purpose. Anyone acquainted with the evidence who denies it is willfully beyond the reach of reason. In order to accept the fantastic proposition that RM+NS (+ etc., etc., X, unknown but unintelligent) adds up to a body of innovation that dwarfs the entire history of human technology, we must rule that the intelligence responsible for making life possible absolutely played no further role in its development. We must make one baseless assumption to justify another. That is justification of a preconception, not science. ScottAndrews2
"From my viewpoint the only issue that can rationally be discussed is whether genomes can evolve without intervention." However, you are not the least bit interested in HOW they do so; the semiotic mechanism that makes it all possible. If the very process by which the transfer of information displays "intervention" in the most profound dynamic manner possible, then to hell with it - 'let's not talk about that'. That, my friend, is intellectual fraud. It is the bane of empirical science. Upright BiPed
I'll just chime in here to show appreciation for the nature of this discussion. As for comments about common descent, I'll have to say that both ID and Evolution carry a lot of baggage that has nothing to do with the history of life. From my viewpoint the only issue that can rationally be discussed is whether genomes can evolve without intervention. Petrushka
Stop it with the nested hierarchies already- the theory of evolution can live with them or without them, and it does not expect a nested hierarchy based on traits. Not only that, descent with modification doesn't expect one based on traits for the simple fact that traits can be lost. Lose a trait and you lose containment, and nested hierarchies require containment. As for Newton- EXACTLY. The ONLY process we know of that can create new functional and useful multi-part systems is design, intentional design, including intelligent design evolution as observed in all genetic and evolutionary algorithms. Also, changes in populations over time have only been observed to create a wobbling stability- no progress, no new systems. And once we know the detailed history of those molecules, ID will become the reigning paradigm. What else is there? Stuff happened and life emerged? Joe
Give me a counterexample. Show me a theory or a research program that does not assume that matter is whatever interacts with matter. (Matter and energy being different expressions of the same thing.) Petrushka
For what evolutionary transitions have you calculated the probabilities, and how, so that you can say it is probable rather than improbable?
Since Newton, science assumes that an observed regular process is the best explanation for the history of anything within its scope. That is why the nested hierarchies of fossils and genomes are critical. That is why Lenski and Thornton have spent decades demonstrating that incremental evolution is possible, and that multi-step adaptations can be bridged by neutral or nearly neutral changes. Probability theory cannot be applied to events that have already happened, for the simple reason that anything can be analyzed as the product of a long string of individual events, each of which has low probability. Take the last dozen Lotto winners and ask the probability that those exact people would win in that sequence, for example. ID looks at structures and assumes that they are necessary or pre-planned. That is certainly justified in looking at human artifacts like watches, but there are two important differences when looking at biological structures. One difference is that no one has ever seen a human or any other intelligent entity design a biological molecule. Not without using a process equivalent to evolution. No one can outline a process for biological design that doesn't involve cut and try. So the analogy to the watchmaker fails at the most elementary level. No one has seen a biological watchmaker. The other difference is that we have observed natural processes that create changes in populations over time. The remaining questions all revolve around our ignorance of the detailed history of genomes. ID survives because we don't know the detailed history of the molecules. Petrushka
Petrushka, in my last comment to you I said that you would say anything before you'd allow yourself to address the evidence in earnest. Like clockwork, you did exactly that.
It hasn’t been demonstrated to be irreducible
Bullshit. Pure bullshit. What part of the codon, tRNA, aaRS system can be removed and still function as the mechanism to transfer genomic information? Answer the question, Petrushka; it was your bald assertion, so answer it. If you cannot answer that question, then common intellectual honesty requires you to retract your remark (instead of justifying it with equivocation and higher grades of cow poo).
...and your argument is simply a restatement of the fact that we don’t know the history of the origin of life.
My argument is not a restatement of the OOL mystery; it is an argument that the transfer of information from the genome is observably semiotic, and logically/necessarily so. That argument is supported by a) 100% of the evidence, b) logical coherence, and c) the embarrassing fact that not a single materialist has thus far been able to refute it by observation (nor even offer a conceptual counter-example). Deal with it. ;) Upright BiPed
Gordon: Antievolutionists have been trying to use information to refute evolution for a long time. I think A. E. Wilder-Smith may've been the first to seriously pursue this line of reasoning. He basically claimed (IIRC, it's been a while since I read him) that information theory says that information can only come from intelligent sources. Unfortunately for him, both the statistical and algorithmic theories of information (the two primary theories) say almost the opposite: in the statistical theory, information sources are generally assumed to be random (even when they really aren't, because the difference between intelligently-created and random information doesn't matter to the aspects it studies). The algorithmic theory doesn't generally deal explicitly with the creation of information, but random processes are certainly capable of producing Kolmogorov complexity (AIT's measure of information).

Let's keep it simple. When we speak of information in ID, we are considering what Abel would call "semiotic information". IOWs, we are interested in information that conveys a meaning. The so-called theories of information, including Shannon's, are not interested in meaning. They are essentially cybernetic or communication theories: very useful to compute complexity, but they avoid the problem of "meaningful information". ID tries instead to approach that problem rigorously, and quantitatively, introducing the concept of specification.

Let's go on. Some later, more clueful antievolutionists (mainly Dembski) realized that the definitions from the standard theories of information didn't give them any basis to refute evolution, and so set out to create their own theories and definitions that could provide a framework they could use.

I would like to say here that the reason to be interested in meaningful, semiotic information is certainly not only to refute evolution. The problem of meaning is fundamental in all human knowledge, and it has been avoided for too long by science.

The more interesting question is something more like blueprint-style information. If it (or something similar) could be properly defined (as opposed to what I've done) and it can be shown that evolution has no way to add it to the genome, then you'd have a case.

I have a case. I have properly defined dFSCI. And it can be shown that non design evolution has no way to generate it.

The definition you're using of dFSCI takes a very different approach to ruling out natural production. Where Dembski is rationalist (mathematical derivations of why natural processes can't produce CSI), you use an empirical argument (natural processes have never been observed to produce dFSCI).

I am very happy with these words. I feel that you have correctly understood my approach, which is completely empirical. I think that you describe well the difference with Dembski's approach, which is certainly more important and deeper than mine. But they are however different, even if my empirical approach obviously uses many tools created or detailed by Dembski.

I discussed some general problems with this approach earlier, but let me take a closer look at this particular argument. I think it's pretty clear that evolutionary processes can produce increases in dFSCI, at least if your measure of dFSCI is sufficiently well-behaved. Consider that there exist point mutations that render genes nonfunctional, which I assume that you'd consider a decrease in dFSCI.
Point mutations are essentially reversible, meaning that if genome A can be turned into genome B by a single point mutation, B can also be turned into A by a single point mutation. Therefore, the existence of point mutations that decrease dFSCI automatically implies the existence of point mutations that increase dFSCI.

Ah! Now we are coming to something really interesting. I must say that I have really appreciated your discussion, and this is probably the only point where you are explicitly wrong. No problem, I will try to show why. Please go back to my (quick) definition of dFSCI in my post number 9 here. I quote myself: "No. The dFSCI of an object is a measure of its functional complexity, expressed as the probability to get that information in a purely random system. For instance, for a protein family, like in Durston's paper, that probability is the probability of getting a functional sequence with that function through a random search or a random walk starting from an unrelated state (which is more or less the same)."

Well, maybe that was too quick, so I will be more detailed.

a) We have an object that can be read as a digital sequence of values.

b) We want to evaluate the possible presence of dFSCI in that object.

c) First of all we have to explicitly define a function for the digital information we can read in the object. If we cannot define a function, we cannot observe dFSCI in that object. It is a negative, maybe a false negative. There are different ways to be a false negative. The object could have a function but not be complex enough: it could still be designed, but we cannot say. Or we could not be able to understand the code or the function in the object.

d) So, let's say that we have defined a function explicitly. Then we measure the dFSCI for that function.

e) To do that, we must measure the functional (target) space and the search space. Here various possibilities can be considered to approximate these measures. For protein genes, the best way is to use the Durston method for protein families.

f) The ratio of the target space to the search space is the complexity of our dFSCI for that object and that function. What does it express? As I said, it expresses one of two things, which are more or less equivalent:

f1) The probability of obtaining that functional sequence from scratch in a purely random system: IOWs, for a protein gene, the probability of obtaining any sequence that produces a protein with that function in a system that builds up sequences just by adding nucleotides randomly.

f2) The probability of obtaining that functional sequence through a random walk. That is more relevant to biology, because the usual theory for genes is that they are derived from other, existing sequences through variation. But the important point, which I have explicitly stated in my previous post, is that it expresses "the probability of getting a functional sequence with that function through ... a random walk starting from an unrelated state".

Starting from an unrelated state. That's the important point. Because that's exactly what happens in biology. Basic protein domains are unrelated states. They are completely unrelated at the sequence level (you can easily verify that by going to the SCOP site). Each basic protein domain (there are at least 2000) has less than 10% homology with any other. Indeed, the less than 10% homology rule yields about 6000 unrelated domains. Moreover, they also have different structure and folding, and different functions.
So the question is: how does a new domain emerge? In the example I cited about the human de novo gene, it seems to come from non coding DNA. Many examples point to transposon activity. In no case is a functional, related precursor known. That's why dFSCI is a good measure of the functional information we have to explain.

Let's go to your argument. You say: "Consider that there exist point mutations that render genes nonfunctional, which I assume that you'd consider a decrease in dFSCI." No. That's wrong. We have two different objects. In A, I can define a function and measure dFSCI. In B, I cannot define a function, and dFSCI cannot be measured. Anyway, I could measure the dFSCI implicit in a transition from B to A. That would indeed be of one amino acid (about 4 bits). And so? If you have a system where you already have B, I will be glad to admit that the transition from B to A is of only 4 bits, and it is perfectly in the range of a random system. IOWs, the dFSCI of that specific transition is of only 4 bits. But you have to already have B in the system. B is not unrelated to A. Indeed, you obtained B from A, and that is the only way you can obtain exactly B.

So, can you see why your reasoning is wrong? You are not using the concept of dFSCI correctly. dFSCI tells us that we cannot obtain that object in a purely random system. It is absolutely trivial that we can obtain that object in a random system starting from an almost identical object. Is that a counter-argument to dFSCI and its meaning? Absolutely not.

For instance, if you can show that a basic protein domain could have originated from an unrelated state through an intermediate that is partially related and is naturally selectable (let's say from A to A1 to B, where A and B are unrelated, A1 is an intermediate between A and B, and A1 is naturally selectable), then we are no more interested in the total dFSCI of B. What we have to evaluate is the dFSCI of the transition from A to A1, and the dFSCI of the transition from A1 to B. The assumption is that A1 can be expanded, and its probabilistic resources multiplied. Therefore, if the two (or as many as you want) transitions have low dFSCI, and are in the range of the biological systems that are supposed to generate them, then the whole system can work.

I hope I have been clear, but I would be happy to discuss any aspect that you don't agree with. gpuccio
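For readers who want the arithmetic of steps e) and f) spelled out, here is a minimal sketch (Python, with made-up toy numbers; Durston's actual method is applied to real alignments of protein families):

    from math import log2
    from collections import Counter

    def dfsci_bits(target_space, search_space):
        # Functional complexity in bits: -log2 of the chance of hitting the
        # target (functional) space in one random draw from the search space.
        return -log2(target_space / search_space)

    # Toy example: a 100-residue protein (search space 20^100), assuming,
    # purely for illustration, 10^20 functional sequences.
    print(round(dfsci_bits(1e20, 20.0 ** 100)))   # about 366 bits

    def durston_fits(alignment):
        # Durston-style estimate: summed reduction in per-site Shannon
        # uncertainty, from the null state (log2 20) to the distribution
        # observed in an alignment of functional family members.
        n = len(alignment)
        fits = 0.0
        for column in zip(*alignment):
            counts = Counter(column)
            h = -sum(c / n * log2(c / n) for c in counts.values())
            fits += log2(20) - h
        return fits

    # Tiny made-up "family": site 1 fully conserved, sites 2-3 variable.
    print(round(durston_fits(["MKV", "MRA", "MKL", "MQV"]), 2))

The numbers here are placeholders; the point is only the shape of the calculation.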
Evolution is learning and doesn’t violate any laws or probabilities.
What does it mean that something doesn't "violate any laws?" That's not a meaningful way to validate anything. Laws are an easy way to rule something out. I can't jump 100 feet into the air because of the law of gravity. But can I run 100 miles without stopping? Does it violate any laws? How is that a useful question? For what evolutionary transitions have you calculated the probabilities, and how, so that you can say it is probable rather than improbable? How you calculate or estimate that probability speaks volumes about how well you understand the process itself. As it is, every evolutionary narrative is far too vague to really determine how probable or improbable it is. That should be held against the theory as further evidence that it is vague and lacking any actual explanation. Instead it's used to support the theory. As long as it doesn't explain anything, the probability of the explanation can't even be estimated, so no one can say that it's improbable. Is there even a name for this fallacy, using the inability to explain or apply an idea as a defense against criticism of it? In any other case that would be a weakness, but it's darwinism's greatest strength. ScottAndrews2
Gordon: Let's go to your specific evaluations:

- Abiogenesis happened: very strongly supported. The very early Earth (& before that the early universe) couldn't support anything like life, and it's here now, so it must've originated at some point.

Perfectly true.

- Abiogenesis happened by X path: very weak at this point. In the first place, we don't have any fully worked out paths by which abiogenesis could have happened (though that doesn't mean there are no such paths). Second, there's very little evidence left to go on (there is some evidence, like possible molecules from an RNA world, but not much of it).

True. But I have more faith that our increasing knowledge will give us more understanding. Personally, I believe that we will never find evidence of an RNA world, because I don't believe it ever existed. But we will see.

- Common ancestry (all — or at least many — organisms are related to each other): very strong for the "many" version, much weaker for "all". Basically, the evidence we have supports common ancestry; for parts of the family tree we have lots of evidence about, this is very strongly supported; for parts we have less evidence for, it's proportionally weaker. Examples of areas where it's weaker: species we haven't discovered yet (that's sorta the extreme case), the relations between archaea, eubacteria, and eukaryotes (the split happened so long ago there's relatively little evidence left), and relations among eubacteria (evidence is mostly limited to genetic similarity, and that has a low signal-to-noise ratio due to horizontal transfer). An example of an area where it's strong: mammals, including the one everyone cares about, humans (here we have a variety of sources of evidence, all pointing to pretty much the same history — some of the evidence is fairly strong on its own, but the principle of consilience means the whole is even stronger than the sum of its parts).

Perfectly true. And so?

(You're probably going to ask if I've been reading Jonathan M's recent articles on the evidence for common ancestry, and the answer is no I haven't, and yes I probably should. But when I've dug into such things in the past, evolution's come up the winner.)

I am not going to ask. The reason is simple. I do accept common descent (not necessarily universal) as the best explanation for what we observe. I must say that I am a little disappointed that you conflate here ID with the negation of common descent, as many do. I would have expected maybe some more explicit distinction, given the high level and pertinence of your discussion. So, I have to state it again: I, like many others in ID, accept common descent. Not for ideology. Not for any strange reason. But because I find common descent the best explanation for the evidence. Exactly the same reason why I accept ID, and not neo darwinism. I think that Behe has the same position. And many others here. Others differ. Those who don't accept common descent have their reasons, and they are welcome to express them. But, at present, I am not convinced by those reasons. For the nth time: ID theory and common descent are two separate issues, with some interaction, but no more than that.

- Mutation and selection contribute to evolution (i.e. have contributed to the differences between modern organisms and their ancestors): very very strong. Both are observed in the lab and in the wild, and given our understanding of genetics it's hard to see how either could not happen.

Perfectly true.
How and how much they contribute, obviously, is quite another matter.

- Mutation and selection are the only mechanisms of evolution: known to be false. Don't leave out genetic drift and gene flow, lots of special variants on the primary mechanisms (hypermutation, chromosome fission & fusion, meiotic drive, hitchhiking, the founder effect, etc), and a few outliers (e.g. endosymbiotic capture)…

It is false. But genetic drift and the rest are anyway random variation. So, if you include RV instead of RM in the neo darwinian algorithm (as I always do), all these are included. The point is, NS is the only part in all these explanations that has a necessity rule in it. The rest is a variety of blind events that are necessarily devoid of any information about life and function. NS, at least, derives from the existence of the reproductive function, and so has some connection with function, at least with reproductive function. But, even if you count all the things you have listed, the statement remains false.

- The known mechanisms of evolution (see above) are the only ones in operation: sort-of assumed both for methodological and Occam's-razorish reasons, but probably false. We keep finding new mechanisms (or at least variants), and there's no reason to think that's going to suddenly stop tomorrow. It may sound nonsensical to assume something that's probably false, but it's actually a good assumption in certain ways.

Yes and no. It is false, and I don't believe it is a good thing to assume it. The reason is simple. There is the design theory, which explains things much better, and which can be used to interpret data in a much more functional way. So I would say: those who believe in non design theories must assume that this statement is true, and pursue their research accordingly. And those who accept the design theory should do the opposite: refute the statement, and pursue research accordingly. The important point is, data, however found, should be freely interpretable from both perspectives. Neither of these two positions should be considered a priori non scientific.

A simple analogy: suppose we make a low-accuracy measurement (say, weighing a hog with Robert Burns' method); we may be pretty sure the result is wrong, but since we don't know how far off or in which direction, it's nonetheless going to be our best estimate of the actual value. Basically, it's reasonable to use it as a working assumption, and as a starting point for further investigation, just don't actually fall into believing it's true.

This is a good reasoning, which I would happily apply to the computation of dFSCI in biological objects :)

- Evolution happens by entirely naturalistic processes: weak-to-nonexistent. Scientific investigations assume this both for methodological reasons and for lack of evidence otherwise; but it's really just an assumption. Pretty much like the analogous abiogenesis one. We have lots of evidence for naturalistic mechanisms of evolution, but that hardly rules out non-naturalistic contributions.

Not true. Well, it is true that evidence for purely naturalistic processes as an explanation for biological evolution is nonexistent :) . But it is not true that there is lack of evidence otherwise. And it is not true, absolutely, that we have "lots of evidence for naturalistic mechanisms of evolution". What do you mean? We have evidence for common descent (already discussed) and we have evidence for a minimal role of RV and NS in microevolution (some forms of antibiotic resistance, and little else).
Where is your "lots of evidence"?

- Mutation and selection are the most important mechanisms of evolution: legitimately controversial, as well as subjective (depending on what you consider important). Most DNA-sequence-level differences between organisms are neutral, so selection's irrelevant to their origin, so mutation and drift seem to be the major players at this level.

I agree that the role of NS is minimal. And the role of RV is minimal too. IOWs, the whole theory is inconsistent.

But if you look at differences in phenotype rather than genotype, the non-neutral differences are the ones that matter, and hence selection plays a much larger role. How much larger, and whether (/how much) it outweighs other factors is something scientists argue about…

All reasoning about phenotype is irrelevant if we cannot describe the molecular basis of variation. Only molecular reasoning allows us to discuss the nature and complexity of the observed variation, and therefore to compare causal explanations. That's why I never discuss fossils, for example (another good reason could be that I don't have the competence :) ). ID and neo darwinism can be compared only at the molecular level. There, and only there, the true cause of variation can be analyzed.

- The known mechanisms of evolution (see above) are sufficient to account for the properties of modern organisms: weak, but in the absence of counter-evidence, reasonable to assume.

But there is a lot of counter-evidence. Each basic protein domain is absolute counter-evidence! And we have thousands of them in the proteome. And a lot of other things, obviously, from the Cambrian explosion of body plans to OOL, from the genetic code to the huge Irreducible Complexity of biological machines, and so on, and so on.

(I know, this is the one where you hit the ceiling. Please wait, and hear me out first.)

Ouch! :) Well, I am listening...

There's been a lot of effort by both creationists and ID supporters to find & describe properties of organisms that known evolutionary mechanisms couldn't produce, and (in my opinion and that of the mainstream scientific community) they haven't found any.

Completely false, but well, you are entitled to your opinions.

Since you're particularly interested in information, I'll discuss that.

And in the next post, information at last! gpuccio
Eric, How could you forget about one very powerful thing, trial and error?! No designer at all, just apparent design. We all have a collective hallucination. Eugene S
Gordon: Here it becomes easier.

I haven't checked into whether he's right about that, but the thing I found disturbing was that he didn't try to construct an ID explanation either.

I haven't read the articles, but I agree that a design explanation must be proposed if we refute the classical explanation.

If I were an IDist scientist looking at that pattern, I'd be trying to figure out where the pattern could've come from: is it a design goal in and of itself? Is it a side effect of some other goal, and if so what goal and how does the side effect arise? Is it a result of some feature of the design process or how it was implemented? Think of things that could've caused the pattern, and (if possible) figure out ways to test them. Unless you can do that, ID can't claim this as a point in its favor.

I agree. ID can and must do exactly that. I try, as far as I can, to always take that kind of approach. The problem is, the ID approach to biological issues is just at its beginning. The resources are really minimal, and the resistance of the academic world is huge and dogmatic and very, very intolerant. However, people like Behe, Axe, Gauger, Durston and others are doing splendid work in the midst of all difficulties. At present, however, most of the experimental help for ID comes from the research made by darwinists, even against their intentions. Luckily, data are always data, whoever provides them. You can find an example of ID reasoning on biological data in my previous post.

An analogy is not a scientific theory. It might be the basis for a theory, but the bare analogy? No.

The analogy is the basis for the scientific theory of ID. It serves to establish the hypothesis that functional information has been added to the biological world whenever we can witness the emergence of new dFSCI. A lot of work obviously remains to verify and detail this hypothesis with data. Moreover, I have much more faith in inferences by analogy than you seem to have. All our human knowledge, including science, is based on a shared inference by analogy: the inference that other human beings are conscious exactly as we are. That is a mere inference by analogy, shared by all (except maybe solipsists), and yet it is the foundation of all our cognition of the outer world and all our shared knowledge.

Consider some opportunities for your theory to make predictions: if we have an object we know was human-designed, does your theory make any predictions about it? No, the theory essentially says it might have dFSCI (and indeed, some human-designed objects have dFSCI, and some don't). How about organisms? Again, the theory says they might have dFSCI. How about things other than organisms that aren't human-designed? Well, since one of the possibilities on the table is that the entire universe and everything in it were designed, everything might have dFSCI.

Here maybe there is some confusion. Let's clarify. dFSCI is a formal property often observed in human designed things. If we know that an object is designed (because we have evidence of the design process), still it can have dFSCI or not (which can be objectively verified on the object itself). dFSCI is never observed in objects that we know for certain were not designed by humans, with the only exception of biological objects.
If we apply design detection by looking for dFSCI in objects that could be designed by humans, and if we can verify afterwards, we can see that dFSCI is a reliable tool for design detection, because it gives no false positives, and a lot of false negatives (if the threshold of complexity is set appropriately). Applying the search for dFSCI to the biological world, we find a lot of objects exhibiting dFSCI. So, we express the hypothesis that those objects have been designed. That is the simple reasoning. All the rest comes after, and consists in analyzing existing data, and new data, in the light of that kind of explanation. It implies researching the possible ways of implementation of functional information in natural history, with specific reference to the when and why. It implies better defining the relationship between the functional protein space and the search space. It implies trying to understand and quantify the digital complexity of regulation networks, of body plans, and of many other things that probably are not well enough understood at present. And so on.

You say: "Well, since one of the possibilities on the table is that the entire universe and everything in it were designed, everything might have dFSCI." No. That's wrong. dFSCI is observed in specific objects, and is objectively measured in them. If they have it, they have it. Otherwise, it is not observed. I repeat: no object in the physical universe that we are sure was not designed by humans (or aliens, as far as we can know) exhibits dFSCI, except for biological objects. The fine tuning argument for the whole universe is a valid form of the cosmological argument for God, but IMO it remains in part philosophical, because when you consider the whole universe as an object it is difficult to remain merely empirical. And anyway, while it is an argument for design, it is formally completely different from the argument for biological design, which is completely empirical and based on dFSCI.

You can improve the situation a bit by adding subsidiary hypotheses. For instance, if you add the hypothesis that organisms are the only designed-by-other-than-human things, you can get some predictivity.

That's exactly my position.

But not much, because properly testing a theory requires that you test it against evidence independent of the evidence that led to its formulation. I’m pretty sure your theory derives from considering a wide variety of objects and their properties, which doesn’t leave much room to test it against new objects (especially, objects that aren’t basically more of the same).

I don't agree. The dFSCI tool can be tested blindly against any object in the two categories: designed by humans or certainly not designed by humans (but not biological). It will always win (in the sense of not giving false positives; as already said, it will give a lot of false negatives). And there is a reason for that. The reason is that the process of design is the source of dFSCI. IOWs, it is the conscious representation of reality with intuition of meaning, and the conscious representation of function and purpose, that allow the generation of what Abel calls semiotic information, and in particular prescriptive information, and which in practice is confirmed by the observation of dFSCI. Our hypothesis is only that some designer, probably not human, can have the same kind of conscious representations and intent, and can therefore input functional information into biological beings.
Now, we must not confuse two different things: one thing is to test the dFSCI tool for design detection. That can be done as much as we want. And it can also easily be falsified. Producing objects in a non-designed system that exhibit dFSCI would obviously falsify the whole theory of design detection. The design inference for biological beings, based on the concept of dFSCI, is another thing. It is a more general theory that competes with the only existing non-design theory for biological information, neo darwinism. The two explanations must be compared according to how well they explain existing data, and how well they predict and explain new data. The example of ether is very revealing. Indeed, as long as it was the best explanation, it was considered such. When data (or a better reasoning) made it unnecessary, it was discarded. That's perfectly fine. So, let's say that we in ID do believe that at present ID theory is the best explanation for the origin of biological information, and that it should be considered a legitimate scientific approach to that problem, and that it should have the respect, resources, and attention that it deserves. Then, we will see. Personally, I am very sure that ID will fare much better than ether :) . More in next post. gpuccio
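To make the measurement side of this concrete, here is a minimal sketch of a dFSCI-style computation in Python. The functional fraction and the threshold used here are purely illustrative assumptions; in practice the hard work lies in estimating the ratio of functional sequences to the search space (e.g. by Durston-type methods):

```python
import math

def dfsci_bits(functional_fraction):
    """Functional complexity in bits: -log2 of the fraction of the search
    space that performs the defined function."""
    return -math.log2(functional_fraction)

# Hypothetical example: suppose 1 in 10^35 random sequences of the right
# length would perform the function (an assumed number, not a measurement).
bits = dfsci_bits(1e-35)
THRESHOLD_BITS = 150  # assumed threshold for a positive design inference

print(f"{bits:.1f} bits")  # ~116.3 bits
print("design inferred" if bits >= THRESHOLD_BITS else
      "below threshold: no positive (possible false negative)")
```

Note how the threshold makes the tool conservative: anything below it is reported as a negative even if it was in fact designed, which is exactly the false-negative behavior described above.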
And as if the practice of Science had a rich sense of humor, it turns out that this very system of information transfer (the very heart of it all) is the most prolific form of irreducible complexity in the known universe. It’s logically undeniable.
It hasn't been demonstrated to be irreducible, and your argument is simply a restatement of the fact that we don't know the history of the origin of life. You are making assumptions about the outcome of research in progress. Petrushka
Gordon: Again I want to thank you for your reply, and from the heart. I deeply appreciate your contribution, and it was a pleasure for me to read it. Answering it is not too difficult, because I agree with many of the things you say, so I will just state where I agree, and concentrate on the points where I differ. I agree with you about your approach to science and methodology. I am a little bit less convinced about the prominent value of predictions. They are an important part of scientific support for a theory, but not the only one. IMO, two things are equally important in giving credit to a scientific theory: that it is the best explanation for observed things, and that it can make right predictions about things still to be observed. I am not trying to underemphasize the importance of the second point: I do believe it is fundamental for ID. But it is true also that, in dealing with the explanation of historical events, as in evolution problems, one cannot expect the same role for predictions as, say, in physics. That is true both for ID and neo darwinism. That said, I do believe that ID and neo darwinism imply very different predictions about biological information, and that goes far beyond the point, often made here, of junk DNA.

Let's put it this way: we are now comparing two existing theories about the origin of biological information:

a) one (ID) states that whenever a non-trivial rise in dFSCI is observed, there must have been an input of functional information in the form of "switch configuration" (to use Abel's term) from some conscious agent.

b) the other (neo darwinism) states that in all those cases an explanation based on RV + NS is feasible.

Please note that those are IMO the only scientific explanations currently available. For epistemological reasons, I don't accept the argument, so often made here by darwinists, that even if their theory does not work, the design theory must be rejected just the same, because some other naturalistic explanation could in principle be found. That is nonsense. Science works with available empirical explanations, not with logical possibilities. Your example of ether is perfect for that: ether was accepted as the best explanation, and that was methodologically correct, until evidence favoured a better, explicit, satisfying explanation. Science is all about competing explanations, not about ideological positions a priori (such as "the explanation must be materialistic"). So, we are comparing these two specific theories. Do they engender different predictions? Of course they do. I will try to describe two very important ones.

1) Neo darwinism states that the functional organization in biological beings is not what it appears: IOWs, not designed, not purposeful. That biological beings appear designed and purposeful I will take for granted, as even Dawkins agrees on that, but if you disagree we can discuss that point. So, according to neo darwinism, the appearance of functional teleology is only a byproduct of a blind mechanism, RV + NS, which "simulates" teleology in the end. Indeed, as ID well shows, the neo darwinian mechanism is completely unable to explain that appearance of function and purpose as we know it today. But that's not the point here. My point is: the complexity of biological beings as we know it today is one thing. The complexity of biological beings as we will understand it in, say, ten years from now, is another thing. ID believes that the functional complexity in biological beings is the product of an intelligent designer.
Moreover, a simple analysis of the level of function as we understand it now easily shows that the designer is "much smarter" than we are (a simple, empirical consideration). The empirical evaluation of how much our understanding of biological functions has increased in the last ten years (often in completely unexpected ways) can therefore justify the following prediction: ten years from now, we will have found huge new levels of functional complexity in living beings. Now, that is a simple prediction. It can happen or not. I strongly believe it will happen. Is that a prediction compatible with the neo darwinian model? Well, darwinists will certainly say it is. But is it really? Let's see. For that model, functional organization is just the byproduct of a blind algorithm, and chance is largely the main factor in building new information. OK, NS is not chance, it is necessity, but it has a really indirect relationship with functional organization, and you can probably agree that already its role in explaining things is really stretched (a very kind euphemism) in classical neo darwinism (and almost non-existent in alternative forms). So, why should neo darwinism predict ever increasing and ever deeper levels of "apparently" teleological organization? It cannot even begin to explain what we already know; its best hope is really that we will as soon as possible "hit the wall" of this bogus functional organization, and concentrate on explaining what we have already accumulated. So, I make a prediction: in the next ten years, we will discover tons of new unexpected levels of functional organization in biological beings. And that is perfectly compatible with the design hypothesis, and totally against the neo darwinian explanation.

2) But let's go to something more specific. How and when does new biological information appear? Here the two models differ very much. The neo darwinian model requires that it appear gradually at the genome level, and that the probability barriers implicit in RV be very often "shunted" by the necessity mechanism of NS. Do we agree on that? The ID model is much more flexible. The main implementation modalities are: direct writing (top down), or some algorithm based on random variation + intelligent selection (bottom up), or a mix of the two. In direct writing, graduality is not required (although it is still possible). In the second scenario, some graduality is expected, but the times can be greatly accelerated compared to the neo darwinian scenario, and above all there is no need for naturally selectable intermediates (intelligent selection can act in very different ways). So, as our knowledge about when and how new information appears in natural history grows, we can certainly discriminate better between those two proposed scenarios.

Let me give just one example. A paper has many times been cited here by darwinists which details the possible emergence of a new functional brain protein in humans. Please remember that I am citing this paper only for methodological reasons. I cannot give any final judgement about the particular issue, because the protein has not yet been directly isolated, and its specific possible biochemical function is not known. But both those points can be quickly solved by research. The point is that, according to the paper, that protein (if it really exists) is 184 AAs long.
Therefore, if it is functional, and if it is a new basic protein domain (it should be, because the sequence has no homology with any known proteins), it is a well-defined "de novo" gene, appearing for the first time in humans. Moreover, because of its length, it is absolutely reasonable to assume that its dFSCI is high, even if that would require further research about the sequence-function relationship in that particular protein (the Durston method cannot be applied to a single gene). But that research can be done. It is derived from part of a non-coding DNA gene, emerging for the first time in primates. Four final mutations in humans transform the non-coding gene into an ORF. In particular, one mutation eliminates an internal stop codon that made the ORF impossible. Therefore, the ORF did not exist in primates, and it was never translated before its activation in humans. Therefore we have the following scenario:

a) A non-coding sequence appears for the first time in primates. It changes, mutates, but is never translated. Therefore, no NS of the results is possible at this level.

b) Then, in humans, 4 final mutations (one of them a frameshift mutation, another one a stop codon removal) "suddenly" activate an ORF in that sequence, and the ORF corresponds to a new protein, with a completely new fold and function.

Do you agree that such an empirical scenario is best explained by the design theory (in particular, I would say, by the direct writing variety), rather than by the RV + NS theory? That's what I mean when I say that the two theories have very different implications (IOWs, they make different predictions), and that our growing empirical knowledge will allow us to check which predictions are verified or falsified. I stop here for now. The rest in another post. gpuccio
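To see concretely what "activating an ORF" means in the scenario above, here is a toy sketch in Python, with a made-up sequence (not the actual gene from the paper): a single point substitution that removes an internal stop codon turns a short, dead reading frame into a much longer translatable one.

```python
# Toy illustration only: scan the three forward reading frames of a DNA
# string for the longest ATG-to-stop stretch, and show how removing one
# internal stop codon lengthens the open reading frame.
STOPS = {"TAA", "TAG", "TGA"}

def longest_orf(dna):
    """Longest ATG-initiated run of codons ending in a stop, counted in
    codons (runs without a terminal stop are ignored in this toy)."""
    best = 0
    for frame in range(3):
        run, started = 0, False
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i+3]
            if not started and codon == "ATG":
                started, run = True, 0
            if started:
                if codon in STOPS:
                    best, started = max(best, run), False
                else:
                    run += 1
    return best

before = "ATGAAACCCTAAGGGTTTTAA"  # internal TAA truncates the frame early
after  = "ATGAAACCCCAAGGGTTTTAA"  # one T->C substitution removes that stop
print(longest_orf(before), "->", longest_orf(after))  # 3 -> 6 codons
```

In the human-gene scenario described above, a frameshift plus a stop-codon removal play this role; the sketch only shows the stop-codon part.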
11.1.1.1.3 Petrushka, As regards small probabilities, compare 1 out of 10^3 vs. 1 out of 10^70. Can you relate a practically occurring event on the planet to the latter figure? Can you see the difference? Incremental change, hypercycles and other palmistry are good for books, not for reality. Eugene S
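To put Eugene's two figures side by side, a quick back-of-the-envelope sketch in Python (the trial count of 10^43 is an assumed, deliberately generous bound on relevant events, not a measured quantity):

```python
# Expected occurrences = number of trials x probability per trial.
trials = 1e43  # assumed generous bound on trial events in Earth's history
for p in (1e-3, 1e-70):
    print(f"p = {p:g}: expected occurrences ~ {trials * p:g}")
# p = 0.001: ~1e+40  (happens constantly)
# p = 1e-70: ~1e-27  (expected essentially never)
```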
Thank you very much Eric and Biped for taking the time to read my rather rambling post and to then give a meaningful response. Of course "purpose" is another hazily defined concept, and it turns out we were indeed talking at cross purposes, Eric. You point out that "purpose" is used to mean the function of any given system, while I was using it in a more teleological sense. Leaving that aside, as I understand it the logic goes, "If DNA does in fact have a designer, it is logical to assume that that designer would not have left whole chunks of genome with no function." Playing Devil's advocate, I still have trouble seeing why that assumption is any different to the "no designer worth his salt" argument that is so often wielded against ID, which, it is effectively argued, is fundamentally philosophical in nature and therefore scientifically moot. Also I still feel that my fundamental point remains unaddressed. Maybe I didn't explain myself very clearly. As I see it, we are talking about two concepts, "that which was clearly designed" (X) and "the designer(s) that must exist" (Y). ID is very good at defining X: "anything with functional complex specified information and/or irreducible complexity" or something along those lines. However I have yet to see a clear-cut, restrictive definition of Y tying together beaver, human, alien and deity. "Conscious and/or intelligent agent" just isn't specific enough, as I hope I have been able to explain. I suspect that coming up with such a definition might help. I'll make my own ham-fisted attempt, just to explain what I mean: "An intelligent agent is a discrete entity that autonomically acts to disrupt the operation of known chemical and physical laws." Has anybody tried to come up with such a definition before? englishmaninistanbul
First, let me apologize for both the lateness and length of this message. I write slowly, and I kept thinking of more things I wanted to say, and ... it got a bit out of hand. (And for KF: I'll try to reply about thermodynamics tomorrow.) gpuccio:
Let’s go to the details. Thank you from my heart for admitting that my reasoning is not circular. It should be obvious, once it is clarified, but my experience here, even with people “from the other side” that I really respect and like, has been different. So, thank you for that.
No problem. I agree that people often get too wrapped up in defending their positions to admit their own biases and mistakes. I can hardly claim to be bias-free, but I do try to avoid letting my biases and such control me.
[earlier] Maybe, if you understood better the details, you could change your mind, just out of intellectual honesty.
If you can convince me that the details justify your conclusions, then yes, I will change my mind. A bit of a warning, though: I've been following the Creation/Evolution/ID controversy for around 25 years, and have dug in detail into various topics that happen to have caught my interest. So far, for everything I've investigated, the evolution side has come out ahead or at least even. As I get older and lazier, it's harder and harder to catch my interest enough to get me to really do the research necessary to have a properly informed opinion on a topic. I've been meaning for a while to write up some notes on all the things people get wrong about thermodynamics (this is a topic that tends to get messed up by almost everyone, no matter what side they're on), and a critique of Dembski's CSI work (I wasn't impressed by Elsberry & Shallit's recent paper -- they were still calling out problems fixed in the 2005 version of CSI, but also missed new problems introduced in that version). Also, I really need to read more of Dembski's work on NFLT (I know the basic idea, but I haven't gone into the details). Net result: you'll have to seriously grab my attention to get me off my duff and into the details of the bio side of things. That said, most of the rest of your message was about a general overview of the current argument for ID (or your take on it anyway); I'm fairly familiar with that, so I think I can reply coherently. I may get a bit ranty, though: I'm not at all happy with design-centric ID (the "design detection" approach), because I think it's really a bit of a cop-out. To explain why, let me take a bit of your discussion out of order:
The only reason why so-called scientists refute that explanation a priori is that they are committed by faith to materialistic reductionism, which cannot tolerate even the possibility that conscious agents may exist who could have designed biological information.
I think you're vastly overestimating this commitment -- a few scientists are committed to materialism, but most just do what works. Historically, materialistic reductionism has produced successful theories, and nonmaterialism pretty much hasn't. As a result, the possibility of nonmaterialistic science is generally dismissed. The attitude is a little like the general attitude toward perpetual motion machines: present a scientist with the claim that you've come up with either one and the response will be along the lines of "good grief, not again -- I have real work to do, please go away." In either case, if you actually have one (perpetual motion machine or viable nonmaterialistic scientific theory), the solution isn't to rail against bias; it's to build the thing, run it, and show people that it does work. If you can do this, you will gradually get people to take you seriously, and modify their ideas of what's possible.

Consider science around 1900: physics was considered the definition of what good science should be like: not merely materialistic, but mechanistic and deterministic. Then quantum mechanics came along; it violated the rules about what a good theory should be like, but made successful predictions that no other theory did. It worked. And because of that, it changed the rules. If you can build a theory that makes good predictions, and works by the other measures of a successful scientific theory, I'm confident you can change the rules as well.

But I don't think it's possible to do that under the constraint of design-centric ID. Think about the other fields that're sometimes given as successful scientific theories of design: anthropology and forensics. In both cases, the real meat of the science isn't in detecting designed objects, but in figuring out who did it, why, how, etc. Furthermore, in both cases the "design detection" is actively driven by theories about the designers. Anthropologists don't decide that something is designed just because it doesn't match known non-intelligent sources, but because it does match known and understood human sources, goals, techniques, etc. Design-centric ID can at most use a generic description of what designers might do, like: sometimes they produce things that have some dFSCI. That doesn't really give much scope for any sort of detailed predictions.

Let me give you an example of this limitation: last year, Richard Sternberg posted a series of articles (1, 2, 3) about patterns in the distribution of LINEs and SINEs in the genomes of mice and rats. His main point is that evolution fails to provide an explanation for the patterns. I haven't checked into whether he's right about that, but the thing I found disturbing was that he didn't try to construct an ID explanation either. If I were an IDist scientist looking at that pattern, I'd be trying to figure out where the pattern could've come from: is it a design goal in and of itself? Is it a side effect of some other goal, and if so what goal and how does the side effect arise? Is it a result of some feature of the design process or how it was implemented? Think of things that could've caused the pattern, and (if possible) figure out ways to test them. Unless you can do that, ID can't claim this as a point in its favor: the best you can do is say that neither evolution nor ID has an explanation. And I don't see any way to get past that without going past the design-centric ID paradigm.

Ok, I think I'm done ranting. At least for the moment.
The “argument from ignorance” part deserves a further clarification.
I was specifically referring to the argument that at-least-partially-selectable paths don't exist, not to ID overall; however, let's proceed.
ID is not an argument from ignorance. It is, indeed, a theory made of at least two fundamental parts: a) The positive part: a1) definition of a formal property (for instance, dFSCI), frequently observed in objects that are certainly designed by conscious agents (in this case, humans), and empirically never observed in objects that are certainly not designed by conscious agents, but are rather the output of random, necessity, or mixed systems. I strongly believe that dFSCI completely satisfies that purely empirical definition. a2) demonstration that dFSCI is hugely observed in the set of objects we are discussing (biological objects, and particularly the genome), whose origin is the object of controversy. a3) inference to design by a conscious agent as the best explanation available. This is an inference by analogy, a very strong and convincing one, a positive argument from all points of view. It is also a perfectly correct scientific explanation, wholly empirical and appropriate to the problem we are trying to solve.
An analogy is not a scientific theory. It might be the basis for a theory, but the bare analogy? No. The most important feature of a scientific theory (or hypothesis) is its testability: it has to have consequences (predictions) that can be checked against reality. Tests that match reality provide support for the theory. As you've described the positive side of ID, I don't see any way to derive any testable predictions from it. (Note: the above is, of course, a bit of an oversimplification. For one thing, it's nearly impossible to derive testable predictions from just one theory. Generally, a prediction derives from several theories, subsidiary hypotheses, etc; and if the prediction fails, it can be tricky to figure out which theory should be considered falsified. The recent faster-than-light neutrino results are a good example: they falsify something, but whether they've falsified relativity, or the rule that causes precede effects, or some part of their understanding of the timing or distance measurements, or something about the operation of their detectors, or...)

Consider some opportunities for your theory to make predictions: if we have an object we know was human-designed, does your theory make any predictions about it? No, the theory essentially says it might have dFSCI (and indeed, some human-designed objects have dFSCI, and some don't). How about organisms? Again, the theory says they might have dFSCI. How about things other than organisms that aren't human-designed? Well, since one of the possibilities on the table is that the entire universe and everything in it were designed, everything might have dFSCI. You can improve the situation a bit by adding subsidiary hypotheses. For instance, if you add the hypothesis that organisms are the only designed-by-other-than-human things, you can get some predictivity. But not much, because properly testing a theory requires that you test it against evidence independent of the evidence that led to its formulation. I'm pretty sure your theory derives from considering a wide variety of objects and their properties, which doesn't leave much room to test it against new objects (especially, objects that aren't basically more of the same).

Let me give you an example where a similar argument from analogy was used in science: when scientists studied waves (e.g. sound), they found waves need some sort of medium to propagate through. Sound, for example, cannot propagate through a vacuum. When they discovered that light was a type of wave, they assumed that it similarly needed a medium, and since light can propagate through a vacuum, they hypothesised that vacuum wasn't really empty, but filled with "luminiferous aether" that carried the light waves. But they didn't stop there, they did what I was griping about Sternberg not doing, and design-centric ID not allowing: they developed (and tested and refined and...) an extensive theory about exactly how this aether behaved. It was actually quite strongly supported. Then came the Michelson–Morley experiment. It was an attempt to measure the motion of Earth relative to the aether by looking for changes in the speed of light, and it didn't find any. This is usually described as having refuted the aether theory, but what it actually did is much subtler: it didn't show that aether was nonexistent, it showed that it was irrelevant. If you include relativistic corrections in the distance and timing elements of MM's experiment (i.e.
correct the subsidiary hypotheses), their result is entirely consistent with a stationary aether. But the undetectability of aether drift left aether with no real (=contributing to predictions) role in its own theory. Since the predictive parts of the aether theory could be reformulated without reference to aether (instead, they're described in terms of abstract electric and magnetic fields), the only reason to have the aether in the theory was to satisfy the analogy. That wasn't enough reason; it was jettisoned. I'll draw two lessons for ID from the example of aether: first, your hypotheses and theories really need to be able to make predictions. Second, if you don't want the designer(s) to be jettisoned from ID as irrelevant, he/she/it/they have to be active participants in the theory, with properties that contribute to its predictions. Going back to my rant, I don't see a way to do either of these within the constraints of design-centric ID. (As usual, I've oversimplified the aether story a bit. Most significantly, irrelevance wasn't the only problem with aether: its properties wound up not making much sense for a physical thing, so it was getting a bit implausible anyway.) I also think the analogy itself is rather weak, but this is a relatively unimportant point and I've gone on long enough...
b) The second part is the falsification of the currently accepted theory of neo darwinism. That is necessary too, because one of the pillars of ID reasoning is that dFSCI is observed in biological information, IOWs that the functional information there is not explained by any known chance, necessity, or mixed mechanism. As neo darwinism claims to do exactly that, it is the duty of ID to demonstrate that that theory is wrong, illogical and empirically unsupported. And that's exactly what ID does.
As usual, I disagree with your assessment of the situation. I don't want to go on too long here, but let me just break down what I see as the level of evidentiary support for various parts of evolutionary theory (and then I'll discuss information at the end):

- Abiogenesis happened: very strongly supported. The very early Earth (& before that the early universe) couldn't support anything like life, and it's here now, so it must've originated at some point.

- Abiogenesis happened by X path: very weak at this point. In the first place, we don't have any fully worked out paths by which abiogenesis could have happened (though that doesn't mean there are no such paths). Second, there's very little evidence left to go on (there is some evidence, like possible molecular relics from an RNA world, but not much of it).

- Abiogenesis happened by an entirely naturalistic process: weak-to-nonexistent. Scientific investigations assume this both for methodological reasons and for lack of evidence otherwise; but it's really just an assumption. Furthermore, I don't really see any way to put it in testable form, meaning that I don't know if it can be supported by evidence.

- Common ancestry (all -- or at least many -- organisms are related to each other): very strong for the "many" version, much weaker for "all". Basically, the evidence we have supports common ancestry; for parts of the family tree we have lots of evidence about, this is very strongly supported; for parts we have less evidence for, it's proportionally weaker. Examples of areas where it's weaker: species we haven't discovered yet (that's sorta the extreme case), the relations between archaea, eubacteria, and eukaryotes (the split happened so long ago there's relatively little evidence left), and relations among eubacteria (evidence is mostly limited to genetic similarity, and that has a low signal-to-noise ratio due to horizontal transfer). An example of an area where it's strong: mammals, including the one everyone cares about, humans (here we have a variety of sources of evidence, all pointing to pretty much the same history -- some of the evidence is fairly strong on its own, but the principle of consilience means the whole is even stronger than the sum of its parts). (You're probably going to ask if I've been reading Jonathan M's recent articles on the evidence for common ancestry, and the answer is no I haven't, and yes I probably should. But when I've dug into such things in the past, evolution's come up the winner.)

- Mutation and selection contribute to evolution (i.e. have contributed to the differences between modern organisms and their ancestors): very very strong. Both are observed in the lab and in the wild, and given our understanding of genetics it's hard to see how either could not happen.

- Mutation and selection are the only mechanisms of evolution: known to be false. Don't leave out genetic drift and gene flow, lots of special variants on the primary mechanisms (hypermutation, chromosome fission & fusion, meiotic drive, hitchhiking, the founder effect, etc), and a few outliers (e.g. endosymbiotic capture)...

- The known mechanisms of evolution (see above) are the only ones in operation: sort-of assumed both for methodological and Occam's-razorish reasons, but probably false. We keep finding new mechanisms (or at least variants), and there's no reason to think that's going to suddenly stop tomorrow. It may sound nonsensical to assume something that's probably false, but it's actually a good assumption in certain ways.
Essentially, since we don't know what other mechanisms there are, we don't know how to take them into account, and any attempt to do so runs the risk of throwing us even further off. A simple analogy: suppose we make a low-accuracy measurement (say, weighing a hog with Robert Burns' method); we may be pretty sure the result is wrong, but since we don't know how far off or in which direction, it's nonetheless going to be our best estimate of the actual value. Basically, it's reasonable to use it as a working assumption, and as a starting point for further investigation, just don't actually fall into believing it's true.

- Evolution happens by entirely naturalistic processes: weak-to-nonexistent. Scientific investigations assume this both for methodological reasons and for lack of evidence otherwise; but it's really just an assumption. Pretty much like the analogous abiogenesis one. We have lots of evidence for naturalistic mechanisms of evolution, but that hardly rules out non-naturalistic contributions.

- Mutation and selection are the most important mechanisms of evolution: legitimately controversial, as well as subjective (depending on what you consider important). Most DNA-sequence-level differences between organisms are neutral, so selection's irrelevant to their origin, so mutation and drift seem to be the major players at this level. But if you look at differences in phenotype rather than genotype, the non-neutral differences are the ones that matter, and hence selection plays a much larger role. How much larger, and whether (/how much) it outweighs other factors is something scientists argue about...

- The known mechanisms of evolution (see above) are sufficient to account for the properties of modern organisms: weak, but in the absence of counter-evidence, reasonable to assume. (I know, this is the one where you hit the ceiling. Please wait, and hear me out first.) There's been a lot of effort by both creationists and ID supporters to find & describe properties of organisms that known evolutionary mechanisms couldn't produce, and (in my opinion and that of the mainstream scientific community) they haven't found any.

Since you're particularly interested in information, I'll discuss that. Antievolutionists have been trying to use information to refute evolution for a long time. I think A. E. Wilder-Smith may've been the first to seriously pursue this line of reasoning. He basically claimed (IIRC, it's been a while since I read him) that information theory says that information can only come from intelligent sources. Unfortunately for him, both the statistical and algorithmic theories of information (the two primary theories) say almost the opposite: in the statistical theory, information sources are generally assumed to be random (even when they really aren't, because the difference between intelligently-created and random information doesn't matter to the aspects it studies). The algorithmic theory doesn't generally deal explicitly with the creation of information, but random processes are certainly capable of producing Kolmogorov complexity (AIT's measure of information). Some later, more clueful antievolutionists (mainly Dembski) realized that the definitions from the standard theories of information didn't give them any basis to refute evolution, and so set out to create their own theories and definitions that could provide a framework they could use.
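(A quick way to see that point about random processes and Kolmogorov complexity, as a rough sketch: compression length is a crude, computable stand-in for Kolmogorov complexity, and random data resists compression while ordered data doesn't.)

```python
# Compression length as a rough proxy for Kolmogorov complexity
# (true Kolmogorov complexity is uncomputable; zlib only gives an upper bound).
import os, zlib

random_data  = os.urandom(10_000)   # output of a random process
ordered_data = b"AB" * 5_000        # same length, highly regular

print(len(zlib.compress(random_data)))   # ~10,000: essentially incompressible
print(len(zlib.compress(ordered_data)))  # a few dozen bytes: highly compressible
```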
To understand the problem they faced a little better, consider that novels, weather reports, blueprints, and lottery numbers can all reasonably be considered information, but they're all very different types of information. Evolution can clearly create some of these types of information: mutations can create lottery-number-style information, and selection can create weather-report-style information. It cannot (at least as far as I can see) create anything like a novel, but since there doesn't seem to be any of that sort of information in the genome, that doesn't mean anything. The more interesting question is something more like blueprint-style information. If it (or something similar) could be properly defined (as opposed to what I've done) and it can be shown that evolution has no way to add it to the genome, then you'd have a case.

There've been a number of attempts at this over the years. Dembski's first, complex specified information, takes what I'd call a rationalist approach: it tries to arrange the definition of information so that it rules out (well, actually just limits) the production of information by natural means. The early versions of CSI had some pretty serious problems; later definitions cleared things up a great deal, but left at least one showstopper: in order to prove that something had CSI, you had to first show that natural processes had a very small probability of producing that thing. Essentially, CSI can't be used to prove that evolution doesn't work, because in order to show that you'd have to already have proven that evolution doesn't work. (I'll skip over Dembski's more recent work on NFLT and active information, because I haven't read enough of it to really comment knowledgeably.)

The definition you're using of dFSCI takes a very different approach to ruling out natural production. Where Dembski is rationalist (mathematical derivations of why natural processes can't produce CSI), you use an empirical argument (natural processes have never been observed to produce dFSCI). I discussed some general problems with this approach earlier, but let me take a closer look at this particular argument. I think it's pretty clear that evolutionary processes can produce increases in dFSCI, at least if your measure of dFSCI is sufficiently well-behaved. Consider that there exist point mutations that render genes nonfunctional, which I assume you'd consider a decrease in dFSCI. Point mutations are essentially reversible, meaning that if genome A can be turned into genome B by a single point mutation, B can also be turned into A by a single point mutation. Therefore, the existence of point mutations that decrease dFSCI automatically implies the existence of point mutations that increase dFSCI. "But," I hear you say, "all observed mutations that change dFSCI decrease it." I'd go look up counterexamples, but it's really a moot issue: it doesn't really matter whether dFSCI-increasing mutations are observed, because logically they must exist. If they are not observed, that just means that the organisms' starting genomes are at local maxima of dFSCI, and an inability to go "up" from where they are does not imply the inability to get "up" to where they are.

This is rather abstract, so let me give an analogy that might clarify what I'm talking about. Consider a bunch of people milling around the top of a hill. They have very short memories, and are arguing about how they got to the top of the hill.
The walkists think they just walked up it, but the helicopterists think there must've been some sort of aerial transportation involved. To prove their case, the helicopterists point out that walking is always observed to take them down, not up, so clearly it cannot be how they got to the top of the hill. Their argument is wrong for exactly the same reason yours (at least, the one I attribute to you) is: as long as walking is symmetric (like point mutations), the ability to walk down automatically implies the ability to walk up. The reason they cannot walk up from the top of the hill is not because of a limitation of walking, but because of a special property of their starting position. And (most important) that special property doesn't mean that their position cannot be reached by walking up. Note that if walking were not symmetric -- for example, if there were slopes too steep to climb -- my argument would fail. But point mutations are reversible, meaning that at least for those types of mutations, the ability to decrease dFSCI implies the ability to increase dFSCI. Insertions and deletions are not generally symmetric, but they are indirectly reversible: any insertion can be reversed by a corresponding deletion, and any deletion can be reversed by a sequence duplication followed by a bunch of point mutations. There are some ways your argument could escape this problem: for one, you could claim that all genomes have equal dFSCI, so that your argument doesn't limit evolution, only abiogenesis (which is not, as far as I can see, symmetric). Or your dFSCI measure might not be defined as a function of the DNA sequence (i.e. you might have a measure where a particular mutation and its reverse both decreased dFSCI). But it's time for bed, and this message is too long already. Gordon Davisson
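Gordon's symmetry point is easy to make concrete. A minimal sketch in Python, with a toy genome whose "function" is purely hypothetical:

```python
# Point substitutions are symmetric: if one substitution turns genome A into
# genome B, the reverse substitution turns B back into A. So any single-step
# loss of function implies a single-step gain between the same two sequences.
def substitute(seq, pos, base):
    return seq[:pos] + base + seq[pos+1:]

functional = "ATGGCACGT"                      # assumed functional toy sequence
broken     = substitute(functional, 4, "T")   # C->T knocks it out ("downhill")
regained   = substitute(broken, 4, "C")       # T->C is the exact reverse ("uphill")
assert regained == functional
print(functional, "->", broken, "->", regained)
```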
Petrushka:
A naturalistic history would involve one mutation at a time. With occasional chromosomal changes, such as duplications and transpositions. ID calculations never take this into consideration. Basically because such a history would never be particularly improbable.
This is simply false. ID calculations assume the best case scenario for naturalistic pathways, and those pathways continue to be shown as exceedingly improbable. There are two aspects to information arising in life: prebiotic, and post-biotic. Some (including probably some or even most ID proponents) would view the first as the most problematic. Indeed, Meyer's latest book is focused exclusively on the first, while Behe's work is focused primarily on the second. In either case, every opportunity is made in the calculations for material mechanisms to do their alleged work; and each time they come up wanting.
Behe is simply full of it.
Look in the mirror. I don't think you understand the relationship of probability to the ID argument, as your examples demonstrate. Of course improbable things happen. They happen all the time. Improbability is only one aspect of detecting design. In addition, there has to be specification. Eric Anderson
Upright Biped, Thanks for weighing in. I think you make some excellent points. If I can distill what you are saying into generalized ID terms, the semiotic nature of cellular processes is another striking example of (i) functional complex specified information, and (ii) irreducible complexity. I agree with you that it is a powerful example and one that cannot be ignored by the thoughtful person. However, I fear the materialist who cannot or will not understand the basic concept of functional complex specified information and the fact that it points to a designing intelligence will not be swayed by the semiotic example, as powerful as it may be. That is because the dedicated materialist has already committed the intellectual fallacy of thinking that such systems can arise through purely naturalistic and materialistic processes. Therefore, she will simply ignore the argument altogether (witness your recent attempt at an exchange on the other website) or will argue that, "yes, the semiotic processes in the cell are incredible, and isn't it incredible what evolution can produce!?" Eric Anderson
Well Petrushka, seeing as non-local, beyond space and time, quantum entanglement/information is now found along entire protein structures which is, among other things, constraining functional proteins from 'evolving' to any new, extremely rare, sequences of functionality in the first place, I would have to say that the only thing that is 'full of it' is the entire reductive materialistic foundation of the neo-Darwinian framework that you are so enamored with. But hey petrushka, it's only science! :) ,,, All you have to do, to save your beloved atheistic/materialistic delusions, is to refute Alain Aspect, and company's, falsification of local realism i.e. of reductive materialism!!! Moreover, you don't rigorously establish a point in science by denigrating another person, such as you have done with Dr. Behe, you do it by actually providing concrete, clear, repeatable experimental proof for your position that unquestionably demonstrates that it is true!!! notes:
Falsification Of Neo-Darwinism by Quantum Entanglement/Information https://docs.google.com/document/d/1p8AQgqFqiRQwyaF8t1_CKTPQ9duN8FHU9-pV4oBDOVs/edit?hl=en_US Where's the substantiating evidence for neo-Darwinism? https://docs.google.com/document/d/1q-PBeQELzT4pkgxB2ZOxGxwv6ynOixfzqzsFlCJ9jrw/edit Coherent Intrachain energy migration at room temperature - Elisabetta Collini & Gregory Scholes - University of Toronto - Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. http://www.scimednet.org/quantum-coherence-living-cells-and-protein/ Quantum states in proteins and protein assemblies: The essence of life? - STUART HAMEROFF, JACK TUSZYNSKI Excerpt: It is, in fact, the hydrophobic effect and attractions among non-polar hydrophobic groups by van der Waals forces which drive protein folding. Although the confluence of hydrophobic side groups are small, roughly 1/30 to 1/250 of protein volumes, they exert enormous influence in the regulation of protein dynamics and function. Several hydrophobic pockets may work cooperatively in a single protein (Figure 2, Left). Hydrophobic pockets may be considered the “brain” or nervous system of each protein.,,, Proteins, lipids and nucleic acids are composed of constituent molecules which have both non-polar and polar regions on opposite ends. In an aqueous medium the non-polar regions of any of these components will join together to form hydrophobic regions where quantum forces reign. http://www.tony5m17h.net/SHJTQprotein.pdf Myosin Coherence Excerpt: Quantum physics and molecular biology are two disciplines that have evolved relatively independently. However, recently a wealth of evidence has demonstrated the importance of quantum mechanics for biological systems and thus a new field of quantum biology is emerging. Living systems have mastered the making and breaking of chemical bonds, which are quantum mechanical phenomena. Absorbance of frequency specific radiation (e.g. photosynthesis and vision), conversion of chemical energy into mechanical motion (e.g. ATP cleavage) and single electron transfers through biological polymers (e.g. DNA or proteins) are all quantum mechanical effects. http://www.energetic-medicine.net/bioenergetic-articles/articles/63/1/Myosin-Coherence/Page1.html
Here's another measure for quantum information in protein structures:
Proteins with cruise control provide new perspective: Excerpt: “A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order.” http://www.princeton.edu/main/news/archive/S22/60/95O56/
The preceding is solid confirmation that far more complex information resides in proteins than meets the eye, for the calculus equations used for 'cruise control', which must somehow reside within the quantum information that is 'constraining' the entire protein structure to its 'normal' state, are anything but 'simple classical information'. For a sample of the equations that must be dealt with, to 'engineer' even a simple process control loop like cruise control along an entire protein structure, please see this following site:
PID controller Excerpt: A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly and rapidly, to keep the error minimal. http://en.wikipedia.org/wiki/PID_controller
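For a feel of what even the simplest such control loop involves computationally, here is a minimal discrete-time PID sketch in Python; the gains, setpoint, and toy "plant" response are arbitrary assumptions, not values from any real controller:

```python
# A minimal discrete-time PID loop: the controller output is a weighted sum
# of the present error (P), its accumulated past (I), and its rate of change (D).
def pid_step(error, state, kp=1.0, ki=0.1, kd=0.05, dt=0.1):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

setpoint, measured = 60.0, 50.0       # arbitrary toy numbers
state = (0.0, setpoint - measured)    # seed prev_error to avoid a derivative kick
for _ in range(5):
    correction, state = pid_step(setpoint - measured, state)
    measured += 0.5 * correction      # toy "plant" response to the correction
    print(round(measured, 2))         # converges toward the setpoint
```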
It is very interesting to note that quantum entanglement, which conclusively demonstrates that ‘information’ in its pure 'quantum form' is completely transcendent of any time and space constraints, should be found in molecular biology on such a massive scale, for how can the quantum entanglement 'effect' in biology possibly be explained by a material (matter/energy) 'cause' when the quantum entanglement 'effect' falsified material particles as its own 'causation' in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help, since a timeless/spaceless cause must be supplied, which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments about various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply, since the cause is not within the material particles in the first place! Verse and music:
John 1:1-5 1 In the beginning was the Word, and the Word was with God, and the Word was God. 2 He was with God in the beginning. 3 Through him all things were made; without him nothing was made that has been made. 4 In him was life, and that life was the light of all mankind. 5 The light shines in the darkness, and the darkness has not overcome it. Rascal Flatts - Unstoppable (Olympics Mix) http://www.youtube.com/watch?v=v1xF1L8ZS7s
bornagain77
Behe is full of it, you say? Full of what? While Dr Behe is cleaning up, would you (finally) mind addressing the evidence you have ducked more times than can be counted? The translation of genomic information requires specific physical objects and dynamics. These are observable properties. It is observed that two categories of physical objects within the system have qualities that are immaterial to their physical make-up. This process is coherently understood, and these immaterial qualities are logically necessary for the system to operate properly. HOW does an immaterial quality become instantiated into a material object by the purely material processes you defend? Give yourself a pep talk; hit the evidence for ID as a welcome change of pace. Upright BiPed
Hello Eric and Englishman, Not to distract from your conversation, but I think there is a far more compelling (and sustainable) conversation about the predictions of ID and TOE with regard to DNA than pondering the percentage of junk DNA (to be confirmed at some point in the distant future). Please allow me to cut and paste from another thread...
If the theory of material origins is actually true, then the idea itself predicts that the information in the genome is not semiotic – to borrow Dr Moran’s term – it is only ‘analogous’ to the kind of information transfer we as sentient beings use. One is symbolic and the other is chemical. Indeed, that position is argued by materialists (one way or another) every day on this forum. The information transfer in the genome is said to be no more than a cascade of physical reactions, but of course, all information transfer is a cascade of physical reactions, so that is no answer, and it never has been. But why does the truth of materialism predict this (chemical-only transfer) anyway? Because the representations and protocols involved in semiosis would have only appeared on the map after billions of years of evolutionary advancement in organisms. An imaginative materialist may see a chemically non-complex origin of inheritable Life in his or her mind’s eye, but that image blows up if that heredity is accomplished by using representations and protocols. Ask a materialist “what came first on the great time-line of Life: a) molecular inheritance by genetics, or b) representations and protocols?” Typically, confusion ensues, and the embattled assumptions of materialism are pushed to the very front of the defense. On the other hand, if ID is said to be true, then its own prediction is on the line. That prediction has been that the information causing life to exist is semiotic. And again, that is exactly what is argued (one way or another) on this board every day. When nucleic sequences were finally elucidated, we did not find an incredible new and ingenious way in which physical law could record and transfer information, we found the exact same method of information transfer that living agents use: semiosis. And as it turns out, if one properly takes into account the observable physical entailments of information transfer during protein synthesis, and compares them to the physical entailments of any other type of recorded information transfer (without exception), they are precisely the same. It requires an arrangement of matter to serve as a representation within a system, it requires an arrangement of matter to physically establish an immaterial relationship between two discrete objects within that system (the input and output), it requires an effect to be driven by the input of the representations, and it requires that all these physical things remain discrete. The semiotic state of protein synthesis is therefore confirmed by the material evidence itself, and with it, one of the predictions of ID theory. Of course, I have no authority, and I am not speaking for ID writ large, just for myself and anyone else who might hold this view.
Upright BiPed
Behe is simply full of it. Everything that happens can be considered improbable in retrospect. And yet things happen. It is incredibly improbable that your particular parents met, married, and conceived at the exact moment required to produce you, and yet they did. The Axe argument was that functional space was too sparse to support incremental change, so when Lenski and Thornton demonstrate incremental pathways, suddenly it becomes improbable that they could be found. What a surprise. Suddenly the threshold for dFSCI is reduced from 500 bits to one bit. Petrushka
By the way, I should add that DNA is most definitely elegant and breathtaking. Everything we are learning about it underscores the fact that we are dealing with a system designed by a designer or designers of almost incomprehensible skill and capability. However, as it relates to the inference of design, my point is that "elegance" is a more subjective concept that does not necessarily or exclusively belong to the design inference. Functional complex specified information is the key to inferring design, whether we subjectively think the particular item is "elegant" or not. Eric Anderson
Thanks, englishmaninistanbul. Two points: 1. ID itself does not get into the question of the designer's purpose in any ultimate purpose sense. We can talk about a concept of "purpose" in the narrow sense of an object having a definable function or being a "purposeful" object in terms of what it objectively does. As a result, there is "purpose" in terms of engineering or functionality (which is a form of complex functional specified information). But the broader "Why" of a designer's actions is not part of ID. 2. More importantly, even if we accept that DNA has a purpose in the sense of its engineering and function (which I think most everyone agrees on), it does not follow that "all parts" of DNA must have a purpose. Machines wear out, break down, etc. Also, as was discussed on another thread, it is possible that some DNA does not have current function but is there for future use/development. That said, there are good design and engineering reasons to believe that the great majority, if not all, of DNA is functional. This has been a general prediction of ID for a long time (as articulated by Bill Dembski a number of years ago), in contrast to the junk DNA hypothesis pushed by many evolutionary proponents. But this does not mean that "all parts" of DNA have function. There may be some small amount of junk. Thus, if we were to describe the predictions of ID and Darwinism with respect to DNA we would say that: (i) ID predicts most DNA will have function, though some smaller portion could be junk, (ii) Darwinism predicts that some small portion is functional, with the vast majority being junk.* We now are starting to see the evidence, and it is lining up very strongly on the side of (i). * Note that some evolutionists are now starting to back pedal from the junk DNA hypothesis as more and more DNA is shown to be functional. But even recently, prominent proponents of Darwinian theory have continued to maintain that junk DNA provides evidence for Darwinism and against design. Eric Anderson
Well, if the logic goes "Designers have purpose, and DNA was designed, therefore all parts of the genome must have a purpose" then it kinda does require "elegance" in that sense. Maybe we're talking at cross purposes. englishmaninistanbul
"So are we saying that ID necessarily posits a designer that is interested in elegance of design?" Teleologic does not require elegance. Eric Anderson
Just having a little think to myself, it occurs to me that part of the trouble is in defining "consciousness" and "agency." I just listened to that Signature in the Cell discussion over at discovery.org where Stephen Meyer describes how he started with what you might describe as a "hunch" that design is the answer, and how he then set out to find a scientific way of describing and justifying that hunch. I hope that I'm just ill-informed and somebody will put me right, but I still haven't come across a definition of "consciousness" or "intelligent agency" beyond what is assumed to be common knowledge. To use an example: "Driving responsibly" is a concept we're all familiar with. But when a policeman pulls over an adrenaline-addicted wannabe rally driver he can't just say "You're not driving responsibly enough sir" and give him a ticket. He has to refer to legal definitions of "driving responsibly" phrased in words of one syllable that offenders can't weasel out of. So with "consciousness" and "intelligent agency", we all know what we mean but we have trouble describing it. We know that a beaver dam qualifies as intelligent design as much as computer software. So how do we define these intelligent agents, as opposed to any other phenomenon? It reminds me of an interview with Antony Flew on Youtube, where Flew confirms that he accepts there must be "an intelligence", and then reasons that the intelligence must be "omnipotent", but that we're not entitled to infer anything else in a religious sense. When questioned on whether that intelligence is eternal, he says "you can't really separate the eternity from the omnipotence." The interviewer asks if this must be a "personal force or being", and among other things Flew says "He's got to be conscious if he's going to be an agent." And so on. I'm sure people are going to debate the meaning of all these words and which thing necessitates the other, but that's really the problem. I suppose what's needed is not so much an understanding of consciousness or agency or even a definite statement of whether it's illusory or real, just a universally acceptable, bare bones, legalistic working definition of what conscious (?) agents are in terms of observed phenomena, that's portable to the origin of life debate. To give an example of why I think there's a need for this, on the Put a Sock In It page, in the opening paragraph, we have the following:
Intelligent design does not speak to the nature of designers anymore than Darwin’s theory speaks to the origin of matter.
True in context, but you have to define this designer somehow, at least in terms of lowest common denominators between all observed designers. The heading "Intelligent Design is Not a Valid Theory Since It Does Not Make Predictions" answers that accusation by pointing to the vindicated prediction that junk DNA would turn out to be functional. However it is admitted that (italics mine):
predictions of functionality of “junk DNA” were made based on teleological bases
So are we saying that ID necessarily posits a designer that is interested in elegance of design? By that definition Windows software is not ID! A small joke there, but I think you can see my point. Now it is certainly true that someone who subscribes to ID is also free to step into the realm of teleology which materialism flatly denies, but might it not be more technically correct to say that ID allowed for the prediction of junk DNA function, as opposed to being responsible for it all by itself? Another example: The heading "The Designer Must be Complex and Thus Could Never Have Existed" is handled thusly:
This is obviously a philosophical argument, not a scientific argument, and the main thrust is at theists. So I will let a theist answer this question
A correct statement in its context, but it doesn't address the question of where the scientific definition ends and philosopho-religious speculation begins. You can't say that the word "designer" is utterly undefinable scientifically otherwise it would be as unscientific a term as "God." So there has to be a minimum definition acceptable in scientific terms for what does and does not qualify as a "designer", "intelligent agent", what have you. I'm not sure whether I'm raising valid points or if I just haven't done enough research. But I really would like to know if these questions have been addressed. Thank you for reading, if you made it this far :) englishmaninistanbul
as to:
Thornton has added a new twist — actually trying all the possible variations between two cousin genes and checking to see that there is a neutral pathway. I’m sure this will become a common kind of research.
Drum roll please:
Wheel of Fortune: New Work by Thornton's Group Supports Time-Asymmetric Dollo's Law - Michael Behe - October 5, 2011 Excerpt: Darwinian selection will fit a protein to its current task as tightly as it can. In the process, it makes it extremely difficult to adapt to a new task or revert to an old task by random mutation plus selection. http://www.evolutionnews.org/2011/10/wheel_of_fortune_new_work_by_t051621.html

Severe Limits to Darwinian Evolution: - Michael Behe - Oct. 2009 Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on.,,,Thanks to Thornton's impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed. (which was 1 in 10^40 for just two protein-protein binding sites) http://www.evolutionnews.org/2009/10/severe_limits_to_darwinian_evo.html#more

Dollo's law, the symmetry of time, and the edge of evolution - Michael Behe - Oct 2009 Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution.,,, A time-symmetric Dollo's law turns the notion of "pre-adaptation" on its head. The law instead predicts something like "pre-sequestration", where proteins that are currently being used for one complex purpose are very unlikely to be available for either reversion to past functions or future alternative uses. http://www.evolutionnews.org/2009/10/dollos_law_the_symmetry_of_tim.html
petrushka, do you even really care that your integrity is completely shot after repeatedly being shown to be wrong??? Why do you do this? Exactly what is the payoff for living in a lie? bornagain77
ID proponents use probability to ascertain the likelihood of a materialistic and naturalistic history of an object...
A naturalistic history would involve one mutation at a time, with occasional chromosomal changes such as duplications and transpositions. ID calculations never take this into consideration, basically because such a history would never be particularly improbable. Now, in many cases we don't know the history, so we extrapolate from the processes that we can observe. We also extrapolate from the genetic distance between cousin species. Thornton has added a new twist -- actually trying all the possible variations between two cousin genes and checking to see that there is a neutral pathway. I'm sure this will become a common kind of research. It's a fossil record in the genome. Petrushka
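As a rough illustration of what "trying all the possible variations" amounts to, here is a minimal sketch in Python. It is not Thornton's procedure or code; the four-letter sequences and the set of "functional" variants are invented stand-ins for what, in the lab, would be assayed proteins:

from itertools import permutations

# Toy Thornton-style search: given two short "cousin" gene variants
# differing at a few sites, enumerate every order in which the differing
# sites could have mutated, and ask whether some order keeps every
# intermediate functional (a "neutral pathway").

def differing_sites(a, b):
    return [i for i in range(len(a)) if a[i] != b[i]]

def neutral_path_exists(start, end, is_functional):
    sites = differing_sites(start, end)
    for order in permutations(sites):          # every mutational order
        seq, ok = list(start), True
        for site in order:
            seq[site] = end[site]              # apply one substitution
            if not is_functional("".join(seq)):
                ok = False
                break
        if ok:
            return True                        # found a viable order
    return False

# Hypothetical stand-in for a lab functionality assay (invented data).
functional = {"AAAA", "AABA", "ABBA", "ABBB", "BBBB"}
print(neutral_path_exists("AAAA", "BBBB", lambda s: s in functional))  # True

The factorial growth in the number of orderings is one reason studies of this kind stay close to cousin genes that differ at only a handful of sites.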
Petrushka: "The key requirement is regularity. One cannot study miracles except as exceptions to the background of regularity . . ." Who is talking about miracles? Is Mount Rushmore or the sculptures on Easter Island a result of miracles? The observable fact is that some things in the world are designed and others are not. Can we tell whether something falls in the designed category if we don't have a record of the causal history? That is it. Very simple question. It is a question that is asked and answered all the time in several fields. Nothing to do with miracles. Eric Anderson
Petrushka: "ID proponents use probability to suggest the history of an object, and the assumed history of the same object as parameters in the calculation of probability." This is a misrepresentation. ID proponents use probability to ascertain the likelihood of a materialistic and naturalistic history of an object, based on what is currently known in chemistry and physics, and then draw a perfectly rational inference based on that probability. The parameters are whatever information is currently known in chemistry and physics that could bear on the alleged materialistic history. Don't like the parameters of the calculation? Then please detail for us what materialistic mechanism/history you propose, and then we can do some calculations to see if it has any legs in the real world. Eric Anderson
Petrushka, genetic translation contains two material objects with (observable) immaterial qualities. These are coherently understood phenomena, which are logically necessary to accomplish the task. How does an immaterial quality become instantiated into a physical object? What mechanisms are causally adequate to accomplish such a thing? My bet? You'll say anything at all before you'll allow yourself to get in the ring with the actual* evidence. actual: a : existing in act and not merely potential b : existing in fact or reality c : not false or apparent: existing or occurring at the time ... Upright BiPed
'Evolution' breaks the fact that humans only come from humans. Now DNA can also be used in creation. The idea is that a creator made one animal and from that (even after many generations) made another, similar animal. To go even further, God could take body materials to make another animal, thus leaving vestiges of that first animal's history. This explains the 'common descent' many 'evolutionary' scientists hold on to. But there is also evidence that common descent is not correct. That is why some say uncommon descent. But both ideas are not totally correct. In Creative Patterns, we see a descent of similar animals, but many starts to these descents, not through 'evolution' but through Creation. This is not just an idea; we are told that God actually did creation like that. That was with Adam and Eve. God actually took bone, DNA, muscle tissue, etc., to create Eve. He did not just create her from scratch, nor did he use just the DNA. Does this not explain what both the common descent and uncommon descent scientists see? It is a combination of both views. If anyone thinks this is incorrect, let me know why. MrDunsapy
as to: Evolution is learning and doesn't violate any laws or probabilities. No??? OK, it just violates the second law of thermodynamics!!! :)
Evolution's Thermodynamic Failure - Granville Sewell (Professor of Mathematics - Texas University - El Paso) http://www.spectator.org/dsp_article.asp?art_id=9128 Prof. Granville Sewell on Evolution: In The Beginning and Other Essays on Intelligent Design - video http://www.youtube.com/watch?v=CHOnqDNJ0Bc Granville Sewell - Mathematics Dept. University of Texas El Paso (Papers and Videos) http://www.math.utep.edu/Faculty/sewell/
bornagain77
Petrushka: Put simply, science assumes that anything that can interact with matter is matter. Simple it certainly is, and simply wrong. Science does not assume anything like that. You certainly do. But you are not "science". On regularity I agree. When design is applied, there are regular phenomena that can be observed. gpuccio
As to constructing a 'background of regularity', ignoring the elephant in the living room fact that materialism presupposes chaos as the 'background', it seems that Quantum Mechanics, once again, refuses to crawl into this 'materialistic box' of your own making: notes: First I noticed that the earth demonstrates centrality in the universe in this video Dr. Dembski posted a while back;
The Known Universe – Dec. 2009 – a very cool video (please note the centrality of the earth in the universe) http://www.youtube.com/watch?v=17jymDn0W6U
,,, for a while I tried to see if the 4-D space-time of General Relativity was sufficient to explain centrality we witness for the earth in the universe,,,
Where is the centre of the universe?: Excerpt: The Big Bang should not be visualized as an ordinary explosion. The universe is not expanding out from a centre into space; rather, the whole universe is expanding and it is doing so equally at all places, as far as we can tell. http://math.ucr.edu/home/baez/physics/Relativity/GR/centre.html
,,,Thus from a 3-dimensional (3D) perspective, any particular 3D spot in the universe is to be considered just as ‘center of the universe’ as any other particular spot in the universe is to be considered ‘center of the universe’. This centrality found for any 3D place in the universe is because the universe is a 4D expanding hypersphere, analogous in 3D to the surface of an expanding balloon. All points on the surface are moving away from each other, and every point is central, if that’s where you live.,,,
4-Dimensional Space-Time Of General Relativity – video http://www.metacafe.com/watch/3991873/
,,,yet I kept running into the same problem for establishing the sufficiency of General Relativity to explain our centrality in this universe, in that every time I would perform a ‘thought experiment’ of trying radically different points of observation in the universe, General Relativity would fail to maintain centrality for the radically different point of observation in the universe. The primary reason for this failure of General Relativity to maintain centrality, for different points of observation in the universe, is due to the fact that there are limited (10^80) material particles to work with. Though this failure of General Relativity was obvious to me, I needed more proof so as to establish it more rigorously, so I dug around a bit and found this,,,
The Cauchy Problem In General Relativity – Igor Rodnianski Excerpt: 2.2 Large Data Problem In General Relativity – While the result of Choquet-Bruhat and its subsequent refinements guarantee the existence and uniqueness of a (maximal) Cauchy development, they provide no information about its geodesic completeness and thus, in the language of partial differential equations, constitutes a local existence. ,,, More generally, there are a number of conditions that will guarantee the space-time will be geodesically incomplete.,,, In the language of partial differential equations this means an impossibility of a large data global existence result for all initial data in General Relativity. http://www.icm2006.org/proceedings/Vol_III/contents/ICM_Vol_3_22.pdf
,,,and also ‘serendipitously’ found this,,,
THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010 Excerpt: Gödel’s personal God is under no obligation to behave in a predictable orderly fashion, and Gödel produced what may be the most damaging critique of general relativity. In a Festschrift, (a book honoring Einstein), for Einstein’s seventieth birthday in 1949, Gödel demonstrated the possibility of a special case in which, as Palle Yourgrau described the result, “the large-scale geometry of the world is so warped that there exist space-time curves that bend back on themselves so far that they close; that is, they return to their starting point.” This means that “a highly accelerated spaceship journey along such a closed path, or world line, could only be described as time travel.” In fact, “Gödel worked out the length and time for the journey, as well as the exact speed and fuel requirements.” Gödel, of course, did not actually believe in time travel, but he understood his paper to undermine the Einsteinian worldview from within. http://www.firstthings.com/article/2010/07/the-god-of-the-mathematicians
,,,But if General Relativity is insufficient to explain the centrality we witness for ourselves in the universe, what else is? Universal Quantum wave collapse to each unique point of observation is! To prove this point I dug around a bit and found this experiment,,, This following experiment extended the double slit experiment to show that the ‘spooky actions’, for instantaneous quantum wave collapse, happen regardless of any considerations for time or distance i.e. The following experiment shows that quantum actions are ‘universal and instantaneous’ for each observer:
Wheeler’s Classic Delayed Choice Experiment: Excerpt: Now, for many billions of years the photon is in transit in region 3. Yet we can choose (many billions of years later) which experimental set up to employ – the single wide-focus, or the two narrowly focused instruments. We have chosen whether to know which side of the galaxy the photon passed by (by choosing whether to use the two-telescope set up or not, which are the instruments that would give us the information about which side of the galaxy the photon passed). We have delayed this choice until a time long after the particles “have passed by one side of the galaxy, or the other side of the galaxy, or both sides of the galaxy,” so to speak. Yet, it seems paradoxically that our later choice of whether to obtain this information determines which side of the galaxy the light passed, so to speak, billions of years ago. So it seems that time has nothing to do with effects of quantum mechanics. And, indeed, the original thought experiment was not based on any analysis of how particles evolve and behave over time – it was based on the mathematics. This is what the mathematics predicted for a result, and this is exactly the result obtained in the laboratory. http://www.bottomlayer.com/bottom/basic_delayed_choice.htm Genesis, Quantum Physics and Reality Excerpt: Simply put, an experiment on Earth can be made in such a way that it determines if one photon comes along either on the right or the left side or if it comes (as a wave) along both sides of the gravitational lens (of the galaxy) at the same time. However, how could the photons have known billions of years ago that someday there would be an earth with inhabitants on it, making just this experiment? ,,, This is big trouble for the multi-universe theory and for the “hidden-variables” approach. http://www.asa3.org/ASA/PSCF/2000/PSCF3-00Zoeller-Greer.html.ori
,,,Shoot, there is even a experiment that shows the preceding quantum experiments will never be overturned by another ‘future’ theory,,,
An experimental test of all theories with predictive power beyond quantum theory – May 2011 Excerpt: Hence, we can immediately refute any already considered or yet-to-be-proposed alternative model with more predictive power than this (quantum theory). http://arxiv.org/abs/1105.0133
,, and to make universal Quantum Wave collapse much more ‘personal’ I found this,,,
“It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness.” Eugene Wigner (1902 -1995) from his collection of essays “Symmetries and Reflections – Scientific Essays”; Eugene Wigner laid the foundation for the theory of symmetries in quantum mechanics, for which he received the Nobel Prize in Physics in 1963.
,,,Here is the key experiment that led Wigner to his Nobel Prize winning work on quantum symmetries,,,
Eugene Wigner Excerpt: To express this basic experience in a more direct way: the world does not have a privileged center, there is no absolute rest, preferred direction, unique origin of calendar time, even left and right seem to be rather symmetric. The interference of electrons, photons, neutrons has indicated that the state of a particle can be described by a vector possessing a certain number of components. As the observer is replaced by another observer (working elsewhere, looking at a different direction, using another clock, perhaps being left-handed), the state of the very same particle is described by another vector, obtained from the previous vector by multiplying it with a matrix. This matrix transfers from one observer to another. http://www.reak.bme.hu/Wigner_Course/WignerBio/wb1.htm
i.e. In the experiment the ‘world’ (i.e. the universe) does not have a ‘privileged center’. Yet strangely, the conscious observer does exhibit a ‘privileged center’. This is since the ‘matrix’, which determines which vector will be used to describe the particle in the experiment, is ‘observer-centric’ in its origination! Thus explaining Wigner’s dramatic statement, “It was not possible to formulate the laws (of quantum theory) in a fully consistent way without reference to consciousness.” I find it extremely interesting, and strange, that quantum mechanics tells us that instantaneous quantum wave collapse to its ‘uncertain’ 3-D state is centered on each individual observer in the universe, whereas, 4-D space-time cosmology (General Relativity) tells us each 3-D point in the universe is central to the expansion of the universe. These findings of modern science are pretty much exactly what we would expect to see if this universe were indeed created, and sustained, from a higher dimension by a omniscient, omnipotent, omnipresent, eternal Being who knows everything that is happening everywhere in the universe at the same time. These findings certainly seem to go to the very heart of the age old question asked of many parents by their children, “How can God hear everybody’s prayers at the same time?”,,, i.e. Why should the expansion of the universe, or the quantum wave collapse of the entire universe, even care that you or I, or anyone else, should exist? Only Theism offers a rational explanation as to why you or I, or anyone else, should have such undeserved significance in such a vast universe:
Psalm 33:13-15 The LORD looks from heaven; He sees all the sons of men. From the place of His dwelling He looks on all the inhabitants of the earth; He fashions their hearts individually; He considers all their works.
bornagain77
as to: 'Put simply, science assumes that anything that can interact with matter is matter.' That is blatantly false!! Only materialism assumes that is true; science couldn't care less what interacts with what, and only cares what can be shown/tested to be true against a starting hypothesis. Moreover, 'transcendent information', which is definitely NOT MATTER, is shown to have dominion over matter and energy in quantum entanglement and teleportation experiments!!! bornagain77
Put simply, science assumes that anything that can interact with matter is matter. If the interaction can be detected, one can study the attributes of whatever is interacting. The key requirement is regularity. One cannot study miracles except as exceptions to the background of regularity, and we haven't finished constructing a model of this background. Petrushka
Put as simply as I can, materialist ideologues simply assume that materialism is true, and by virtue of that assumption, they render all contrary evidence null and void. When they do not like the end result of material investigation, they immediately stop being materialist, and become ideologues instead. Upright BiPed
Put as simply as I can, ID proponents use probability to suggest the history of an object, and the assumed history of the same object as parameters in the calculation of probability. Petrushka
Evolution is learning and doesn't violate any laws or probabilities. Unless you can demonstrate that DNA sequence cousins cannot be bridged by known kinds of variation. Good luck. Petrushka
H'mm: all quiet on the thermodynamics-info theory front? KF kairosfocus
This reasoning is completely wrong and unscientific. It is the duty of those who propose an explanation to show that the explanation is both logically consistent and empirically supported. Just arguing that it is possible is not a scientific argument.
Unless, of course, the explanation involves a designer. Petrushka
LYO,
So only “entities” can “select”? You’re assuming your teleological conclusion in your argument.
As with so many terms, you can apply "selection" to NS if you define the term very, very loosely. In this case, we reason that every time differential reproduction occurs, something has been "selected." But neither someone nor something actually selected anything. The event that occurred was a bird mating, laying some eggs, and the eggs hatching, or something similar. We can call that selection, but it doesn't really mean anything. Similarly we can say that if a drop of water flows one way rather than the other, that one path was selected. But did anyone or anything really select anything? In this sense, "selection" is just an after-the-fact label applied to an event that would have occurred exactly the same if it hadn't been so labeled. How can a label applied after the fact to an event be credited with any creative ability? Even selection by an intelligent agent creates nothing. I can select a McDouble or a McRib, but how can I be credited with creating either? Even if differential sales prompt McDonald's to alter or improve a product, that improvement always originates from a distinct design process. Selection alone will never, ever introduce a new ingredient. Clearly selection is an after-the-fact assessment. How can it be a cause and also be a label applied to an effect? ScottAndrews2
lastyearon: You are imagining things that I neither think nor have ever said. In saying that NS does not really "select" I did not mean that it should be an entity with intentions. That would obviously be a conscious agent. I used the word "entity" (probably not the best one) to say that NS is not a single principle or law, or some aspect of the environment. It is rather a complex process, where replicators that replicate better can expand. I would not use the term "selection" because it suggests that there is some simple principle that selects, and that effect is usually psychologically attributed to the environment. But the replicator contributes to the final process even more than the environment. That's why I tried to describe it, more realistically, as follows: "It could be better described as a law of necessity which simply describes how better replicators will probably expand, and worse replicators will probably be eliminated." Moreover, it is important to remember that NS describes at least two different processes, formally distinct: negative NS usually eliminates variation that compromises reproductive fitness, while positive NS has as its result the expansion of variation which implies a better reproductive fitness. Finally, your last phrase, that "based on that assumption, you are concluding that evolution can't create complexity. It's circular," is really nonsense. My conclusion that the neo darwinian mechanism cannot create complexity is based on many explicit reasonings, detailed by me many times here, but certainly not on the silly assumption that you attribute to me. gpuccio
gpuccio
But the point is, NS is not really an entity, and does not really “select”. It could be better described as a law of necessity which simply describes how better replicators will probably expand, and worse replicators will probably be eliminated.
So only "entities" can "select"? You're assuming your teleological conclusion in your argument. The function that NS performs is selection, regardless of its lack of intent. You are assuming that only entities- with intentions-can truly select. And based on that assumption, you are concluding that evolution can't create complexity. It's circular. lastyearon
GD: Pardon, but you have made some pretty strong claims, including targeting named individuals on matters linked to thermodynamics and info theory. I think that creates an obligation to warrant your claims. Kindly, do so, that we can see. And, recall that one pivotal point in Sewell is that that which is thermodynamically improbable to the point of unobservability, when a system is isolated, does not become instantly probable when it is opened up, unless the opening up is in particular ways that foster such outcomes. (In short, we know of a class of entities that routinely creates complex functional organisation and associated information: intelligence. This is routinely observed, and we do not observe such FSCO/I emerging from other causes. Digitally coded FSCI, dFSCI, such as the text of this and other posts in this thread, is a capital case in point. With the whole Internet and the collection of major libraries around the world as cases in point also. Not to mention, for the organisation side, the whole world of technology.) In short, opening up a system to raw injections of energy will naturally tend to INCREASE its disorder, as the Clausius first example used to deduce the second law of thermodynamics shows -- body B at the lower temp imports energy in raw form [heat or its near-equivalent], and so increases its entropy as to overwhelm the reduction in body A, originally at a higher temperature, on transferring energy to B. The root of that of course is the way that such a transferred increment of heat increases the number of ways mass and energy at ultra-microscopic level can be arranged. The resulting strong tendency is then that systems tend to go to the clusters of states that have overwhelmingly large numbers of microstates associated with them, consistent with macroscopic constraints. I suggest you respond in light of the considerations here, here and here (given as well here on the recent more positive response to the informational view of thermodynamics). I contend that, on the infinite monkeys/needle in the haystack sort of analysis [cf here on the plausibility bound], once we are within our solar system -- our practical universe for events in general -- a functionally specific complex outcome arising by undirected chance and necessity is utterly implausible; but such is very commonly observed as produced by intelligence. And, digitally coded, functionally specific complex information is a particularly relevant case in point. Move up to 1,000 bits, and we cover the observed cosmos, on principles that ground statistical views on thermodynamics and the famous laws thereof. GEM of TKI kairosfocus
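For what it is worth, the bookkeeping behind the 500-bit threshold can be laid out in a few lines of Python. The resource figures below are the round numbers commonly quoted in these discussions (about 10^57 atoms in the solar system, about 10^17 seconds of cosmic history, and a deliberately generous rate of events per atom per second); they are assumptions for the sketch, not measurements:

from math import log10

# Rough "needle in a haystack" bookkeeping for the 500-bit threshold.
config_space = 2 ** 500                 # distinct 500-bit configurations

atoms   = 10 ** 57                      # ~atoms in the solar system (round figure)
seconds = 10 ** 17                      # ~age of the cosmos in seconds (round figure)
rate    = 10 ** 14                      # generous per-atom events/second (assumed)

searchable = atoms * seconds * rate     # upper bound on configurations sampled

print(f"config space : 10^{log10(config_space):.1f}")               # ~10^150.5
print(f"searchable   : 10^{log10(searchable):.1f}")                 # 10^88.0
print(f"fraction     : 10^{log10(searchable / config_space):.1f}")  # ~10^-62.5

On those round figures, the accessible sample is around 10^88 configurations, a vanishing fraction of the roughly 10^150 possibilities for 500 bits; whether that framing settles anything is, of course, exactly what the thread is arguing about.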
Gordon: Thank you for your thoughts. I really appreciate your discussion, and your reasonable and respectful approach. So, just a quick (I hope :) ) counter reply. First of all, about the problem of lies, and unjustified accusations. It's fine for me that you think that way, but I still maintain my position that neo darwinism (and strong AI) are unacceptable scientific lies. With that, I don't mean to be disrespectful to any specific person. The responsibilities in maintaining such a general cognitive hypnosis are many and different, and I am not interested in accusing anyone personally. But it is wholly unacceptable that scientific thought has been made slave to weak, illogical, and unsupported theories for so long. You obviously believe that ID arguments are weak, and that neo darwinism is a good explanation. I am fine with that. Indeed, I think the main reason you believe that is probably that, as you say, you are not an expert on biology or biochemistry. Maybe, if you understood the details better, you could change your mind, just out of intellectual honesty. Let's go to the details. Thank you from my heart for admitting that my reasoning is not circular. It should be obvious, once it is clarified, but my experience here, even with people "from the other side" that I really respect and like, has been different. So, thank you for that. The "argument from ignorance" part deserves a further clarification. ID is not an argument from ignorance. It is, indeed, a theory made of at least two fundamental parts: a) The positive part: a1) definition of a formal property (for instance, dFSCI), frequently observed in objects that are certainly designed by conscious agents (in this case, humans), and empirically never observed in objects that are certainly not designed by conscious agents, but are rather the output of random, necessity, or mixed systems. I strongly believe that dFSCI completely satisfies that purely empirical definition. a2) demonstration that dFSCI is hugely present in the set of objects we are discussing (biological objects, and particularly the genome), whose origin is the object of controversy. a3) inference to design by a conscious agent as the best explanation available. This is an inference by analogy, a very strong and convincing one, a positive argument from all points of view. It is also a perfectly correct scientific explanation, wholly empirical and appropriate to the problem we are trying to solve. The only reason why so-called scientists reject that explanation a priori is that they are committed by faith to materialistic reductionism, which cannot tolerate even the possibility that conscious agents may exist who could have designed biological information. b) The second part is the falsification of the currently accepted theory of neo darwinism. That is necessary too, because one of the pillars of ID reasoning is that dFSCI is observed in biological information, IOWs that the functional information there is not explained by any known chance, necessity, or mixed mechanism. As neo darwinism claims to do exactly that, it is the duty of ID to demonstrate that that theory is wrong, illogical and empirically unsupported. And that's exactly what ID does. Now I want to be very clear: ID is the best explanation because of its positive part. Design is the best explanation for biological information exactly for the point Dawkins himself makes: that biological information has all the formal properties of designed things.
ID only makes that very simple statement specific and rigorous, isolating and defining a quantifiable formal property and empirically verifying that it is a reliable marker of design, and that it is hugely present in biological information. But ID, the natural explanation for biological information, being a perfectly appropriate scientific theory, can obviously be falsified. Neo darwinism is certainly an attempt to falsify ID: that is, to show that there exists a non-design theory that can explain biological information. It's perfectly fine that neo darwinism, or any other theory, seriously tries to falsify ID, the natural explanation for biological information. The point is: neo darwinism completely fails in that attempt. The neodarwinian theory does not explain anything. As you don't like me calling it a lie, I will not insist. But at least, let me repeat here another provocative statement: it is a stupid theory! :) Again, I am not in any way trying to offend you: intelligent people can certainly believe in a stupid theory, and after all you have admitted that you don't know the biological details well :) . Finally, I appreciate your appreciation of Dembski. I think he is a great and original thinker, completely misunderstood, even if I don't always agree with (or probably understand) all of his points. I would like to know what you think of Behe, Abel, and Durston, three ID thinkers that I deeply admire. I will not make any comment about your comment on Sewell, because I really don't have the correct background for that. I would like to comment in detail on your comments about intermediates, but I think I will leave that for another post. gpuccio
gpuccio: I haven't had a chance to write up a proper response yet (hopefully tomorrow... maybe...), but I wanted to at least give you a quick reply with a few high points: - First, thanks for clarifying your usage of dFSCI; given that, I agree that you aren't falling into the circularity I described (and a good bit of my earlier response was irrelevant, because I thought it was closer to Dembski's CSI). - I do, however, think you are making an argument from ignorance. This doesn't necessarily mean that it's fallacious, it just means that it is weak (i.e. it's an assumption in the absence of evidence, not a conclusion on the basis of solid evidence). - I'm not any sort of expert on biology or biochemistry or any such, so I can't properly justify this, but IMHO the view that there are no selectable intermediates at all (so selection can be ignored in the probability calculations) and the view that all intermediates are selectable (as in Dawkins' weasel simulation) are both hopeless oversimplifications. Reality is almost certainly somewhere in between (and quite a ways off to the side, if I may stretch the metaphor). IMHO the question to ask isn't "does selection help", but "does selection help enough?" - On the question of lying: even if you aren't accusing me personally (and as a sort of amateur Darwin-defender I lie somewhere on the fringes of the group you're accusing), I think your accusation is unjustified and destructive. I agree with you that "Those who propose a theory, who defend and enforce it at the level of academic thought, have the scientific and moral duty to verify that what they say and propose is credible, is supported by facts, and is a good explanation, if possible the best", and I think that's exactly what they are doing. Again, I'm not an expert, but on the subjects I happen to have the background knowledge to give an informed opinion on (mostly information theory and thermodynamics), the case for ID is weak to nonexistent, and a disturbing amount is incompetent (I'll cite Granville Sewell as an example -- he really doesn't know what he's talking about). (If you want a counterexample, I'll cite Dembski. IMHO some of his work is quite good -- flawed, but workable, although it has gaps when you actually try to connect it to reality. I really wish he and his critics would talk to each other more, rather than past each other.) ...I guess that wasn't a very quick reply, was it? Oh, well... Gordon Davisson
Firstly, the linked article doesn't define dFSCI (the thing I mentioned not having a definition for); it defines yet another measure, Chi_500. One of the points I was making was that gpuccio was using several different definitions of information, without properly accounting for their differences; adding more similar-but-confusingly-different definitions doesn't seem helpful here. Secondly, the definition of Chi_500 is rather confusing (to me, anyway) and ambiguous. If I get a chance, I'll write up some notes about my confusion in that other posting's comments. Thirdly, in that other message you make precisely the circular argument I described above: treating Durston's "fit" values as -log2(p), which only works if you start by assuming selection has no effect. Gordon Davisson
F/N: It must be noted that protein chains are based on a chaining reaction that depends only on the fact that the amino acid has a structure where a COOH and an NH2 group are tied to a C-atom, with a side chain. The chaining is based on the acid-amine reaction, and is essentially independent of the side chain. Protein activity is based on the particular sequence imposed by chaining in the ribosome, not by constraints on what COOH may bond with what NH2. The activity and functionality arise because certain AA chains in aqueous media possess folding properties and expose reaction structures based on the side chains, etc. These are properties, essentially, of the whole chain, and/or of placement of particular AAs in it. This is informationally controlled; indeed, even the particular AA loaded on the standard CCA coupler end of the tRNA is based on recognising the specific tRNA and using a specific loading enzyme (assembled in an earlier cycle of the system! chickens and eggs and looping back where product A depends on prior product A . . . ). In turn, D/RNA is based on sugar-phosphate chaining, and is again independent of the side-chains. The chains are string data structures, and are highly contingent. In the case of D/RNA, we know of codes imposed on the sequences so that through algorithmic processes, protein chains are assembled. Where it is known that fold domains are like 1 in 10^70 of AA chain space. We see here highly contingent, discrete state [= digital] information, and we see here symbols and codes. Which is an essentially mental construct, i.e. language [machine code, to be specific]. Thus, digitally coded, functionally specific, complex information is an apt description of what goes on in the living cell. Which turns out to be an instance of essentially the same thing that goes on when we code a program into machine language in a computer to make it carry out an algorithmic process. I suggest that GD take a look here, pausing to view the video. Here will also help (cf how I link Durston and Dembski), and I think I will add the new detailed video there. GP is of course right to highlight that the claim that we are dealing with essentially vast connected continents of functional configs has to be shown, not asserted, assumed or implied. So far, just starting with the issue of proteins, properly folding domains are like 1 in 10^70. For AA chains, recall that 3 of 64 codons are STOPs, so chains assembled at random are maximally likely to be non-functional. We have excellent reason to infer to molecular islands of function in the living cell. GEM of TKI kairosfocus
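Taking the 1-in-10^70 figure for folding domains at face value, the sketch below only converts it into the bit units used elsewhere in this thread; it does not establish the figure itself:

from math import log2

# Convert an asserted rarity of 1 in 10^70 into bits of functional information.
rarity = 10.0 ** -70
print(f"{-log2(rarity):.0f} bits")   # about 233 bits for one fold domain, on this figure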
F/N: Lying by continued selectively hyperskeptical misrepresentation in the teeth of duty of care to the truth (courtesy Wiki, testifying against known interest):
Those who indulge in the more stubbornly correction-resistant forms of . . . selective hyperskepticism should therefore reflect soberly, slowly and seriously on this Wikipedia summary definition of lying (acc: Jul 23, 2011):
To lie is to state something with disregard to the truth with the intention that people will accept the statement as truth . . . . even a true statement can be used to deceive. In this situation, it is the intent of being overall untruthful rather than the truthfulness of any individual statement that is considered the lie . . . . One can state part of the truth out of context, knowing that without complete information, it gives a false impression. Likewise, one can actually state accurate facts, yet deceive with them . . . . One lies by omission when omitting an important fact, deliberately leaving another person with a misconception. Lying by omission includes failures to correct pre-existing misconceptions. Also known as a continuing misrepresentation . . . . A misleading statement is one where there is no outright lie, but still retains the purpose of getting someone to believe in an untruth . . .
And of course the usual fever swamps will try a turnabout tactic, so, let us make it plain that to point out the -- warranted -- objective, credible (albeit plainly utterly unwelcome and perceived as offensive) truth is not to lie. KF kairosfocus
Gordon: Let's go to your specific comments: Mixing definitions of information — making part of one's argument with one definition, then switching to another definition for a different part of the argument — is a disturbingly common problem in these discussions. You seem to be doing it below (switching between CSI and FSC, and maybe also dFSCI if it's different from both): In my personal discussions, I always use my explicit definition of dFSCI. dFSCI is a subset of CSI where the specification is purely functional, and the information is in digital form. As the information in the genome is in that form, I believe it is perfectly appropriate to relate to dFSCI in discussing biological information, especially the protein coding genes. Assuming you mean specified complexity as defined by Dembski in "Specification: The Pattern That Signifies Intelligence" (2005), No. I don't refer to that particular paper. In the concept of dFSCI, any digital information in an object can be considered specified if a conscious observer can explicitly define a function for that information. But the following measures of complexity will be done for that definition, and will be valid only for that definition. Let's take an example. For an enzyme, I can define a function as the ability to accelerate some specific biochemical reaction by at least x, in specific lab conditions. That is an explicit, objectively measurable function. For many proteins, indeed, the specific biochemical function is well known, and can be easily found in protein databases. Then, the concept of dFSCI is: how much information (in bits) is really necessary to obtain that function? The Durston method is a powerful indirect method to approximate that value. In general, the value is defined as the ratio between the search space and the target (functional) space of sequences. In order to claim that a particular gene (or other biological element) exhibits CSI, you must first show that there is no sufficiently selectable path to it to raise its probability. No. The dFSCI of an object is a measure of its functional complexity, expressed as the probability of getting that information in a purely random system. For instance, for a protein family, like in Durston's paper, that probability is the probability of getting a functional sequence with that function through a random search or a random walk starting from an unrelated state (which is more or less the same). If some necessity algorithm is shown that can bypass the purely random walk, then it must be taken into consideration, and the computation of dFSCI can still be applied to the random variation before and after the necessity step. But the necessity mechanism must be explicit, and demonstrated. Just hoping that such a mechanism can exist will not do. That is not science. Therefore, we will take into serious consideration in the computation of dFSCI any explicit, demonstrated necessity mechanism that can intervene in the generation of that functional complexity. In the case of the appearance of basic protein domains, there is none. This number only corresponds to probability under the hypothesis that all sequences are equally likely. I don't see any way to use this to argue against the effectiveness of selection without falling into the same circularity I described above. I have already clarified, I hope, that I am not using the concept of dFSCI to argue against the effectiveness of selection. I use it only to argue against the effectiveness of RV.
I argue against the effectiveness of NS by analyzing the formal properties of NS and the empirical evidence about its powers. So, there is no circularity in my reasoning. But have any of them been shown not to have any selectable intermediates? If not, this is just an argument from ignorance. And since the space of possible intermediates is so huge, not having found any is hardly a strong argument for their nonexistence. This reasoning is completely wrong and unscientific. It is the duty of those who propose an explanation to show that the explanation is both logically consistent and empirically supported. Just arguing that it is possible is not a scientific argument. The existence of simple intermediates to complex protein domains is neither required by logical or formal properties of the proteins themselves, nor supported by any empirical evidence. Therefore, it is a "just so story" and nothing else. One can believe in that possibility out of faith or naivety, but there is no scientific reason at all to take it seriously. gpuccio
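As a purely arithmetical illustration of the search-space-to-target-space ratio described above, here is a short Python sketch; the protein length and the 1-in-10^40 functional fraction are invented numbers for the example, not measured values:

from math import log2

# dFSCI as defined in the comment above: -log2(target space / search space),
# i.e. the bits needed to specify a functional sequence by pure chance.

L = 150                      # hypothetical protein length (residues)
search_space = 20 ** L       # all amino-acid sequences of that length

# Invented figure for the example: suppose 1 in 10^40 sequences of this
# length performs the defined function at the required level.
target_space = search_space // (10 ** 40)

dFSCI_bits = -log2(target_space / search_space)
print(f"{dFSCI_bits:.0f} bits")   # ~133 bits for this toy ratio

On this bookkeeping, a demonstrated necessity mechanism would enter as a correction to the probability, with dFSCI recomputed for the random steps before and after it, exactly as the comment above describes.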
allanius: thank you :) gpuccio
Gordon: I am not saying that you are lying. You are entitled to your personal opinions, obviously, mistaken or not. And you are certainly sincere in expressing it. But you are not the scientific creator, or defender, of neodarwinism. Those who propose a theory, who defend and enforce it at the level of academic thought, have the scientific and moral duty to verify that what they say and propose is credible, is supported by facts, and is a good explanation, if possible the best. If they don't do that, and if they go on defending their theories in spite of all evidence, and if they do all they can to discredit the important and meaningful objections to their theory, then, IMO, they are lying at the scientific level. gpuccio
GD: if you want definitions, cf here for a start. And, digitally coded functionally specific, complex information is in the end a description of a very commonly observed phenomenon, e.g. the ASCII text of posts in this thread. Proceed therefrom by family resemblance, in case you have a problem with mathematical specifications. KF kairosfocus
F/N: The parts must not only function, but they must do something that we have not even begun to master kinematically: they must self-assemble. This is an ADDITIONALITY issue, i.e. one that INCREASES the complexity and specificity of the design, and the search-space challenge to get to it in the midst of a sea of possible configs. In short, you just inadvertently underscored another reason to infer to design. KF kairosfocus
F/N: The engine was purpose-designed and built, for power and lightness, and the propellers were likewise purpose-designed and built. The "existing kite" is a strawman misrepresentation of the series of gliders developed by the Wright Bros. to create an effective airframe. (In short, the challenge of irreducible complexity and the need for a cluster of well matched, integratedly functional parts is being dodged. Cf here on the C1 - 5 factors that must be properly addressed if the exaptation talking point is to be more than manipulative rhetoric. Unsurprisingly, C2 - 5 as a rule do not even get mentioned, and C1 is used in a highly misleading way.) Business as usual . . . KF kairosfocus
lastyearon: "All of human invention is so clearly evolutionary. The materials, the ideas, the problems the invention solved, the very intelligence of the inventor." This is complete nonsense. Starts with an absurd definition of "evolutionary", slips in a list of four things that are not demonstrated as being related either exclusively or even particularly to evolution, and ends with a circular proclamation of faith that evolution somehow (please don't anyone ask for details) generated the intelligence of the inventor. Eric Anderson
gpuccio:
So, what neo darwinists want us to believe is that simple results, each of which naturally selectable, are steps to complex results. That is so obviously a lie that there should be no need even to discuss it, but anyway, given that most people accept easily such nonsense, I would like to remind here that there is no support to that assumption, neither logical nor empirical. In all forms of complex functions we can easily verify that the function is never the result of the simple addition of simple functional steps. That is not true. That is a lie.
No, it is not a lie. In the first place, it's an oversimplification: I don't know anyone who thinks every simple step is selectable. More importantly, it may be wrong, but a lie is an intentional falsehood; with appropriate qualification ("it's more complicated than that") I honestly think that the known mechanisms of evolution are sufficient to explain observed biological complexity. I may be mistaken, but I am not lying. ScottAndrews2:
This is the part that astonishes me to no end. If someone is willing to believe that it is possible to routinely progress from a simple form to complex, innovative function in single increments of change, despite the absence of a single example, what could ever convince them otherwise? Math? Reason?
I'd dispute the "despite the absence of a single example" part (the success of genetic algorithms and the camera shape and lens of eyes come to mind), but to answer your direct question: reason and evidence are good (math can help, but cannot be the whole argument, since it is entirely abstract: in order to draw conclusions from math, you must know how your mathematical model corresponds to reality). I can hardly claim to be familiar with all of the arguments for ID, but the ones I have dug into (and that I have the background to understand) don't successfully make it all the way to their claimed conclusions. Since gpuccio brings up information, I'll use that as an example:
It remains absolutely true that NS cannot create any new information. It can only expand existing information, and make it more likely that RV can happen in a population where the previous selectable variation has been expanded. The generation of information, in the neo darwinian algorithm, is completely the result of the RV part. NS, if and when it happens, acts as a tool to reduce some probability barriers.
This depends critically on how one defines and measures "information"; there is a wide variety of definitions available, and what you've said is true for only some of them. For some definitions, selection cannot add information to the gene pool, but mutation can. For some definitions, mutation cannot add information, but selection can. There are even definitions by which neither mutation nor selection can add information (although I don't know of any of these for which that kind of information is known to exist in any organism's gene pool). Mixing definitions of information -- making part of one's argument with one definition, then switching to another definition for a different part of the argument -- is a disturbingly common problem in these discussions. You seem to be doing it below (switching between CSI and FSC, and maybe also dFSCI if it's different from both):
Anyway, briefly: complex functional information is functional information whose specified complexity (measured in bits) is great enough to make the emergence of that information in a purely random system completely unrealistic.
Assuming you mean specified complexity as defined by Dembski in "Specification: The Pattern That Signifies Intelligence" (2005), CSI is defined in terms of the probability of an event meeting a particular specification (along with the descriptive complexity of the specification). Highly probable specifications have low (negative) CSI; only very low-probability specifications have high (i.e. positive) CSI. The critical thing to realize is that in order to compute the CSI of a particular specification, you must first be able to compute the probability under the relevant hypothesis. In order to claim that a particular gene (or other biological element) exhibits CSI, you must first show that there is no sufficiently selectable path to it to raise its probability. If someone claims that CSI demonstrates that there is no sufficiently selectable path, they are implicitly assuming their conclusion, and therefore engaging in circular reasoning.
Of course, if some object exhibiting dFSCI could be built by a series of simple steps, each of which is in the range of what RV can do, and each of which is naturally selectable, those probability barriers could be overcome.
Assuming dFSCI is a subset of CSI (I see the term used a lot, but not properly defined), this is a contradiction: if the probability barriers could be overcome, the object would by definition not exhibit CSI (or dFSCI).
The best examples are basic protein domains. Most of them, maybe almost all, exhibit dFSCI (see the Durston paper for that).
I assume you're talking about "Measuring the functional sequence complexity of proteins" by Durston, Chiu, Abel, and Trevors? I haven't read it in detail, but it's immediately clear that the measure of FSC defined there does not equate to CSI as defined by Dembski. For one thing, their measure does not account for replicational or specificational resources (per Dembski). More importantly, their measure is based on the logarithm of the number of gene sequences that perform a particular function; this number only corresponds to probability under the hypothesis that all sequences are equally likely. I don't see any way to use this to argue against the effectiveness of selection without falling into the same circularity I described above.
And not a single one of them [basic protein domains] has ever been deconstructed into simple, naturally selectable steps.
But have any of them been shown not to have any selectable intermediates? If not, this is just an argument from ignorance. And since the space of possible intermediates is so huge, not having found any is hardly a strong argument for their nonexistence. Gordon Davisson
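For readers following the Durston point, here is a toy version of the per-site calculation behind "fits" (functional bits). The four-sequence alignment is invented for the example, and, as noted above, reading the total as -log2(p) assumes all sequences are a priori equiprobable:

from math import log2
from collections import Counter

# Toy Durston-style "fits": per aligned site, take the drop in Shannon
# entropy from the null state (all 20 amino acids equiprobable, log2(20)
# bits) to the distribution observed in a family of functional sequences,
# then sum over sites.

alignment = ["MKVL", "MKIL", "MRVL", "MKVI"]   # invented mini-family

def fits(alignment):
    total, H_null = 0.0, log2(20)
    for site in zip(*alignment):                # columns of the alignment
        counts = Counter(site)
        n = len(site)
        H_func = -sum((c / n) * log2(c / n) for c in counts.values())
        total += H_null - H_func                # per-site functional bits
    return total

print(f"{fits(alignment):.1f} fits")   # about 14.9 for this toy alignment

Real applications use large alignments of a protein family rather than four short strings, but the principle, a summed entropy drop per aligned site, is the same.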
Thanks, gpuccio. I always enjoy your comments and look for them first. allanius
lastyearon you state:
I suggest that IDers stop using the analogy to human invention to argue against evolution.
Well since the cell is chock full of machines and programming language that far, far, surpasses what human intelligence, and invention, has been known to produce thus far,,, I suggest that evolutionists stop using words that imply intentionality, functionality, strategy, and design in biology when trying to understand what we are discovering in biology!!!,,,
Life, Purpose, Mind: Where the Machine Metaphor Fails - Ann Gauger - June 2011 Excerpt: I'm a working biologist, on bacterial regulation (transcription and translation and protein stability) through signalling molecules... I can confirm the following points as realities: we lack adequate conceptual categories for what we are seeing in the biological world; with many additional genomes sequenced annually, we have much more data than we know what to do with (and making sense of it has become the current challenge); cells are staggeringly chock full of sophisticated technologies, which are exquisitely integrated; life is not dominated by a single technology, but rather a composite of many; and yet life is more than the sum of its parts; in our work, we biologists use words that imply intentionality, functionality, strategy, and design in biology--we simply cannot avoid them. Furthermore, I suggest that to maintain that all of biology is solely a product of selection and genetic decay and time requires a metaphysical conviction that isn't troubled by the evidence. Alternatively, it could be the view of someone who is unfamiliar with the evidence, for one reason or another. But for those who will consider the evidence that is so obvious throughout biology, I suggest it's high time we moved on. - Matthew http://www.evolutionnews.org/2011/06/life_purpose_mind_where_the_ma046991.html#comment-8858161

What Do Organisms Mean? Stephen L. Talbott - Winter 2011 Excerpt: whatever their belief in these matters, biologists today — and molecular biologists in particular — routinely and unavoidably describe the organism in terms that go far beyond the language of physics and chemistry. Words like "stimulus," "response," "signal," "adapt," "inherit," and "communicate," in their biological sense, would never be applied to the strictly physical and chemical processes in a corpse or other inanimate object. But they are always employed in attempts to understand the living organism. The prevalent descriptions portray the whole organism as an active unity, with powers of regulation and coordination intelligently directed toward the achievement of the organism's own ends. Further, we noted that such descriptions, rooted as they are in the observable character of the organism, show no sign of being reducible to less living terms or to the language of mechanism. http://www.thenewatlantis.com/publications/what-do-organisms-mean
I don't know, lastyearon, but it seems readily apparent that ID proponents are not forcing anyone to use language that was previously applied only to human 'inventiveness'. Perhaps it is neo-Darwinists who need to change the way they view the cell in the first place, since the 'language' is certainly not going to change just to accommodate atheistic views. bornagain77
lastyearon: That "simple truth" has been discussed here many times, and with a lot of rigor. What part do you want to discuss again? I am always willing to do that.

About the "intelligence" in NS: what I meant is simply that NS is not a pure random system, like the RV part of the algorithm, but rather a necessity algorithm. The "quasi-intelligent" was more or less ironic, because darwinists always try to affirm that the neo darwinian algorithm is not really random, because it has NS in it, and in their opinion all the apparent design in biological beings is explained by the NS part (at least, that is true for classical neo darwinists). But the point is, NS is not really an entity, and does not really "select". It could be better described as a law of necessity which simply describes how better replicators will probably expand, and worse replicators will probably be eliminated. Not a great principle, after all.

It remains absolutely true that NS cannot create any new information. It can only expand existing information, and make it more likely that RV can happen in a population where the previous selectable variation has been expanded. The generation of information, in the neo darwinian algorithm, is completely the result of the RV part. NS, if and when it happens, acts as a tool to reduce some probability barriers.

The mechanism of RV and NS has no power to generate complex functional information. That is the main point in ID. If you know ID, you should understand why. Anyway, briefly: complex functional information is functional information whose specified complexity (measured in bits) is great enough to make the emergence of that information in a purely random system completely unrealistic. Therefore, complex specified information, and in particular its digital functionally specified form (dFSCI), cannot empirically emerge as a result of RV in a physical random system.

Of course, if some object exhibiting dFSCI could be built by a series of simple steps, each of which is in the range of what RV can do, and each of which is naturally selectable, those probability barriers could be overcome. But that is not true. It is not true for human software; it is not true for biological information. The best examples are basic protein domains. Most of them, maybe almost all, exhibit dFSCI (see the Durston paper for that). And not a single one of them has ever been deconstructed into simple, naturally selectable steps. gpuccio
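The division of labor described above is easy to make concrete. In a minimal mutation-selection loop (a toy sketch with hypothetical names, not a model of any real population), new sequence variants enter only in the mutate step, while select is a purely deterministic filter on variants that already exist. Whether such a loop can reach a given complex function is exactly the disputed question: it succeeds only if the fitness function rewards a connected series of small steps, which is what gpuccio denies exists for protein domains and what his critics say has not been ruled out.

```python
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acids, one letter each

def mutate(seq, rate=0.01):
    # RV: the only step in the loop that produces new variants
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def select(population, fitness, keep):
    # NS: a law-like filter; it creates nothing, it only decides which
    # already-existing variants are amplified into the next generation
    return sorted(population, key=fitness, reverse=True)[:keep]

def evolve(seed, fitness, generations=500, pop_size=100, keep=10):
    population = [seed] * pop_size
    for _ in range(generations):
        survivors = select(population, fitness, keep)
        population = [mutate(random.choice(survivors))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

# With a fitness function that rewards every single-residue step toward a
# target, the loop closes the gap quickly; with a fitness function that is
# flat until the full target appears, it is just a blind random search.
target = "MKVLAT"
best = evolve("A" * len(target),
              fitness=lambda s: sum(a == b for a, b in zip(s, target)))
```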
LYO, It is not ID which must concern itself with evolutionary inventiveness, it is the ideology of materialism, and its drunken cousin scientism. The package of human qualities (including the capacity of immaterial-symbol-maker, which has led to all recorded human knowledge) is claimed not to have appeared on planet earth until the last few moments of geologic time. Yet those capacities are evident and observed to be the very thing that is causing life to exist at all. How is it that an immaterial quality comes to be physically realized in a material object, like a codon of DNA, for example? Upright BiPed
Sorry lastyearon, but what you said here does not make sense. Human invention is ID, and basically what we do is copy the ID of creation. There really is no such thing as 'evolution'. That is only an idea, without evidence. It does not happen in the real world. MrDunsapy
I suggest that IDers stop using the analogy to human invention to argue against evolution. All of human invention is so clearly evolutionary: the materials, the ideas, the problems the invention solved, the very intelligence of the inventor. lastyearon
The neo darwinian algorithm can do nothing like that. It is blind, and its “quasi-intelligent” part, NS, has extremely limited properties: it can only expand existing functions, which have to arise via RV. No really complex result can be achieved that way. That is the simple truth.
Ah, so does natural selection have a little intelligence in your opinion? If so, how do you know that it's not enough to create a "really complex result"? How does one go about determining whether an object is "really complex" anyway? May I suggest that this "simple truth" needs a little more rigor before it can be verified -- I mean, by people who don't already know how simply true it is. lastyearon
dmullenix, since you think all inventions are purely gradual, this should interest you:
Electrical genius Nikola Tesla was born in Serbia in 1856... his father was a clergyman... While walking in Budapest Park, Hungary, Nikola Tesla had seen a vision of a functioning alternating current (AC) electric induction motor. This was one of the most revolutionary inventions in the entire history of the world. http://www.reformation.org/nikola-tesla.html
Moreover, it should interest you that the Wright Brothers were sons of a clergyman as well. In fact, the modern science that you put so much faith in (actually you put your faith in 'materialism') was founded by Christians:
The Christian Founders Of Science - Henry F. Schaefer III - http://vimeo.com/16523153
Epistemology - Why Should The Human Mind Even Comprehend Reality? - Stephen Meyer - http://vimeo.com/32145998
bornagain77
The 'planning and forethought' came in the invention of the telescope. Or are you suggesting that what the children noticed doesn't count as knowledge, or that 'planning and forethought' cannot emerge as a result of that knowledge? Upright BiPed
Eric: "Yeah, you're right. Probably no planning or forethought." One story of the discovery of the telescope is that Hans Lippershey saw children playing with eyeglass lenses and heard them remark that looking through two lenses made a steeple seem much closer. No planning or forethought there. dmullenix
...and without a way to store and transfer information (semiosis), natural processes can do nothing at all. That capacity requires two discrete sets of material objects, each harbouring an immaterial property (properties which are coordinated to one another). There are absolutely no exceptions to this observation anywhere in existent knowledge. In fact, no one can even come up with a logical exception and offer it as a counter-example. And as if the practice of Science had a rich sense of humor, it turns out that this very system of information transfer (the very heart of it all) is the most prolific form of irreducible complexity in the known universe. It's logically undeniable. Merry Christmas, Michael Behe. Upright BiPed
That is so obviously a lie that there should be no need even to discuss it
This is the part that astonishes me to no end. If someone is willing to believe that it is possible to routinely progress from a simple form to complex, innovative function in single increments of change, despite the absence of a single example, what could ever convince them otherwise? Math? Reason? Some will even allow for the possibility of a designing intelligence behind some initial form of life, and then, even having allowed for that, insist on explanations for the rest that assume its nonexistence. ScottAndrews2
Gordon: I certainly agree with you about the first part of your post. There is no doubt that human design is progressive, and that it relies heavily on previous design. I will add that the same thing is absolutely obvious for biological design. There, too, previous solutions are often reused, or improved, or added to a new context.

I have to completely disagree, however, with the second part of your post. You say: IDists claim the critical element to generating highly complex functional designs is intelligence. I claim the critical element is the ability to pass on and elaborate on past designs. Intelligence (especially the ability to communicate ideas) is one way to achieve this; I claim that genetics is another way to achieve this, and that therefore biological evolution has the critical element for creating designs of arbitrarily high complexity.

That is simply not true. The ability to pass on and elaborate on past design is certainly an important factor. But the point is, it's intelligence that creates previous designs, and it's intelligence that elaborates on them. Intelligence has two fundamental properties that are completely absent in non designed, non conscious systems: cognition and intent. Cognition and intent allow the designer to "pro-ject", to visualize a solution on the basis of what is already available and on the basis of what is desired. Nothing like that is possible in non designed systems.

It's not that intelligence "accelerates" the emergence of complex functions. The simple truth is that complex functions are simply impossible without intelligence. Even in progressive design, intelligence has to "recognize" the existing functions, has to "desire" and "visualize" their improvement, their modifications, or simply the emergence of new functions.

The neo darwinian algorithm can do nothing like that. It is blind, and its "quasi-intelligent" part, NS, has extremely limited properties: it can only expand existing functions, which have to arise via RV. No really complex result can be achieved that way. That is the simple truth. And if no complex result can be achieved, no complex result can be passed on, least of all elaborated on.

So, what neo darwinists want us to believe is that simple results, each of which naturally selectable, are steps to complex results. That is so obviously a lie that there should be no need even to discuss it, but anyway, given that most people accept such nonsense easily, I would like to remind readers here that there is no support for that assumption, neither logical nor empirical. In all forms of complex functions we can easily verify that the function is never the result of the simple addition of simple functional steps. That is not true. That is a lie. gpuccio
You are forgetting a key difference between biology and human-constructed devices. Biological organisms grow - they construct themselves - and this means the idea of different parts having to be made precisely so they will fit together is a bit meaningless. The parts all grow at once, all together. GCUGreyArea
I'm with Petrushka on this one. Humans don't just design complex, functional things out of nothing; they create them by elaborating on previous designs (which came from elaborations of previous designs, etc). Humans spent hundreds of thousands of years making sharp rocks. As far as we know, they were about as smart as modern humans, so that's not what kept them from making complex designs. I claim what was missing was engineering culture -- actually, two related things: first, the knowledge of how (& why) to design things, and second, an ever-growing library of past designs to draw from.

Why did the Wright Brothers succeed where Leonardo da Vinci failed? I don't think it's because they were smarter (and that's not an insult to the Wrights), I think it's because they had more earlier designs and previous research to draw from. They didn't have to invent the internal combustion engine, or the propeller, or wings. AIUI they did improve somewhat on the efficiency of previous wing and propeller designs, but their biggest innovations were in controls: wing warping and 3-axis controls. Here's a small excerpt from the Wikipedia article on the Wright brothers:
The Wrights based the design of their first full-size glider (as well as the 1899 kite) on the work of their recent predecessors, chiefly the Chanute-Herring biplane hang glider ("double-decker", as the Wrights called it), which flew well in the 1896 experiments near Chicago; and aeronautical data on lift that Lilienthal had published. The Wrights designed the wings with camber, a curvature of the top surface. The brothers did not discover this principle, but took advantage of it. The better lift of a cambered surface compared to a flat one was first discussed scientifically by Sir George Cayley. Lilienthal, whose work the Wrights carefully studied, used cambered wings in his gliders, proving in flight the advantage over flat surfaces. The wooden uprights between the wings of the Wright glider were braced by wires in their own adaptation of Chanute's modified "Pratt truss", a bridge-building design he used in his 1896 glider.
Note that while much of the previous work the Wrights drew on was specific to flight, quite a lot originated for other purposes: internal combustion engines were developed to power land- and sea-based vehicles and machines; the Pratt truss was for bridges; the cloth they used for the wings was for ... well, nearly everything; etc. Today's airplanes are far more complex than anything the Wrights came up with for the same reason: today's aeronautics engineers have a far bigger library of ideas, designs, research, etc to draw on. And their designs will provide the basis for even more complex designs in the future.

IDists claim the critical element to generating highly complex functional designs is intelligence. I claim the critical element is the ability to pass on and elaborate on past designs. Intelligence (especially the ability to communicate ideas) is one way to achieve this; I claim that genetics is another way to achieve this, and that therefore biological evolution has the critical element for creating designs of arbitrarily high complexity.

Mind you, intelligence is a much more efficient way to increase complexity than RM+NS -- foresight, planning, understanding, etc may not be absolutely necessary, but they speed things up hugely. In just 61 years, aeronautical engineering went from the Wright Flyer to the SR-71 Blackbird; if you want to see similar increases in complexity from evolution (and no, I don't know exactly how to measure complexity), expect to wait hundreds of millions of years. As long as evolution can add complexity faster than it loses it, I don't see an upper limit to the complexity evolution can produce.

(I should probably note that evolution doesn't add or lose complexity at anything like a constant rate [even if we had a way to define that rate]. There are certainly situations where it loses complexity rapidly. IDists generally claim it never adds complexity, but I think they're wrong about that. There's an active controversy in evolutionary biology over whether evolution has an overall trend in complexity, or whether it's essentially random whether it goes up or down, and the increase over the history of life is an artifact of having started from minimal complexity -- i.e. there was nowhere to go but up.) Gordon Davisson
There is no evidence that the available parts were used for other purposes. I am simply responding to what you wrote. There are many things wrong with picking out bits that are beside the point on both sides of the argument. Mytheos
The article referred to available parts formerly used for other purposes. I simply responded to what was written. There are many things wrong with such analogies on both sides of the argument. Petrushka
Petrushka: "The Wright Brothers attached an existing engine to a kite and a bicycle frame." Congratulations, Petrushka. You've given us about the same level of detail (and uselessness) that we get from virtually all evolutionary stories. Yeah, you're right. Probably no planning or forethought. Probably no measurements to account for velocity, weight, lift, or anything else. Probably no attempt to design the overall craft with the parts in the right place, with the right spacing, in the right location. Probably just walked outside in the yard one day, bolted an "engine to a kite and a bike frame" and took off. Yeah, that's it. What a joke. Eric Anderson
Off the top of my head, telescopes were made from parts lying around due to their use in eyeglasses. The Wright Brothers attached an existing engine to a kite and a bicycle frame. Petrushka
