Uncommon Descent Serving The Intelligent Design Community

East of Durham: The Incredible Story of Human Evolution


Imagine if Galileo had built his telescope from parts that had been around for centuries, or if the Wright Brothers had built their airplane from parts that were just lying around. As silly as that sounds, this is precisely what evolutionists must conclude about how evolution works. Biology abounds with complexities which even evolutionists admit could not have evolved in a straightforward way. Instead, evolutionists must conclude that the various parts and components that comprise biology’s complex structures had already evolved for some other purpose. Then, as luck would have it, those parts just happened to fit together to form a fantastic, new, incredible design. And this mythical process, which evolutionists credulously refer to as preadaptation, must have occurred over and over and over throughout evolutionary history. Some guys have all the luck.

Comments
Where do you draw the line of consciousness? Are apes conscious? Dogs? Cats? Rats? Birds? Insects? Are acephalic humans conscious? What's your objective criterion? There are already manufactured objects that are too complex to be designed by humans. Computer circuit boards are an example. This seems to be a trend. I would be willing to bet that within 50 years, most products will be designed by humans only in the sense that humans will comprise the focus groups and consumer preference panels.
Petrushka
December 12, 2011, 03:38 AM PDT
englishmaninistanbul: For the last time. The words "design" and "designer" have no meaning outside a conscious agent. A designer is a conscious agent that outputs consciously represented forms to a material system. Design is the process of doing that. That describes perfectly drawing, painting, sculpture, architecture, writing, programming, and so on, exactly those activities for which humans have created the word "designer". Here is the Wikipedia definition for "design": "Design as a noun informally refers to a plan or convention for the construction of an object or a system (as in architectural blueprints, engineering drawing, business process, circuit diagrams and sewing patterns) while “to design” (verb) refers to making this plan.[1] No generally-accepted definition of “design” exists,[2] and the term has different connotations in different fields (see design disciplines below). However, one can also design by directly constructing an object (as in pottery, engineering, management, cowboy coding and graphic design). More formally, design has been defined as follows. (noun) a specification of an object, manifested by an agent, intended to accomplish goals, in a particular environment, using a set of primitive components, satisfying a set of requirements, subject to constraints; (verb, transitive) to create a design, in an environment (where the designer operates)[3] Another definition for design is a roadmap or a strategic approach for someone to achieve a unique expectation. It defines the specifications, plans, parameters, costs, activities, processes and how and what to do within legal, political, social, environmental, safety and economic constraints in achieving that objective.[4] The person designing is called a designer, which is also a term used for people who work professionally in one of the various design areas, usually also specifying which area is being dealt with (such as a fashion designer, concept designer or web designer).
A designer’s sequence of activities is called a design process.[5]" Is that clear? It's as simple as that. There can be no definition of designer that does not imply consciousness, because the meaning of the word designer is "a conscious agent that gives form to things". Still, you insist: I still think that a definition that does without consciousness might be useful. It's just the opposite. Such a definition would be simply false, manipulative and harmful. Like compatibilists trying to define free will without free will. Like AI reductionists trying to define consciousness without consciousness. All these approaches are simply intellectually wrong. A word means what it means. Redefining it so that it seems to mean some other thing, and yet still seems to retain its meaning, is simply cheating. So, I don't follow you. If you want to do that, do as you like, express your final results clearly, and then I will comment on them. You say: Gpuccio has done a very good job of defining a designer based on “consciousness”. Well, it was not difficult indeed. That's what the word means. Just looking at a dictionary, you could have done the same. However he notes that the ID movement does not seem to have come to anything definite yet on this question. That's only your interpretation. I have said that other IDists speak of design without explicitly making the connection to consciousness, but that speaking of design implies that connection. And I have also stated very clearly that what you call "the ID movement" is indeed a field of scientific reflection where different approaches are present. However, as far as I know, no relevant IDist has ever said that design and designer mean anything different from what they mean. That's all, I hope. Now, either you bring new interesting arguments to the discussion, or I will consider it closed for me. I don't think I have anything else useful to say on this point.
gpuccio
December 12, 2011, 01:47 AM PDT
One of the oldest chestnuts is "ID tells us nothing about the designer", and while it is true that we don't need to identify the specific designer for any given designed object, we do need a clear, empirically-founded, bare minimum definition of what does and does not constitute a "designer", otherwise you only have half a theory. Gpuccio has done a very good job of defining a designer based on "consciousness", however he notes that the ID movement does not seem to have come to anything definite yet on this question. I still think that a definition that does without consciousness might be useful. Whether that is right or not, I do think that the whole question deserves to be made clearer. I'd be very interested to hear your comments on 26.5.1.1.6, by the way. A small footnote: I am a translator by profession, so my working life revolves around the definitions of words. Maybe that would explain why I am so preoccupied with them.
englishmaninistanbul
December 12, 2011, 01:04 AM PDT
I'm good at splitting hairs, and I enjoy the process of refining words to capture a precise meaning. But I get the impression that much of this is over my head. I don't mean that in any bad, anti-intellectual way. And yet I always suspect that if someone cannot recognize design in simpler terms, such as when observing the transfer of DNA information or the metamorphosis of butterflies, then more precise definitions of "consciousness" or "designer" aren't going to do much good. These things themselves seem intended to make the case for design, and do so eloquently. Calling attention to that evidence when one has overlooked it is often beneficial. But if one can see it and reject it, I don't know what further elaboration accomplishes. Not to go off on a tangent, but I'm always astounded at how the details of the natural world are revealed to us. How much knowledge and technology did we have to accumulate in order to perceive the greater knowledge and technology underlying life? How many thousands of years did it take us to comprehend the tiny points of light in the sky, look beyond them, and realize how vast the observable universe is? If our eyes are open we never cease to be amazed by the incomprehensible wisdom, intelligence, and power behind creation, because the more we advance the more we discover to humble us. Every advance we make gives us a glimpse into something even greater, and gives us reason to feel awe. And for what was all this developed? To process data faster or to accelerate some particle? Apparently just so that we could enjoy our lives - family, friends, food, fun, and work, and even share in the same pleasurable activity of using our own intelligence to design, create, improve, and do good for others and ourselves. And so that we could appreciate the gift. All of this was apparent before the electron microscope or the Hubble telescope. Now we can see the same things with better focus and in more detail.
It might as well be written across the sky.
ScottAndrews2
December 11, 2011, 08:55 AM PDT
SA: Maybe these are word games, but I have a feeling they need to be played. Thank you for your input. gpuccio: FAN-TASTIC! I love detail. True, I am looking for a concise way of defining a designer, but you can't do reductio ad absurdum without the details!
In my view, the empirical definition of consciousness is the only way to a completely empirical definition of design, of CSI, FSCI and dFSCI, and of the final design inference. That’s what I believe. Do others in ID believe the same? Well, I think that some fundamental ID thinkers probably would not explicitly make the connection to consciousness, probably because they think it would imply a philosophical stand.
If they don't explicitly make the connection to consciousness, how do they define a designer? Would you care to elaborate, or at least give me a link to a page that does? Right from the beginning I've been looking for ID's definition of a designer. With your help, I've come to see that there is one that takes consciousness as a starting point, and I'm trying to take some small, baby steps towards devising a definition that doesn't require consciousness (with some feedback from ScottAndrews2 and yourself). So there are others? What are they? Surely it would be a good thing if the ID movement could either come up with a single definition or clearly define each alternative. One or the other.
If you start from the statement “dFSCI has only ever been observed to come from discrete environment-manipulating entities”, all instances of design mediated by the animal kingdom become demonstrations of that proposition and not merely inferences that base themselves on it. That’s not my position. I do believe that such a position would lead to many logical contradictions. So I refute it.
Again, I would love to know exactly what logical contradictions they would be.
But if you want to elaborate on it, and prove it empirically valid, I will listen.
Thank you for keeping an open mind. Or maybe now it is you playing the optimist! :) To elaborate: I use the word "discrete" to say that after the inception of the entity in question its actions are internally caused, at least in part, i.e. partially or wholly independent of external stimuli. I'd have thought "environment-manipulating" was self-explanatory. That's it.
Now I’m not saying the first definition is faulty as such, both seem to me to be equally viable definitions in and of themselves. I don’t agree. As I have said, I am strongly skeptical about your second definition, and would never use it.
Too right, neither would I! This is just my current draft, all I have at the moment is a kind of faith that a better draft is possible. I would be very happy if someone could help me with it. Scott? Anyone?
I am not aware that anyone else, except you, is so concerned that animals should be included in the set that demonstrates the inference. But I could be wrong.
Hah yeah, I'm asking that question myself. I just look at a beaver dam, and I instinctively feel that it should be part of what justifies the design inference instead of being an inference in itself. But maybe I am alone in that, judging by the fact that it's only me, you and Scott still sticking around this thread. If there are any observers lurking I'd really like to know what they think. That's right, I'm talking about you!
englishmaninistanbul
December 11, 2011, 05:02 AM PDT
englishmaninistanbul: And here are my comments (certainly not corrections, because obviously I have no special authority). But are animals consciously intelligent agents? Debatable in each and every case. So if we take consciousness as a starting point for defining designers, only human beings qualify in an empirically unassailable way. More or less. The consciousness in higher animals is an inference that many would agree with, even if it is certainly weaker than the inference for human beings (the "analogy" is weaker). Any kind of inference, design or otherwise, depends on past examples that demonstrate the reliability of the inference. Correct. So if we take consciousness as a starting point for defining designers, only human beings qualify in an empirically unassailable way. I agree. There is another reason for that. It's not only the fact that the inference of consciousness is weaker (although personally I accept it). The most important point is that the representations in humans are accessible, both in ourselves (directly) and in others (through language). That is not true for animals. Why is that so important? Because, according to my definition of design, the crucial point in a design process is that conscious representations are purposefully outputted to a material system. That is the whole point of design. Something that exists before as a subjective experience is then outputted, as form, to a material system. That's the true meaning of the words designer and design. Now, in humans we can really observe the whole process of design: we can witness the existence of the subjective representations (again, both in ourselves and in others), we can observe the design implementation and we can observe and analyze the designed object. That's why I rely only on humans to define design and demonstrate the relationship between dFSCI and design. But that does not exclude other non-human designers.
As we have said, the crucial point is the causal connection between subjective representations and the final output. Any conscious being can qualify, if we can demonstrate the subjective representations. Therefore, I would not consider the animals to demonstrate the reliability of the design inference. For that, humans qualify better. “dFSCI has only ever been observed to come from consciously intelligent agents.” The demonstrations of that proposition are many, but all of them come from human beings. True. But please, consider that the important point is the connection between conscious representations and the designed object. In that sense, what we observe in humans is potentially valid for any conscious being. Assuming beaver dams contain dFSCI, are beavers designers? Your answer is “Either they are, or whatever designed their genome is.” I would like to qualify that better. In a sense, there is no doubt that beavers are designers, if we assume that they are conscious (as I do). In building the dam, they are certainly guided by conscious representations, so they are in that sense designers. But a problem remains. Let's assume that the dam exhibits FSCI, and is therefore an object for which we can infer design. In that case, we believe that the dam is designed, and we need not know who the designer is to do that. OK? Then, we do ask ourselves: who is the designer of the dam? In a sense, as I said, the beaver is. The beaver certainly contributes to the design of the dam. What is, then, the difference with human design? The difference is: beavers only build dams. If they design, they always design the same type of object, even if with remarkable individual adaptations, and always for the same function. IOWs, they create no new functional specifications. Moreover, there is reason to believe that what they do is mainly inherited, because not only do beavers build only dams, but all beavers build dams.
That's only a more refined way of saying that their behaviour has all the formal characteristics of what we call (both in animals and humans) an "instinctive" behaviour. That is not true of human design. Although many components of human behaviour are certainly instinctive (and the need to design could well be considered as such), the important fact is that humans create new, individual specifications, conceive functions that have never been observed before, and support those functions with original FSCI. So, I insist: is the beaver the designer? I would say: it is certainly a co-designer of the dam, because its conscious representations certainly contribute to each individual output. But still, as the beaver's behaviour is largely repetitive, the functional specification is always the same, and the functional complexity is similar, it is reasonable to believe that the functional structure of dams is largely based on pre-existing information, very likely to be found in the beaver's genome. So, the beaver can be a co-designer of the dam, and yet it is not, probably, the conscious originator of the specification and of most of the functional information implied by the building of the dam. The designer of the beaver's genome is the true designer of those things. Sorry to be so analytical when you seem to prefer quick definitions: but if you ask analytical questions, I have to answer them in detail. If we do not yet have enough data to point to a designer (on a consciousness-based definition), that would mean that beaver dams do not demonstrate design, they are shunted to the category of instances where we can merely infer design. Absolutely correct. This doesn’t feel right. Why? It feels perfectly right to me. Beaver dams never turn up through chance and necessity, they are obviously a product of design. That is absolutely correct, provided that we assume that we have computed the FSCI in dams, and found that it is present.
As already said, we infer design in dams if and because they exhibit FSCI. We do not observe directly the whole design process of dams, unless we have access to the conscious representations of the beaver. If we could, we could better judge whether those conscious representations are sufficient to explain the dam, or not. And beavers, if they are not designers, are at least design proxies. They are “the [immediate] providers of specified information required to implement a function”, to hijack Scott Andrews’s tabled definition. Correct. And so? However I suspect that saying that “dFSCI (digital functionally specified complex information) has only ever been observed to come from providers of specified information required to implement a function” might seem rather circular. Correct. That's why I would never say such a silly thing. I detest circular reasoning. It all hinges on the word “provider”, which I would tentatively describe as a “discrete environment-manipulating entity.” No. It all hinges on the concept of consciousness. If you renounce the connection between conscious representations and the designed object, IMO you cannot define design in any reasonable way. You have shown yourself how such an attempt leads to circularity. If you start from the statement “dFSCI has only ever been observed to come from discrete environment-manipulating entities”, all instances of design mediated by the animal kingdom become demonstrations of that proposition and not merely inferences that base themselves on it. That's not my position. I do believe that such a position would lead to many logical contradictions. So I refute it. But if you want to elaborate on it, and prove it empirically valid, I will listen. Now I’m not saying the first definition is faulty as such, both seem to me to be equally viable definitions in and of themselves. I don't agree. As I have said, I am strongly skeptical about your second definition, and would never use it. But which one does ID work from?
Can it work from both? Does it work from both? The answer is very simple. ID is not a dogmatic theory. Many people have different approaches. Maybe some work better than others. I have clearly stated my approach. I take responsibility for it. In my view, the empirical definition of consciousness is the only way to a completely empirical definition of design, of CSI, FSCI and dFSCI, and of the final design inference. That's what I believe. Do others in ID believe the same? Well, I think that some fundamental ID thinkers probably would not explicitly make the connection to consciousness, probably because they think it would imply a philosophical stand. I don't agree with that. Again, if one uses words like "designer" and "choice", IMO he is implying consciousness. Those words cannot even be defined outside of consciousness. And consciousness is a completely empirical fact, provided that we don't superimpose on it our personal theories about what it is or means. Finally, I would like to just recall a couple of important points: a) The design inference does not need explicit knowledge of who the designer is. b) As far as I know, all ID theorists refer to human design to demonstrate the connection between design and CSI, or one of its subsets. I am not aware that anyone else, except you, is so concerned that animals should be included in the set that demonstrates the inference. But I could be wrong.
gpuccio
December 11, 2011, 01:57 AM PDT
EMII,
dFSCI has only ever been observed to come from designers (demonstrations), therefore whenever we observe dFSCI we are justified in assuming the existence of a designer even when unable to identify it (inferences).
I think the noun is less important. We could replace "designer" with "thing" and say that dFSCI has only been observed to come from things, therefore from dFSCI we can infer the existence of a thing. It's the adjective "intelligent" that matters. What if we took out "designer" and "intelligent" and made the noun "intelligence"? (Even though these are word games in a sense, I really enjoy it.)
ScottAndrews2
December 10, 2011, 03:44 PM PDT
gpuccio: I get the feeling I am trying your patience with my repeated attempts to express myself. I would like to assure you that I am not being deliberately difficult, and that your replies have all been very instructive, and I greatly appreciate them. Thank you for taking the time to correct many of my inaccurate statements. Despite the failings in my arguments so far I still feel I have a point that has been misunderstood, or rather, one that I have not yet succeeded in expressing correctly. So I would like to try to put it another way. Any kind of inference, design or otherwise, depends on past examples that demonstrate the reliability of the inference. "dFSCI has only ever been observed to come from designers (demonstrations), therefore whenever we observe dFSCI we are justified in assuming the existence of a designer even when unable to identify it (inferences)." So where do demonstrations stop and inferences begin? To answer that question we need to define the word "designer." Your definition starts with consciousness. This is an empirically proven reality, and I thank you for the powerful way that you argued that. "I am conscious, and every moment of my existence gives me reason to infer that all other human beings are conscious." Nobody can seriously argue with that either. But are animals consciously intelligent agents? Debatable in each and every case. So if we take consciousness as a starting point for defining designers, only human beings qualify in an empirically unassailable way. "dFSCI has only ever been observed to come from consciously intelligent agents." The demonstrations of that proposition are many, but all of them come from human beings. Assuming beaver dams contain dFSCI, are beavers designers? Your answer is "Either they are, or whatever designed their genome is." 
If we do not yet have enough data to point to a designer (on a consciousness-based definition), that would mean that beaver dams do not demonstrate design, they are shunted to the category of instances where we can merely infer design. This doesn't feel right. Beaver dams never turn up through chance and necessity, they are obviously a product of design. And beavers, if they are not designers, are at least design proxies. They are "the [immediate] providers of specified information required to implement a function", to hijack Scott Andrews's tabled definition. However I suspect that saying that "dFSCI (digital functionally specified complex information) has only ever been observed to come from providers of specified information required to implement a function" might seem rather circular. It all hinges on the word "provider", which I would tentatively describe as a "discrete environment-manipulating entity." If you start from the statement "dFSCI has only ever been observed to come from discrete environment-manipulating entities", all instances of design mediated by the animal kingdom become demonstrations of that proposition and not merely inferences that base themselves on it. Now I'm not saying the first definition is faulty as such, both seem to me to be equally viable definitions in and of themselves. But which one does ID work from? Can it work from both? Does it work from both? I submit my thoughts in good faith and await your comments and corrections.
englishmaninistanbul
December 10, 2011, 03:19 PM PDT
Scott: Very well said. I think that is exactly what I have tried to say in my posts here. It is simple, after all, if one just stops one moment to understand. Functional information can be identified in the final designed object. That is enough to infer intelligent design, but does not tell us when the information was inputted (the time of the design process) and who inputted it or how (the identity of the designer and the modalities of implementation). It is perfectly true that those aspects are not necessary to infer design. However, those aspects are certainly, in principle, amenable to scientific inquiry. The time of the design process is the time when functional information appears for the first time in a material system. If that time can be known, then the time of the design process is known. The modalities of implementation can be known in principle, either directly (by observing the design process), or more often indirectly, by scientific inference. For instance, I have argued many times that biological design by direct writing (guided variation) and biological design by RV + intelligent selection are two valid possibilities, but will have different natural histories, and so can in principle be distinguished by a detailed knowledge of the natural history of biological beings, of genomes and of proteomes. The identity of the designer is still another independent issue. The case of the beavers is an excellent model of how, after having inferred design, we can still be in doubt about the identity of the designer. Both the beaver itself and the designer of the beaver's genome are valid possible candidates. The problem is not trivial, and can be solved on an empirical basis, as I have said (for instance, by identifying the specific information sufficient to build dams in the beaver's genome).
But in no way do those problems compromise the design inference for dams, provided that it is supported by a correct evaluation of the presence of FSCI in the final object.
gpuccio
December 10, 2011, 05:58 AM PDT
I remember seeing a tiered beaver dam when I was living on a farm in New Hampshire (recovering from graduate school). The brooklet it was exploiting was quite small, and so the beavers had actually constructed three dams, neatly spaced. Quite fascinating. Doubt they thought of it themselves.
allanius
December 10, 2011, 05:20 AM PDT
englishmaninistanbul: Once again I miss the logic of some of your remarks. You say: So now we have a way of scientifically describing designed objects, so that nobody can justifiably claim on scientific grounds that design is or could be an illusion. And that's true. But now we come to the question of the designer, and sometimes the objection is raised that “consciousness is an illusion” or “free will is an illusion.” And I had the feeling that a similar rigorous scientific definition for designers is lacking. And I already answered you that free will is not necessary to define a conscious designer, and that consciousness is an empirical fact that cannot be denied by anyone. Please, explain, if you don't agree, how consciousness could ever "be an illusion". Obviously, an illusion is something that can take place only in a consciousness (it is essentially a representation that does not correspond to reality). So, to state that consciousness is an illusion is mere semiotic nonsense. Therefore, your "feeling that a similar rigorous scientific definition for designers is lacking" is simply a wrong feeling. Is a beaver dam a designed object? If we can measure its functional information with reasonable approximation, and if it is high enough in relation to an appropriate threshold, we can reliably infer design for it. There is no problem in the concepts and methods. There can be, obviously, some practical difficulties in the individual measurements, as always happens in all sciences. But if we accept that a beaver dam has dFSCI, is the beaver a designer? I have explicitly answered that too. If the functional information derives from conscious representations in the beaver, then the beaver is the designer. If, instead, the functional information, or at least most of it, derives from automatic behaviours, coded for instance in the beaver's genome, then the designer of the beaver's genome is also the designer of the dam.
It is very simple: the designer is the conscious first originator of the functional information we observe in the final object. The only "problem" here is that we do not have enough data: first of all, we know very little about conscious processes in beavers (and that can be a very difficult point to improve). But we also lack any understanding of if and how "dam building" is based upon genomic information in the beaver. That can certainly be understood in the future, as research goes on. Research is already being done to understand the genomic basis of instinctive behaviour in animals. The only problem I see is your final, strange statement: It seems it should be a much easier question to answer than it is at present… Why? I really don't understand why you think it should be easy at all. We have a good theory, good definitions and good tools. But the solution of individual problems depends critically on other things, especially on existing data. Sometimes, solutions just require time to be found, even with the best available methodology. Quantum mechanics is a very powerful scientific theory, and yet most simple systems in nature cannot be realistically analyzed by QM. That is true even of Newtonian mechanics, in many cases. Even in mathematics we have many potentially treatable problems that have not been solved up to now. Does that mean that mathematics is not a good discipline? So, your statement that deciding whether the beaver is the designer of the dam "should be a much easier question to answer than it is at present…" is simply nonsense, and suggesting that the "problem" you see is due to faulty definitions in the theory is simply wrong.
gpuccio
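The "measure its functional information against an appropriate threshold" step described above can be sketched numerically. The snippet below is an illustrative addition, not part of the original comment: it uses the published functional-information measure of Hazen et al. (I = -log2 of the fraction of configurations that perform the function), which is analogous to, but not identical with, the dFSCI metric discussed in the thread; the function names and the 150-bit threshold are placeholders of my own choosing.

```python
import math

def functional_information(n_functional: int, n_total: int) -> float:
    """Functional information in bits: I = -log2(F), where F is the
    fraction of all possible configurations that perform the function
    (after Hazen et al., 2007). Rarer functions carry more bits."""
    fraction = n_functional / n_total
    return -math.log2(fraction)

def infer_design(bits: float, threshold_bits: float = 150.0) -> bool:
    """Toy decision rule: infer design only when measured functional
    information exceeds a chosen threshold. The 150-bit default is an
    arbitrary placeholder, not a value endorsed in the thread."""
    return bits > threshold_bits

# Example: if 1 in 2**20 random configurations performs the function,
# the functional information is exactly 20 bits.
bits = functional_information(1, 2**20)
print(bits)                # 20.0
print(infer_design(bits))  # False: 20 bits is below the toy threshold
```

Note that the hard part in practice, as the comment itself concedes, is estimating the fraction F for a real object like a dam or a protein; the arithmetic after that is trivial.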
December 10, 2011, 01:58 AM PDT
GD: I see your:
I am working from the principles and actual practice of information theory (although I haven’t touched on thermo here). My objection is that you are not . . . [& Re my: "Signals, are intelligently created, noise is based on — in general — random, stochastic natural processes, e.g. Johnson, flicker, shot, sky etc."] This is not how the terms are defined and used in information theory.
I must beg to disagree, once we focus on the significance of the signal to noise ratio and the fact that we already know from massive experience that signals exist and have certain objective characteristics, which we can then use to sufficiently accurately measure signal power, and the same holds for noise. In particular, we know that thermal agitation creates Johnson noise, a statistical thermodynamic result; that shot noise comes from the stochastic nature of currents in semiconductors; that sky noise comes from various natural processes giving rise to a random radio background; and more, much more. We can therefore even assign noise factor/figure values to equipment, or a noise temperature value, as can be seen from LNB's for satellite dish receivers. The last, reflecting the open bridge to statistical thermodynamics considerations. Clipping Wikipedia on noise, as a convenient source speaking against known ideological interest:
Electronic noise [1] is a random fluctuation in an electrical signal, a characteristic of all electronic circuits. Noise generated by electronic devices varies greatly, as it can be produced by several different effects. Thermal noise is unavoidable at non-zero temperature (see fluctuation-dissipation theorem), while other types depend mostly on device type (such as shot noise,[1][2] which needs steep potential barrier) or manufacturing quality and semiconductor defects, such as conductance fluctuations, including 1/f noise. In communication systems, the noise is an error or undesired random disturbance of a useful information signal, introduced before or after the detector and decoder. The noise is a summation of unwanted or disturbing energy from natural and sometimes man-made sources. Noise is, however, typically distinguished from interference, (e.g. cross-talk, deliberate jamming or other unwanted electromagnetic interference from specific transmitters), for example in the signal-to-noise ratio (SNR), signal-to-interference ratio (SIR) and signal-to-noise plus interference ratio (SNIR) measures. Noise is also typically distinguished from distortion, which is an unwanted alteration of the signal waveform, for example in the signal-to-noise and distortion ratio (SINAD). In a carrier-modulated passband analog communication system, a certain carrier-to-noise ratio (CNR) at the radio receiver input would result in a certain signal-to-noise ratio in the detected message signal. In a digital communications system, a certain Eb/N0 (normalized signal-to-noise ratio) would result in a certain bit error rate (BER).
Notice a key, telling contrast: the noise is an error or undesired random disturbance of a useful information signal, i.e. it is quite obvious that informational signals are intelligently applied and functional, while noise is naturally occurring and degrades or even undermines function if it gets out of hand. In addition, we can see that there will be observable, contrasting characteristics, and a relevant way to come up with statistical models of noise that will show that noise imitating signals is maximally implausible, and with high reliability, practically unobservable. We can therefore characterise the statistics of signals, and those of noise, and with high certainty know that in a given situation, the noise power level is x dB, and the signal power level is X dB, so that our Signal to Noise ratio can be estimated off appropriate volts squared values, etc. And, we can use 'scopes to SEE the patterns of signal and noise, as the eye diagram -- your irrelevant distractor notwithstanding -- shows. As in, how open is the eye? Something as simple as snow on a TV set vs a clear picture is a similar case. AmHD gives a useful summary on what signals, by contrast are, one that is instantly recognisable to anyone who has had to work with signals in an electronics or telecommunications context:
sig·nal (sgnl)n.1. a. An indicator, such as a gesture or colored light, that serves as a means of communication. See Synonyms at gesture. b. A message communicated by such means. 2. Something that incites action: The peace treaty was the signal for celebration. 3. Electronics An impulse or a fluctuating electric quantity, such as voltage, current, or electric field strength, whose variations represent coded information. 4. The sound, image, or message transmitted or received in telegraphy, telephony, radio, television, or radar . . .
I would modify that slightly, to make more room for analogue signals, i.e. the variation represents coded or modulated or baseband, analogue information. So, I am quite correct based on longstanding praxis that the whole context of discussing how much information is passed per symbol, on average, on a - SUM [pi*log pi] measure, i.e. H, is that we already know to distinguish signal and noise in general, and can measure the power levels of noise and signal. Now, there is of course a wider usage of H, which ties back to thermodynamics, going back to implications of Maxwell's demon and Szilard's analysis which was extended by Brillouin. Jaynes and others have carried this forward to our time, amidst a debate over the link between information and thermodynamics, which now seems to be settling down in favour of the reality of the link. As my always linked note shows, I have long used Robertson's summary of it, that a distribution of possibilities and reduction in uncertainties is associated with information. In particular, the lack of information about microstates of matter leads to a situation where we have to work at gross level, and so limits the work that can be extracted from heat, etc. This allows a bridge to be built from Shannon's H metric, to statistical measures of entropy. It turns out that Shannon's assignment of the term "entropy" to the metric that had an analogous mathematical form has substantial support. I now clip from my notes on the subject, including a cite from Wikipedia on the current state of play:
Further to this, we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate. 
>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) . . . we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.)
So, what is happening here is that Shannon's H-metric is conceptually connected to thermodynamic entropy, the latter being a measure of the degree of freedom or uncertainty remaining at microscopic level, once we have specified the gross, lab level observable macrostate. In short, the missing info that would have to be supplied -- and would require a certain quantity of work to do so -- to in principle know the microstate. Going beyond, and back to the key matter, the definition of Shannon's H metric and its use exists in a context; it is not isolated from the considerations already given. In particular, we routinely know the difference between intelligently applied signals and naturally occurring noise, and that it is maximally implausible for noise to mimic signals, though it is logically and physically possible in principle. As I discussed in Appendix 8 of the same note, a pile of rocks on a hill on the border between England and Wales could -- logical and physical possibility -- fall down the hill and spontaneously form the pattern of glyphs: "Welcome to Wales." However, the number of configurations available for an avalanche, and the high contingency, is such that with maximum confidence we can rest assured that this is operationally implausible, i.e. scientifically unobservable on chance plus necessity. If we ever go to the railway line on the border and see "Welcome to Wales" spelled out in rocks, we can with maximal assurance infer that this was intentionally put there as a signal, by an intelligence using the glyphs of the Roman Alphabet as modified for English and in effect using stones as pixels. Or, equivalently, we can infer that such an observation is operationally inconsistent with the reasonably observable results of blind chance and mechanical necessity acting on stones on a hillside. All of this ties right back to the case of an empirical observation E, from a narrow zone or island of function T, in a much wider sea of possible configurations, W.
Which is precisely the design inference in action, on a concrete illustration, that shows both the issue of complex specified information, and that of functionally specific, complex organisation per the Wicken wiring diagram, and associated information; indeed, of digitally coded functionally specific complex information. Of course, H can be used to look at distributions and patterns in general [the above applied it to thermodynamics!], but that has nothing to do with its telecomms and information theory context, that of our routine ability to distinguish and measure the difference between noise and signal. And, on characteristic signs and the relevant statistics of sampling of a space of configurations, we can and do routinely recognise that we may reliably distinguish information from noise on characteristic patterns, so much so that we use S/N as a key measure in evaluating something as basic as theoretical channel capacity. Put in other words, the inference to design on observable characteristics and background knowledge and experimental technique is an integral part of information and telecomms theory and praxis. As matters of commonplace fact. GEM of TKI
kairosfocus
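As a concrete numerical footnote to the H metric and signal-to-noise discussion above, here is a minimal sketch (the probability values and power levels are invented purely for illustration):

```python
import math

def shannon_H(probs):
    """Average information per symbol, H = -SUM[pi * log2(pi)], in bits/symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair two-symbol source: maximum uncertainty, H = 1 bit/symbol.
print(shannon_H([0.5, 0.5]))            # 1.0
# A heavily biased source is far more predictable, so H drops.
print(round(shannon_H([0.9, 0.1]), 3))  # 0.469

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels, from separately measured powers."""
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(1.0, 0.001))  # 30.0
```

The point is simply that H falls as a source becomes more predictable, while S/N is an ordinary ratio of two separately measured power levels.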
December 10, 2011, 12:09 AM PDT
KF:
Sorry, but your above response shows the root problem: you are cyclically repeating objections, instead of working forward from first principles and actual praxis of information theory and thermodynamics.
First, as far as I can remember this is the first time I've raised these objections to you. I've raised them to other people, and you've probably had them raised to you before, but that does nothing to indicate they're wrong in any way. Second, I am working from the principles and actual practice of information theory (although I haven't touched on thermo here). My objection is that you are not.
Signals, are intelligently created, noise is based on — in general — random, stochastic natural processes, e.g. Johnson, flicker, shot, sky etc. There is also of course cross-talk that is an unwanted effect of an intelligent signal, and there are ground loops with mains pickup etc.
This is not how the terms are defined and used in information theory.
So, of course the metric H itself does not “distinguish” signal from noise, it is rooted in a situation where we already know — and can measure — the difference and use H in a knowledgeable context. In short, we know what noise looks like, and what signal looks like, and we can measure power levels of both. The eye diagram is a classic example of that, as I pointed out already. So is the good old “grass growing on the signal.”
What a signal looks like depends entirely on the encoding -- the eye diagram page you linked gives several examples of different encodings, and some encodings won't look like an eye pattern at all. What noise looks like also depends on the particular noise source. The only way to distinguish a particular signal from a particular type of noise is to start with some assumptions about each of them.
The very fact that we routinely mark the observable distinction to the point where there is an embedding of the ratio in the bandwidth expression that was one of Shannon’s main targets in his analysis, is revealing.
Shannon was considering a specific case: a band-limited channel (an assumption about the limits placed on the signal) with Gaussian white noise (an assumption about the noise). Make different assumptions, and you'll get different results.
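That band-limited, Gaussian-white-noise case can be sketched numerically (the channel numbers here are invented for illustration):

```python
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity for a band-limited channel with additive
    white Gaussian noise: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz telephone-style channel at 30 dB SNR (S/N = 1000):
print(round(capacity_bps(3000, 1000)))  # about 29902 bits/s
# Halving the assumed bandwidth gives a different answer, since the
# result depends directly on the modeling assumptions.
print(round(capacity_bps(1500, 1000)))  # about 14951 bits/s
```

Change the assumed noise model (e.g. non-Gaussian or non-white noise) and this closed-form result no longer applies, which is the point being made above.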
PS: I see another recycled objection that I will pause on. As has already been pointed out in the onward linked and in earlier discussions that have obviously been strawmannised in the usual fever swamp sites, the first default is that a situation is best explained on necessity, and/or chance; in which case S = 0 by default. In short, the explanatory filter STARTS from the assumption that chance and/or necessity are the default explanations. It is when there is a positive, objective reason to infer specificity, that we assign S = 1. As I gave above, a few days ago, here at UD I found out the hard way that if a picture caption has a square bracket, the post vanishes. The function disappears over a cliff, splash. The same occurs with protein chains, and the same occurs in program code, etc etc etc. With car parts, you had better get the specifically right part, or it will not work; and of course the information in the shape etc of such parts can be reduced to clusters of linked strings, as say would occur in a part drawing file. This is not hard to figure out — save to those whose intent is to throw up any and all objections in order to dismiss what would otherwise make all too much sense.
...I'm honestly not sure what this has to do with anything I wrote. If it's a response to the question I asked about the dummy variable S, I was just asking for clarification, and this hasn't clarified it at all for me.
Gordon Davisson
December 9, 2011, 04:52 PM PDT
GD: Sorry, but your above response shows the root problem: you are cyclically repeating objections, instead of working forward from first principles and actual praxis of information theory and thermodynamics. Signals, are intelligently created, noise is based on -- in general -- random, stochastic natural processes, e.g. Johnson, flicker, shot, sky etc. There is also of course cross-talk that is an unwanted effect of an intelligent signal, and there are ground loops with mains pickup etc. So, of course the metric H itself does not "distinguish" signal from noise, it is rooted in a situation where we already know -- and can measure -- the difference and use H in a knowledgeable context. In short, we know what noise looks like, and what signal looks like, and we can measure power levels of both. The eye diagram is a classic example of that, as I pointed out already. So is the good old "grass growing on the signal." H, being the average info per symbol, in effect, is then fed into the channel capacity. It is that capacity that is set by signal to noise ratio -- as already and separately identified -- and bandwidth. The very fact that we routinely mark the observable distinction to the point where there is an embedding of the ratio in the bandwidth expression that was one of Shannon's main targets in his analysis, is revealing. GEM of TKI PS: I see another recycled objection that I will pause on. As has already been pointed out in the onward linked and in earlier discussions that have obviously been strawmannised in the usual fever swamp sites, the first default is that a situation is best explained on necessity, and/or chance; in which case S = 0 by default. In short, the explanatory filter STARTS from the assumption that chance and/or necessity are the default explanations. It is when there is a positive, objective reason to infer specificity, that we assign S = 1. 
As I gave above, a few days ago, here at UD I found out the hard way that if a picture caption has a square bracket, the post vanishes. The function disappears over a cliff, splash. The same occurs with protein chains, and the same occurs in program code, etc etc etc. With car parts, you had better get the specifically right part, or it will not work; and of course the information in the shape etc of such parts can be reduced to clusters of linked strings, as say would occur in a part drawing file. This is not hard to figure out -- save to those whose intent is to throw up any and all objections in order to dismiss what would otherwise make all too much sense.
kairosfocus
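The default-to-necessity/chance logic described in this PS can be sketched as a toy filter (entirely hypothetical numbers; the Chi expression follows the Chi = – log2(2^398 * D2 * p) form quoted elsewhere in the thread):

```python
import math

def chi(D2, p):
    """Chi = -log2(2^398 * D2 * p), per the form quoted in this thread."""
    return -(398 + math.log2(D2) + math.log2(p))

def explanatory_filter(contingent, S, chi_value):
    """Defaults first: necessity, then chance (S = 0 by default).
    Design is inferred only when specificity holds (S = 1) AND the
    configuration is beyond the probabilistic resources (Chi > 0)."""
    if not contingent:
        return "necessity"
    if S == 0 or chi_value <= 0:
        return "chance"
    return "design"

# Hypothetical 500-bit-specific configuration: p = 2^-500, D2 = 1e5 (invented).
c = chi(1e5, 2.0 ** -500)
print(round(c, 1))                      # about 85.4
print(explanatory_filter(True, 1, c))   # design
print(explanatory_filter(True, 0, c))   # chance (the default)
print(explanatory_filter(False, 1, c))  # necessity
```

Note how S = 0 forces the chance verdict regardless of Chi, matching the statement that chance/necessity are the starting assumptions.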
December 9, 2011, 02:17 PM PDT
KF:
Pardon, but information obviously embraces a spectrum of interconnected meanings.
I'd agree with that, but with the qualification that even though they're interconnected, they diverge quite a bit from each other.
A good place to start with is how Shannon’s metric of average info per symbol, H, is too often twisted into a caricature that is held to imply that it has nothing to do with information as an intelligent product.
I'll disagree with that, although with a qualification: H doesn't distinguish between information from intelligent sources or unintelligent sources, so in that sense it has "nothing to do with information as an intelligent product". But information from intelligent sources contributes to H, so in that sense they do have something to do with each other. Essentially, H = (intelligent-origin information) + (unintelligent-origin information). So H counts information from intelligent sources, but doesn't distinguish it from information from unintelligent sources.
To see that this is plainly wrong, let us simply move the analysis forward until we come to the step where Shannon puts H to work in assessing carrying capacity [C] of a band-limited [B] Gaussian white noise channel:
C = B log2(1 + S/N) . . . Eqn 1
See that ratio S/N? It is a log ratio of signal power to noise power. That is, it is premised on the insight that we can and routinely do recognise objectively and distinguish signals and noise, and can quantitatively measure the power levels involved in each, separately. (In a digital system, the Eye Diagram/’scope display is a useful point of reference on this.)
This is irrelevant, since the signal/noise distinction has nothing to do with whether the information is from intelligent sources or not. To see why this is, consider some examples of signal noise from intelligent sources. First, some noise from ID sources: if you look at the noise a modern radio communication system has to exclude, a lot of it is due to other radios using the same (or nearby) frequencies (or leaking radiation at unintended frequencies, etc). Generally, this is limited by FCC (and similar bodies') regulations that limit who's allowed to transmit at what frequencies and power, but in less-strictly-regulated frequency bands interference is common. Different Wi-Fi networks, for example, will interfere with each other (and with cordless telephones, and bluetooth devices, and...) if they're too close and using the same frequencies. You can also get interference from ID-but-not-meaningful sources; for instance, microwave ovens tend to leak radiation in the 2.4 GHz band used by 802.11b, g, and n. One can see similar things in non-radio contexts as well: for electronic signals, crosstalk (leakage of signals between nearby wires) can be a significant problem. As with radio interference, this is generally dealt with by a combination of designing the system to limit the amount of crosstalk, and designing the receivers to ignore crosstalk they do receive.
Second, some signals from non-ID sources: radio astronomy comes to mind as an example of where the receiver (the radio telescope) is designed to receive a signal from non-intelligent sources (e.g. stars), and exclude noise from terrestrial sources (radios, etc). In fact, the same is true of all other types of astronomical telescopes as well, and for that matter even normal cameras (depending on what you're taking a picture of).
So then what is the signal vs. noise distinction? IMHO, it's really a distinction between the information you want (signal) vs. information you don't want (noise); as such, it's not a distinction that originates within information theory, but a distinction imposed on it from the outside. If I tune my radio to station A, it's supposed to play whatever station A is sending, and any interference from stations B, C, D, etc is defined as noise. If I change the station to B, then B changes from noise to signal and A from signal to noise. Information theory can help with designing the radio receiver to obey these whims of mine, and evaluate its success in doing so, but it cannot tell me what those whims should be.
That's the practical situation; what about the theoretical side of information theory? It is, if anything, even further from what you described. If anything, information theory seems to go out of its way to ignore the distinction between ID and non-ID information. Certainly, it does not limit information production to intelligent sources. As far as statistical information theory is concerned, the defining characteristic of an information source is that it is not completely predictable; whether its unpredictability is a result of intelligent choice or simple randomness does not matter. From the introduction to Shannon's original paper:
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.
And from section 2, The Discrete Source of Information:
We can think of a discrete source as generating the message, symbol by symbol. It will choose successive symbols according to certain probabilities depending, in general, on preceding choices as well as the particular symbols in question. A physical system, or a mathematical model of a system which produces such a sequence of symbols governed by a set of probabilities, is known as a stochastic process.[3] We may consider a discrete source, therefore, to be represented by a stochastic process. Conversely, any stochastic process which produces a discrete sequence of symbols chosen from a finite set may be considered a discrete source. This will include such cases as: 1. Natural written languages such as English, German, Chinese. [these are ID sources -GD] 2. Continuous information sources that have been rendered discrete by some quantizing process. For example, the quantized speech from a PCM transmitter, or a quantized television signal. [might or might not be ID, depending on the continuous source -GD] 3. Mathematical cases where we merely define abstractly a stochastic process which generates a sequence of symbols. [followed by examples] [i.e. an idealized random source, which is not ID -GD]
In my opinion, there are two primary reasons that information theory ignores intelligence and meaning: first because it doesn't really matter to the communication system what (if anything) the messages it's passing mean or where they come from (see the first Shannon quote); and second because we have extensive mathematical tools for describing and analyzing random processes, but no analogously powerful tools for dealing with intelligence and meaning; ignoring the distinction allows the theory to apply the random-based tools to situations where they don't strictly apply. I think of statistical information theory's treatment of intelligence in information sources as being a bit like theistic evolution's treatment of God: yeah, we know it's there, but the theory works best if we pretend it doesn't.
BTW, another argument I sometimes see made based on Shannon's theory is that his theory treats transmitters, receivers, codes, etc as being intelligently designed (and I made the same assumption in at least one place above), and that therefore these things must be intelligently designed. This is wrong; Shannon assumed this because he was interested in analyzing intelligently designed communications systems, not because that was the only type possible. Since that was an assumption of the theory, it can't also be a conclusion without making the argument circular.
Finally, I haven't had a chance to reply on your ID Foundations 11 posting, so let me throw in a couple of quick requests for clarification here. First, in the equation "Chi = – log2(2^398 * D2 * p)", are D2 and p the same as Dembski's φ_S(T) and P(T|H), respectively? Second, can you clarify what you mean when you define "a dummy variable for specificity, S, where S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T"? Is it simply that S=1 when E is in T, and S=0 otherwise?
Gordon Davisson
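A small sketch of the point that H is computed from symbol statistics alone, blind to meaning or origin (the strings are illustrative, my own choosing):

```python
import math
import random
from collections import Counter

def empirical_H(seq):
    """Estimate H = -SUM[pi * log2(pi)] from observed symbol frequencies.
    Only the statistics matter, not meaning or the source's nature."""
    counts = Counter(seq)
    n = len(seq)
    # Sort counts so both orderings of the same multiset sum identically.
    return -sum((c / n) * math.log2(c / n) for c in sorted(counts.values()))

meaningful = "the quick brown fox jumps over the lazy dog"
scrambled = list(meaningful)
random.shuffle(scrambled)  # same symbols, meaning destroyed
scrambled = "".join(scrambled)

# Identical symbol statistics yield exactly the same H, whether the
# sequence is a meaningful English sentence or a random shuffle of it.
print(empirical_H(meaningful) == empirical_H(scrambled))  # True
```

This is the sense in which the metric "counts" information from intelligent sources without distinguishing it from information produced by a stochastic process with the same statistics.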
December 9, 2011, 01:40 PM PDT
oops, not logged, actual values.
kairosfocus
December 9, 2011, 12:35 PM PDT
I wrote my previous reply in haste, so now it is time to repent at leisure. It seems that a lot of the great leaps forward in certain fields of science start with a scientist who grasps something genuinely new. His self-confident opponents are working from an established paradigm with an extensive toolbox of arguments. They are the orthodoxy, and science is a game they think they have played and won. So our scientist's next task is to rigorously translate that insight into the language of science to show that his idea is indeed superior to what went before. All the way back in 14, I mentioned how Stephen Meyer describes how he set out to find a scientific way of describing and justifying the intuitive design inference. You need that to counter the "design is an illusion" objection, which usually finds its source in materialism. Materialism claims science as its own turf, so "beating them at their own game" means doing good science, but also defining things empirically in an unassailable way, or "in words of one syllable", so that the truth will out. (A poor choice of expression to be sure.) So now we have a way of scientifically describing designed objects, so that nobody can justifiably claim on scientific grounds that design is or could be an illusion. But now we come to the question of the designer, and sometimes the objection is raised that "consciousness is an illusion" or "free will is an illusion." And I had the feeling that a similar rigorous scientific definition for designers is lacking. Is a beaver dam a designed object? We have dFSCI for that, if only we had enough time to ponder the question. But if we accept that a beaver dam has dFSCI, is the beaver a designer? It seems it should be a much easier question to answer than it is at present... Maybe I'm Don Quixote tilting at windmills. 
Still, from underneath the twisted pile of what was my shining armour, I would suggest that it would be helpful if the salient points from our discussion could be crystallized and made available on the comment policy page or somewhere else for easy reference.
englishmaninistanbul
December 9, 2011, 12:20 PM PDT
That's an interesting idea. I can see how it might go through a number of iterations as each definition allows for or excludes something unintentionally. It could also prove impossible given enough hair-splitting. If "intelligence" is clearly defined enough, then perhaps the rest isn't really so difficult. I don't know that the word "designer" is important. The adjective is - it just needs a noun to go with it. What's important is the intelligence. The noun could be just about anything as long as that attribute can be applied to it.
Function is central to Intelligent Design. I think it follows that the intelligent agent, or designer, is the entity that originates the implementation of a function. Too wordy? The intelligent designer provides the specified information required to implement a function. That would cover the function of arranging words to convey meaning, or the function of building dams to benefit both beavers and the ecology.
In the case of the dams, having determined that they perform a function (for the beavers and/or the ecosystem) and that this function requires specified information (how to gather materials and assemble a dam, where, and when) then intelligent cause is inferred. That leaves open the question of whether the beavers possess it, but the determination does not hinge on the answer. If they do not, the information must have been provided to them. It is an observed reality that intelligent agents can produce unintelligent agents that follow intelligently formulated instructions, even if the instructions are so complex and the inputs so varied that the agents appear intelligent.
So how's that: The intelligent designer provides the specified information required to implement a function.
ScottAndrews2
December 9, 2011, 11:45 AM PDT
englishmaninistanbul: It is not my purpose, and I hope not the purpose of any serious IDist, to "beat materialists at their own game". Our only purpose, I believe, is to do good science. We can happily leave the materialist game to materialists. And defining a concept in "words of one syllable" has never been, as far as I know, an epistemological requirement of good science. Finally, I believe you are really an optimist. Even if we could do what you ask (which we cannot, and do not want to), materialists would argue just the same. I am afraid that you have not understood what "their own game" really is.gpuccio
December 9, 2011, 09:46 AM PDT
englishmaninistanbul: I still don't agree with defining intelligence out of consciousness. Consciousness is the primary condition. Intelligence has no meaning outside of conscious cognition. If beavers are designers, that's because they are consciously intelligent. The computer is not intelligent. It is a depository of intelligent choices made by humans. Even if it "manipulates" the environment, it does so only because it has been programmed to do so, not because it understands, or represents, or cognizes, or has purposes. Computers, or other non-conscious, designed systems, are only passive executors of conscious plans devised by conscious beings. Beavers are conscious, and that's why the discussion about them is more difficult, as I said from the beginning, and as proven by the interventions here. But the original source of dFSCI is a conscious, meaningful, purposeful representation. The first beginning of dFSCI is the conception of a function. A machine cannot do that, unless passively, automatically executing instructions that have been written into it, without any awareness of their meaning or purpose. Therefore, I insist: only conscious intelligent agents have ever been observed to produce de novo dFSCI. Designed machines can output dFSCI according to the information that has already been inputted into them by their designers. As Abel emphasizes, functional information is only the result of choice determinism, of the application of choice to configurable switches, something that neither necessity nor chance contingency can ever realize. Machines work by necessity, or sometimes chance and necessity. They have no choice determinism.gpuccio
December 9, 2011, 09:37 AM PDT
Oh absolutely. I'm just saying that when you infer design, you imply the existence of a designer, and although we all know intuitively what the word "designer" means in a general sense, people with a materialist world view might insist that that concept is undefinable scientifically and rubbish your entire theory. If you can find a way to define the concept of "designer" in words of one syllable that even they can't argue with then you have a chance of beating them at their own game.englishmaninistanbul
December 9, 2011, 09:25 AM PDT
Thank you for that link kf. I had seen that page before, if only I'd read it more carefully when I first saw it. I would just like to quote one section because I think it epitomizes what we've been discussing up to this point:
There plainly are other cases of FSCO/I that point to non-human intelligent designers, albeit these are of limited [non-verbal] forms. Where this gets interesting is when we bring to bear the Eng Derek Smith Cybernetic Model of an intelligent, environment-manipulating entity: In this model, an autonomous entity interacts with the environment through a sensor suite and through an effector array, with associated proprioception of internal state that allows it to orient itself in its environment, and act towards goals. The key feature is the two-tier control process, with Level I being an in-the-loop Input/Output [I/O] controller. But, the Level II controller is different. While it interacts with the loop indeed, it is supervisory for the loop. That allows for projection of planned alternatives, decision, reflection on success/failure, adaptation, and more. That is not all, it opens the door to different control implementations, on different “technologies.” For instance, it could be a software entity, with programmed loops that allow an envisioned degree of adaptation to circumstances as it navigates and tacks towards impressed goals. That sort of limited autonomy could indeed be simply hard wired or even uploaded as an operating system for a robot or a limited designer.
This is the sort of thing I've been looking for, in the sense of an empirically warranted way of defining an "intelligence." I don't see how even a determinist can deny the validity of this. Humans, beavers and other animals, and even certain postulated computer programs do in fact qualify as "intelligent, environment-manipulating entities." Only intelligent, environment-manipulating entities have ever been observed to produce dFSCI. Personally, this sounds like a more lucid position to hold than to say that "only intelligent agents have ever been observed to produce dFSCI", because it's much harder to come at it saying it's anthropocentric or open to interpretation or philosophically-grounded or what have you.englishmaninistanbul
December 9, 2011, 06:40 AM PDT
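For readers who think in code, the two-tier (Level I / Level II) control structure quoted from the Smith model above can be caricatured in a short sketch. Everything here is invented for illustration only (the class names, the proportional Level I loop, the goal list); it is a toy showing the in-the-loop controller versus supervisory re-planning, not an implementation of the actual cybernetic model:

```python
class LevelOneLoop:
    """In-the-loop I/O controller: simple proportional correction toward a setpoint."""
    def __init__(self, gain=0.5):
        self.gain = gain
        self.setpoint = 0.0

    def step(self, sensed_value):
        # Effector output is proportional to the error from the current goal.
        return self.gain * (self.setpoint - sensed_value)


class LevelTwoSupervisor:
    """Supervisory controller: monitors the loop's success and projects new goals."""
    def __init__(self, loop, goals):
        self.loop = loop
        self.goals = list(goals)

    def supervise(self, sensed_value, tolerance=0.1):
        # If the current goal is reached, switch the loop to the next planned alternative.
        if abs(sensed_value - self.loop.setpoint) < tolerance and self.goals:
            self.loop.setpoint = self.goals.pop(0)


# Toy run: the supervisor steps the loop through a sequence of goals.
loop = LevelOneLoop()
supervisor = LevelTwoSupervisor(loop, goals=[1.0, 2.0])
state = 0.0
for _ in range(50):
    supervisor.supervise(state)
    state += loop.step(state)
```

The point of the separation is the one the quote makes: the Level I loop only tracks whatever setpoint it is given, while the Level II supervisor stands outside the loop, judging success or failure and re-planning. Here the "planning" is just a fixed list of goals, which is exactly the kind of limited, uploaded autonomy the quoted passage describes.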
The example of the beaver demonstrates the variety of ways in which intelligent agency can be demonstrated. One could imagine a dam and build it with his own hands, or one could design a new "machine" or specialize an existing one for building dams. Either way the dam itself ultimately has the same designer. But the second case adds the design of the machines and has a purpose, not only for a single dam, but for a pattern of dam-building. It takes into account more than the immediate benefit of a single dam. But it shows the need to separate design from implementation and from replication. If someone writes an autobiography, how much difference does it make whether they typed, dictated, wrote by hand, or described events to a ghost writer? Or if someone designs an exercise machine, does it make a difference whether they assemble it with their own hands or specify a bunch of parts with assembly instructions for someone else to manufacture and someone else to assemble? Or what if several people collaborate? In both cases the implementation or even the method of design could be impossible to discern from the finished product. Can we still infer design in the case of the autobiography and the exercise machine based upon their specified complexity without identifying a designer or the implementation? I think we can and routinely do.ScottAndrews2
December 9, 2011, 06:38 AM PDT
EII: Pardon, but all that is needed is an objective basis for recognising that intelligences exist and act. That has long since been trivially answered, as we are such, and arguably the likes of beavers are such too (observe the sketch maps). I here notice how such beavers adapt their dam-building to the circumstances of stream flow, which is quite a design feat. I do not know how beavers were born as dam builders, on empirical observational grounds, but I can see what they are doing and recognise the intelligence implied. GEM of TKIkairosfocus
December 9, 2011, 03:58 AM PDT
GD: Pardon, but information obviously embraces a spectrum of interconnected meanings. A good place to start is with how Shannon's metric of average info per symbol, H, is too often twisted into a caricature that is held to imply that it has nothing to do with information as an intelligent product. To see that this is plainly wrong, let us simply move the analysis forward until we come to the step where Shannon puts H to work in assessing carrying capacity [C] of a band-limited [B] Gaussian white noise channel:
C = B*log2(1 + S/N) . . . Eqn 1
See that ratio S/N? It is the ratio of signal power to noise power (commonly quoted as a log ratio, in dB). That is, it is premised on the insight that we can and routinely do recognise objectively and distinguish signals and noise, and can quantitatively measure the power levels involved in each, separately. (In a digital system, the Eye Diagram/'scope display is a useful point of reference on this.) Now, information theory is unquestionably a scientific endeavour, and BTW, one closely connected to thermodynamics. So, we have here a relevant context in which AN INFERENCE TO DESIGN IS FOUNDATIONAL TO THE SCIENTIFIC AND TECHNOLOGICAL PRAXIS. That is why in the note that is always linked from my handle, and which I have linked above, Section A, I noted:
To quantify the . . . definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the "Shannon sense" - never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver: I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1 This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that: I total = Ii + Ij . . . Eqn 2
In sum, we cannot properly drive a wedge between information in the Shannon sense and the issue of an intelligent signal and distinguishing that from noise. So also, the inference to design is foundational to information theory. Of course, the metric used to quantify information has the odd property that, since it addresses redundancy in real codes that leads to a distribution of symbols that as a rule will not be flat-random, a flat-random string of symbols, which has no correlations between symbols, will have a maximal value of the Hartley or Shannon metrics, for strings of a given length. This oddity has been turned into a grand metaphysical story, but in reality is simply a consequence of how we have chosen to measure quantity of information in a string of symbols. Going further, we can see that we have a further relevant feature of information: it is typically used to carry out a purposeful function. That may be linguistic, as in posts in this thread. It may be prescriptive/algorithmic, as in object code for a computer. In either case, it intelligently restricts us to a narrow zone of relevant function (T) within a sea of possible configurations for a string of the given length [W]; i.e. strings of symbols are confined by rules of vocabulary and the relevant syntax and grammar, then also the semiotics of meaning. Once such a functional -- something that can often be directly observed or even measured -- string that is observed [E] gets to be of sufficient length that we observe its confinement to a narrow and sufficiently isolated zone of meaningful function, T, in a sufficiently wide space of configs W, we have very good reason on results of sampling theory to infer that the best explanation of that string E is design. 
You may wish to work through my reasoning on that here; I will simply sum up for the solar system scale and cosmological scale:

Solar system (500 bits): Chi_500 = I*S - 500, bits beyond the threshold
Observed cosmos (1,000 bits): Chi_1,000 = I*S - 1,000, bits beyond the threshold

In each case the premise is that a blind search of the config space -- inherently a high contingency setting -- driven by chance and/or necessity without intelligent guidance, will be so overwhelmed by the scope of possibilities that to near certainty, the sample will tend to pick up what is typical, not what is atypical. That is, the issue is operational implausibility on sampling theory: some haystacks are simply too large to expect to find a needle in, on accessible blind search resources. Of course, in relevant contexts such as origin of life or of major body plans, one may assert, assume or imply that the cosmos programs the search and cuts down the space dramatically. That is tantamount to an unacknowledged inference to design of the cosmos as a system that will develop life in target sites, then elaborate that life into complex and intelligent forms. At this stage, of course, I have very little confidence that this will be persuasive for those committed ahead of any evidence to a priori evolutionary materialism, that they need to reconsider their thinking in light of the unwelcome evidence they have locked out, as I have pointed out here. All that likely response goes to is that it shows the circle of question-begging, deeply ideologised philosophical thought that has donned a lab coat and seeks to inappropriately redefine science in its materialistic interests -- regardless of the damage done to science when in such hands it ceases to be an evidence-led search for the empirically anchored truth about our world and its roots. But, more reasonable onlookers will be able to see what is really going on for themselves. GEM of TKIkairosfocus
December 9, 2011, 03:48 AM PDT
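The three quantities in the comment above can be put side by side in a few lines of Python. This is an illustrative calculation only: the function names and the numbers are made up for this sketch, not taken from the linked note, and the capacity formula is the Shannon-Hartley form with the log made explicit:

```python
import math

def channel_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon-Hartley capacity of a band-limited Gaussian channel, in bits/s:
    C = B * log2(1 + S/N), with S/N the linear power ratio."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

def surprisal_bits(p_prior, p_posterior):
    """Information of a received symbol per the quoted Eqn 1: I = log2(d_j / p_j),
    comparing the a posteriori probability with the a priori one."""
    return math.log2(p_posterior / p_prior)

def chi_500(info_bits, specificity):
    """The solar-system metric Chi_500 = I*S - 500: bits beyond the 500-bit threshold.
    A positive value is what licenses the design inference."""
    return info_bits * specificity - 500

# A 3 kHz channel at a signal-to-noise power ratio of 1000 (30 dB):
cap = channel_capacity(3000, 1000, 1)   # roughly 29,900 bits/s
# A symbol with a priori probability 1/8, received correctly every time:
i = surprisal_bits(1/8, 1.0)            # 3.0 bits of surprise
# A 250-bit functionally specific string (S = 1) falls short of the threshold:
chi = chi_500(250, 1)                   # -250, so no design inference at solar-system scope
```

Note how the last line behaves: any functional string shorter than 500 bits gives a negative Chi_500, which matches the comment's point that the threshold is deliberately set so that only configurations far too isolated for blind search to find will trip it.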
Oops it seems you already did. Any being who has conscious subjective experiences which bear for him the connotations of meaning and purpose, and outputs those representations as forms in a material system. Can it be streamlined any more than that?englishmaninistanbul
December 9, 2011, 03:05 AM PDT
Oops. Read 26.2 first to understand the previous post (26.1.1)englishmaninistanbul
December 9, 2011, 03:04 AM PDT
And we are another step closer... Do you agree with the statement that ID does not posit supernatural causes? If that is the case, ID must be able to work within a materialist/determinist paradigm. By "materialistically acceptable" what I meant was that somebody working from the hypothesis of causal determinism should not be able to reject a definition of a design agent out of hand. Otherwise are we saying that ID incorporates an a priori denial of materialism? I hope you didn't think I meant to sniff at your definition of free will, your comments on that subject and indeed all your comments in this discussion so far seem very reasonable. My criticism derives entirely from my trying to play Devil's advocate. Would you care to frame an empirical definition of a conscious designer? You seem to be better at it than me.englishmaninistanbul
December 9, 2011, 03:02 AM PDT
englishmaninistanbul: My ideas: 1) Wrong. There is no epistemological need to have a "materialistically acceptable" definition of a design agent. Materialism is a philosophy, not a requirement for science. What we need is an empirical definition of a designer. My definition is fully empirical. You can avoid the "free will" part, if you prefer, and just define a designer as any being who has conscious subjective experiences which bear for him the connotations of meaning and purpose, and outputs those representations as forms in a material system. That definition is completely empirical, and based on the facts of subjective representations. There is no need for it to be "materialistically acceptable". If materialists are reasonable, and want to do science, they will accept it because it is empirical. If materialists are dogmatic, and stick to their ideology in spite of empirical reasons, they will reject it. That's fine with me. 2) We don't need it. See point 1. 3) I agree. Free will is a philosophical position. And so is materialistic determinism (if affirmed as a universal principle). But, as I said, the concept of free will is not necessary for the empirical definition of a conscious designer, although it is very useful for a more general philosophical theory of design. I apologize if I included it in my definition when I answered you; I just thought you might be interested in the concept.gpuccio
December 9, 2011, 01:43 AM PDT