Uncommon Descent Serving The Intelligent Design Community

FTR: Answering ES’ po-mo antics with the semantics of “function”


In recent days, objector ES has been twisting the concept of Wickensian functionally specific information-bearing complex organisation into post-modernist deconstructionist subjectivist pretzels, in order to obfuscate the plain inductive argument at the heart of the design inference and/or explanatory filter.

For example, consider these excerpts from the merry-go-round thread:

ES, 41: . . . If a conscious observer connects some observed object to some possible desired result which can be obtained using the object in a context, then we say that the conscious observer conceives of a function for that object . . . . In science, properties of the material just are, without purpose, because everybody knows purpose is subjective. Functionality comes in when you get engineerial, and then it’s more up to the “objective functionality” of the engineer than of the material . . .

KF, 42: When one puts together a complex composite such as a program or an electronic amplifier ckt or a watch or an auto engine or many other things, function is not projected to it by an observer. I wuk, or i nuh wuk, mon. Was that a bad starter motor, a run down battery, out of gas, dirty injector points and more. Was that a bug in syntax or in semantics. Was that a BJT miswired and fried, did you put in the wrong size load resistor so it sits in saturation when it was meant to be in the middle of the load line, did you put in an electrolytic cap the wrong way around, etc. Is this a heart attack triggered by a blood clot etc. Function is not a matter of imagination but observation. And you full well know that or should.

Joe, 44: Earth to E. Seigner- functionality, ie a function, is an OBSERVATION. We observe something performing some function and we investigate to try to figure out how it came to be the way it is. Within living organisms we observe functioning systems and subsystems. As for “information”, well with respect to biology ID uses the same definition that Crick provided decades ago. And we say it can be measured the same way Shannon said, decades ago.
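Joe's appeal to Shannon's measure can be made concrete. Here is a minimal sketch (mine, not from the thread; using the string's own symbol frequencies as the probability model is an assumption for illustration) of counting information content in bits:

```python
import math
from collections import Counter

def shannon_info_bits(s: str) -> float:
    """Total self-information of a string, in bits, using the
    string's own symbol frequencies as the probability model."""
    counts = Counter(s)
    n = len(s)
    return -sum(c * math.log2(c / n) for c in counts.values())

# A uniform 4-letter DNA alphabet carries 2 bits per base:
seq = "ACGT" * 25                      # 100 bases, equal composition
total_bits = shannon_info_bits(seq)    # 100 bases * 2 bits = 200 bits
```

Note that this is a capacity measure only; it says nothing by itself about whether the sequence is functional, which is the separate point FSCO/I addresses.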

ES, 46: To an observer it looks like cars take people to work and shopping. But most of the time cars stand in garage motionless, and sometimes they fail to start. If the observer is truly impartial, then it’s not up to him to say that the failure to start or mere standing is any less of the car’s function than the ability of being driven. The car’s function is what the car does and when the car fails to start then that’s what it does and this is its function. Of course this sounds silly, but it’s true . . .

BA, 48: It is clear to me now. You have drunk deeply from the post-modernist/constructivist Koolaid. Kairosfocus and gpuccio be advised — attempting to reason with such as E.Seigner is pointless.

Let’s first remind ourselves as to what the glorified common-sense design inference process actually does as an exercise in inductive, inference to the best current explanation on empirically observed evidence:

[Figure: the design inference explanatory filter flowchart]
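In outline, the filter's per-aspect decision logic can be written as a simple cascade. This is my own paraphrase only, not Dembski's formal apparatus; the 500-bit threshold is taken from the usual FSCO/I discussion, and the predicates are assumed inputs, not things computed here:

```python
def explanatory_filter(contingent: bool, info_bits: float, specified: bool,
                       threshold_bits: float = 500.0) -> str:
    """Cascade: necessity -> chance -> design.
    contingent  = the aspect is not fixed by lawlike natural regularity
    specified   = it matches an independently describable functional pattern
    info_bits   = information content of the observed configuration"""
    if not contingent:
        return "necessity"   # lawlike regularity suffices as explanation
    if info_bits < threshold_bits or not specified:
        return "chance"      # within plausible reach of blind search
    return "design"          # both complex and specified
```

The point of the cascade ordering is that design is only inferred after necessity and chance have each had first refusal.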

 

. . . and also, of the significance of Wickensian functionally specific, complex information and Orgellian informational specified complexity for a blind, needle in haystack search; as highlighted by Dembski et al:

[Figure: the definition of complex specified information (CSI)]

While we are at it, let us remind ourselves of what FSCO/I looks like in the form of functionally specific organisation in the technological world:

Fig 6: An exploded view of a classic ABU Cardinal, showing how functionality arises from a highly specific, tightly constrained complex arrangement of matched parts according to a “wiring diagram.” Such diagrams are objective (the FSCO/I on display here is certainly not “question-begging,” as some — astonishingly — are trying to suggest!), and if one is to build or fix such a reel successfully, s/he had better pay close heed. Taking one of these apart and shaking it in a shoe-box is guaranteed not to work to get the reel back together again. (That is, even the assembly of such a complex entity is functionally specific and prescriptive information-rich.)

 

. . . and, in the life of the cell, as the ribosome constructs a protein (which is a functionally specific string structure) based on the coded algorithmic information in the mRNA tape:

Step by step protein synthesis in action, in the ribosome, based on the sequence of codes in the mRNA control tape (Courtesy, Wikipedia and LadyofHats)

 

. . . not to mention, the parallel in the punched paper tape reader used for older computers and NC machines:

Punched paper tape, as used in older computers and numerically controlled machine tools (Courtesy Wiki & Siemens)

 

(When things have been strawmannised and twisted into pretzels, it helps to straighten them back out again. Where also, a picture is worth a thousand words.)

However, Mr Arrington has a point.

While those caught up in po mo deconstructionist schemes are hardly likely to have a sudden change of heart on the strength of a mere blog comment or even a headlined post, the rest of us can draw lessons from what we see laid out before us. I therefore crave the indulgence of a promoted comment to headlined status, including a snippet of intensely personal history of my homeland. For, comments get buried and forgotten within hours, but a headlined post is for the record:

_________________

KF, 73: >>ES:

It may be entertaining to play semantics games with terms like function, but that simply reveals that your problem is not with science, it is with common sense reality tracing to some of the sillier bits of post-modernist radical subjectivism and deconstructionism.

Perhaps it has not dawned on you that survival of the fittest or hill-climbing algorithms or natural selection or the like pivot on the objectivity of function. Have you gone to Panda’s Thumb, TSZ, ATBC or the like Darwinist agitator sites to challenge the core concepts of evolution based on differential reproductive success pivoting on functional differences of life-forms? I safely bet not; you are reserving such talking-points for those you object to, regardless of inconsistencies or outright incoherence.

[Ill-]Logic with a swivel.

Patently, revealingly, sadly, you have indulged in incoherent selective hyperskepticism.

And if you genuinely imagine that a stalled car with a dead engine, or a leaky roof, or a crashed computer, or a PA system that distorts sounds horribly are functionally distinct as a mere matter of subjective opinion, your problem is a breach of common sense.

Do you — or a significant other — have a mechanic? Are you a shade-tree mechanic? Do you have even one tool for maintenance? Do you recognise the difference between sugar, salt and arsenic in your cup of coffee? Between an effective prescription correctly filled and faithfully carried out when you get sick and a breakdown of that process? Etc?

I put it to you that you cannot and do not live consistent with your Lit class seminar-room talking points.

And, your evasive resort to clinging to such absurdities to obfuscate the issue of functionally specific, complex organisation and associated information, speaks loudest volumes for the astute onlooker.

Own-goal, E-S.

The bottom-line of the behaviour of several objectors over the past few days, speaks inadvertent volumes on the real balance on the merits of the core design theory contention that there are such things as reliable empirical markers — such as Wickensian FSCO/I — that are strong signs of design as key causal process.

But, many are so wedded to the totalising metanarrative of a priori Lewontinian evolutionary materialism that they refuse to heed the 2350 year old warning posed by Plato on where cynical radical relativism, amorality opening the door to might makes right nihilism and ruthless factions points to for a civilisation. Refusing to learn the hard-bought, paid for in blood lessons of history, they threaten to mislead our civilisation into yet another predictably futile and bloody march of folly. As the ghosts of 100 million victims of such demonically wicked deceptions over the past century warn us.

The folly on the march in our day is so arrogantly stubborn that it refuses to learn living-memory history or the history passed on first hand to our grandparents.

Here is Sophia (personification of Wisdom), in the voice of Solomon echoing hard-bought, civil war triggered lessons in Israel c 1,000 BC:

Prov 1:20 Wisdom [Gk, Sophia] cries aloud in the street,
in the markets she raises her voice;
21 at the head of the noisy streets she cries out;
at the entrance of the city gates she speaks:
22 “How long, O simple ones, will you love being simple?
How long will scoffers delight in their scoffing
and fools hate knowledge?
23 If you turn at my reproof,
behold, I will pour out my spirit to you;
I will make my words known to you.
24 Because I have called and you refused to listen,
have stretched out my hand and no one has heeded,
25 because you have ignored all my counsel
and would have none of my reproof,
26 I also will laugh at your calamity;
I will mock when terror strikes you,
27 when terror strikes you like a storm
and your calamity comes like a whirlwind,
when distress and anguish come upon you.
28 Then they will call upon me, but I will not answer;
they will seek me diligently but will not find me.
29 Because they hated knowledge
and did not choose the fear of the Lord,
30 would have none of my counsel
and despised all my reproof,
31 therefore they shall eat the fruit of their way,
and have their fill of their own devices.
32 For the simple are killed by their turning away,
and the complacency of fools destroys them;
33 but whoever listens to me will dwell secure
and will be at ease, without dread of disaster.”

A grim warning, bought at the price of a spoiled, wayward son who fomented disaffection and led rebellion triggering civil war and needless death and destruction, ending in his own death and that of many others.

Behind the Proverbs lies the anguished wailing of a father who had to fight a war with his son and in the end cried out, Oh Absalom, my son . . .

History sorts out the follies of literary excesses, if we fail to heed wisdom in good time.

Often, at the expense of a painful, bloody trail of woe and wailing that leads many mothers and fathers, widows and orphans to wail the loss of good men lost to the fight in the face of rampant folly.

But then, tragic history is written into my name, as George William Gordon’s farewell to his wife written moments before his unjust execution on sentence of a kangaroo court-martial, was carried out:

My beloved Wife, General Nelson has just been kind enough to inform me that the court-martial on Saturday last has ordered me to be hung, and that the sentence is to be executed in an hour hence; so that I shall be gone from this world of sin and sorrow.

I regret that my worldly affairs are so deranged; but now it cannot be helped. I do not deserve this sentence, for I never advised or took part in any insurrection. All I ever did was to recommend the people who complained to seek redress in a legitimate way; and if in this I erred, or have been misrepresented, I do not think I deserve the extreme sentence. It is, however, the will of my Heavenly Father that I should thus suffer in obeying his command to relieve the poor and needy, and to protect, as far as I was able, the oppressed. And glory be to his name; and I thank him that I suffer in such a cause. Glory be to God the Father of our Lord Jesus Christ; and I can say it is a great honour thus to suffer; for the servant cannot be greater than his Lord. I can now say with Paul, the aged, “The hour of my departure is at hand, and I am ready to be offered up. I have fought a good fight, I have kept the faith, and henceforth there is laid up for me a crown of righteousness, which the Lord, the righteous Judge shall give me.” Say to all friends, an affectionate farewell; and that they must not grieve for me, for I die innocently. Assure Mr. Airy and all others of the truth of this. Comfort your heart. I certainly little expected this. You must do the best you can, and the Lord will help you; and do not be ashamed of the death your poor husband will have suffered. The judges seemed against me, and from the rigid manner of the court I could not get in all the explanation I intended . . .

Deconstruct that, clever mocking scorners of the literary seminar room.

Deconstruct it in the presence of a weeping wife and mother and children mourning the shocking loss of a father and hero to ruthless show-trial injustice ending in judicial murder.

Murder that echoes the fate of one found innocent but sent to Golgotha because of ruthless folly-tricks in Jerusalem c. 30 AD.

(How ever so many fail to see the deep lesson about folly-tricks in the heart of the Gospel, escapes me. New Atheists and fellow travellers, when you indict the Christian Faith as the fountain-head of imagined injustice, remember the One who hung between thieves on a patently unjust sentence, having been bought at the price of a slave through a betrayer blinded by greed and folly. If you do not hear a cry for just government and common decency at the heart of the Gospel you would despise, you are not worth the name, literary scholar or educated person.)

And in so doing, learn a terrible, grim lesson of where your clever word games predictably end up in the hands of the ruthless.

For, much more than science is at stake in all of this.

GEM of TKI  >>

_________________

I trust that the astute onlooker will be inclined to indulge so personal a response, and will duly draw on the hard-bought lessons of history (and of my family story . . . ) as just outlined. END

PS, Sept 30: ES has been making heavy weather over the idea of a primitive tribe encountering a can opener for the first time and not understanding its function (which he then wishes to project as subjective):

A rotating cutter can opener in action

And, a modern development showing meshing serrated gears:

Modern rotary action can opener with meshing gears (Both images HT Wiki)

However, this is both incorrect and irrelevant to recognising, from the aspects of the can opener that exhibit FSCO/I, that it is designed:

1 –> Whether or not the primitive seeing an opener for the first time can recognise its purpose, the contrivance that integrates materials, forces of nature and components into a functioning whole exists and is embedded in how the opener is designed; the functionally specific, complex organisation for a purpose is there regardless of the observer.

2 –> Just by looking at the evident contrivance manifested in FSCO/I that is maximally unlikely to obtain by blind chance and mechanical necessity — as with the fishing reel above, the primitive is likely to perceive design.

3 –> The rotating gears with matched teeth set to couple together already imply highly precise artifice: building centred disks, cutting matching gearing, and mounting them on precisely separated, aligned axes with other connected parts demonstrates design to a reasonable onlooker.

4 –> The precisely, uniformly thick handles, joined at a pivot and reflecting rectangle-based shapes, would be equally demonstrative.

5 –> Where, actual intended function has not been brought to bear. (And note, we see here again the implicit demand that the design inference be a universal decoder/ algorithm identifier. That is a case of setting up and knocking over a strawman, where . . .  just on theory of computation, such a universal decoder/detector is utterly implausible. The point of the design inference is that on inductively confirmed reliable signs such as FSCO/I we may confidently identify design — purposefully directed contingency or contrivance — as key causal factor. It seems that any number of red herrings are led away from this point to convenient strawman caricatures that are then knocked over as though the actual point has been effectively answered on the merits. It has not.)

6 –> But of course, that functionality dependent on specific components and an arrangement otherwise vanishingly improbable reeks of design, and the function can be readily demonstrated, as the patent diagram shows.

7 –> Where, again, it must be underscored that, per my comment 49 to ES:

[the] ultra-modernist, ugly- gulch- between- the- inner- world- and- the- outer- one [of] sophomorised Kantianism fails and needs to be replaced with a sounder view. As F H Bradley pointed out over a century ago, to pretend to know that the external world is un-knowable due to the nature of subjectivity . . . the denial of objective knowledge . . . is itself a claim to objective knowledge of the external world and a very strong one too. Which therefore is self-referentially incoherent. Instead, it is wiser to follow Josiah Royce’s point that we know that error exists, undeniably and self evidently. Thus, there are certain points of objective knowledge that are firm, that ground that objective truth, warrant and knowledge exist, and that schemes of thought that deny or denigrate such fail. Including post modernism, so called. Of course, that we know that error exists means we need to be careful and conservative in knowledge claims, but the design inference is already that: it is explicitly inductive on inference to best explanation on observed patterns and acknowledges the limitations of inductive knowledge including scientific knowledge. [A Po-Mo] selectively hyperskeptical focus on the design inference, while apparently ignoring the effect of that same logic on science as a whole, on history, on common sense reality and on reason itself, simply multiplies the above by highlighting the double standard on warrant.

8 –> In short, we have here a case of clinging to an ideological absurdity in the teeth of accessible, well-warranted correction.

Comments
GP, did you know that the world of information retrieval uses exactly the same concepts with different names? It drives me crazy. Mark
Mark Frank
October 15, 2014, 05:31 AM PDT
gpuccio: Thank you for the explanation! If you made a mistake, I won't tell you about it, because my ignorance won't allow me to notice it! :) BTW, now I recall someone telling me a few months ago that I could not learn anything on this site. :)
Dionisio
October 15, 2014, 04:44 AM PDT
Dionisio: Yes, it's Positive Predictive Value, the ratio: True Positives / Total Test Positives. Specificity is the ratio: True Negatives / Condition Negatives. Indeed, the number of parameters that can be derived from the basic 2x2 table is astounding, and to make things worse, many of them have more than one acronym! So, we have:

Sensitivity (also called True Positive Rate)
Specificity (also called True Negative Rate)
Precision (also called Positive Predictive Value)
Negative Predictive Value
Prevalence
Accuracy
False Positive Rate (1 - Specificity)

And so on... (I hope I made no mistakes!) :)
gpuccio
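gpuccio's list can be pinned down against the 2x2 table itself. A minimal sketch (function and variable names are mine, not from the thread):

```python
def confusion_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening-test parameters derived from a 2x2 table:
    tp/fp = test-positive with/without the condition,
    fn/tn = test-negative with/without the condition."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # precision / positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "prevalence": (tp + fn) / total,
        "accuracy": (tp + tn) / total,
        "fpr": fp / (fp + tn),           # 1 - specificity
    }
```

Each acronym in the comment above is just one of these ratios under another name, which is why the terminology multiplies so easily.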
October 15, 2014, 04:12 AM PDT
gpuccio: Sorry for asking such a dumb question in my previous post, but I don't recall seeing that term when I worked on engineering design software development for a number of years. Your medical field and biological science have quite a few acronyms that make my learning much slower :)
Dionisio
October 15, 2014, 02:44 AM PDT
gpuccio: What does PPV stand for in this case? Positive Predictive Value? The probability that a person with a positive test result has, or will get, a given disease? At least that's what I found in a dictionary. Thank you.
Dionisio
October 15, 2014, 02:35 AM PDT
Mark: Yes, you are right. I should have said very high PPV. Thank you for the correction. :)
gpuccio
October 15, 2014, 01:37 AM PDT
GP, I just noticed something you wrote which worries me, because if you really believe it you will be giving wrong diagnoses. This is not an attempt to restart our debate.
If you have a diagnosis by some test with very high specificity, your diagnosis is rather certain.
This is just not true. You must also know the sensitivity and base rate. This is not open to dispute; it is a mathematical certainty. Please reassure me you know this and that this was a typo.
Mark Frank
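Mark's correction is easy to demonstrate numerically: at a low base rate, PPV collapses no matter how good the specificity looks on its own. A sketch with illustrative figures (the numbers are mine, not from the thread):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value by Bayes' rule:
    P(condition | positive) = sens*prev / (sens*prev + (1-spec)*(1-prev))."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# 99% sensitivity and 99% specificity, but a 1-in-1000 base rate:
# roughly nine out of ten positives are still false alarms.
rare_case = ppv(sensitivity=0.99, specificity=0.99, prevalence=0.001)
```

The same test applied at 50% prevalence gives a PPV of 0.99, which is exactly why specificity alone cannot make a diagnosis "rather certain".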
October 14, 2014, 11:12 PM PDT
gpuccio, Thank you, as always. And to gpuccio and Mark, such a gentlemanly exchange. Kudos.
Mung
October 14, 2014, 04:43 PM PDT
Mark: OK, we just have to differ. :)
gpuccio
October 14, 2014, 02:21 PM PDT
Well, GP, I guess we just have to differ. I find your opinions about this quite bizarre, but I see no value in pursuing it. If you are interested, here is a classic paper on the problems with p-values. There are many such papers.
Mark Frank
October 14, 2014, 02:19 PM PDT
Mark: "Well that is where we disagree. Suppose I am a psychologist investigating the correlation between gender and reading speed at age 10. I take a sample of 10 year old boys and a sample of 10 year old girls and measure reading speed. My H0 is that reading speed is independent from the variable gender. What is this H0 meant to be explaining? All I want to do is find out if it is true!" If I observe no difference which is relevant, indeed I need no H0. I have just observed no effect. (Please note that the concept of "relevant" is completely different from the concept of "significant". A difference will always be there. But a researcher is always interested only in effects of a minimum size.) If I observe an effect, I want to know if that effect can be generalized to a general population (IOWs, if it has predictive power in another set of cases) or if it is just a random fluctuation due to the sampling. That's why I need H0: to decide if the observed effect can be explained by the null hypothesis, and therefore if I have credible reasons to reject it and suggest an alternative explanation. So, I try to explain the observed effect by H0, and compute its probability under the null hypothesis. "Are you going to say to her 'you have breast cancer as an explanation of the mammogram, but not as an explanation of the biopsy'" I simply don't think you have been well taught (just my humble opinion). Diagnostic tests can have different predictive value, sensitivity and specificity. Those parameters are usually measured in controlled conditions, and then applied in practice. A diagnosis is only the empirical application of the testing tools available, with awareness of their sensitivity and specificity, and then a personal judgement that must be communicated to the patient in the most realistic way. You don't give a patient a p value. At most, you give him or her a reasonable assessment of the probabilities of a disease, or of a healing. That's another thing.
If you have a diagnosis by some test with very high specificity, your diagnosis is rather certain. Otherwise, you have to use other tools. But you never compute a p value in an individual case. You do that when you are researching, and trying to establish the sensitivity and specificity of your diagnostic tools. A thing is certainly true or false. Our conviction about its truth is anyway a theory, more or less based on facts. If I conclude that a string was designed, I must do that because I reject the H0 for some appropriate observed effect, and not for others. As some effects measured in the object can be compatible with a random origin, while others are not, even a single effect which is strongly incompatible with a random explanation will be enough to reject a random origin of the object. That is not in contradiction with the fact that other observed aspects of the object are compatible with a random explanation. Really, I don't see your problems. Obviously, it is the task of the researcher to understand what effect can be useful to reject a random origin of the object (if, like in ID, that is the purpose). dFSCI is a tool which has been chosen exactly for that reason: because it is appropriate, it has 100% specificity as a design detection tool. All your "objections" have only one meaning: science is not mere statistics. It requires a correct, reasonable methodology. The methodology of ID is absolutely correct. "In the previous sentence you had one H0 and two measurements. In this sentence you have two H0s. I understand the first sentence but not this one." It's because I mean, as hypothesis, the whole cognitive context: this effect I observe can be explained by a random variation in the system. If you prefer to call hypothesis the simple statement that in the system there is only random variation, then you have one hypothesis with which you try to explain two different observed effects. Your choice.
I maintain that a correct formulation of H0 is only in reference to an observed effect. But what is the difference, in the end? What problems do you find with my definition and procedure, except that it is a little different from what you are accustomed to think? "Why? Are you not prepared to reject the hypothesis that beta lactamase was generated randomly? Will you tell the woman that to come to a conclusion about her breast cancer would not be science, it would just be prophecy?" I reject the hypothesis that beta lactamase was generated randomly only because of its functional information. So, I am rejecting an H0 connected to a specific effect. I see no cognitive utility in separating the two concepts. I have no scientific independent truths, only scientific theories justified by observed facts and correct interpretation. I try to keep my categories pure. A diagnosis to a person is another matter altogether. It is an operative communication, which has its deontology. It is not a matter of p values, but of honest communication of the degree of certainty that the existing tools can give us, according to existing research and knowledge, in that individual case. IOWs, it is an application of existing predictive models which have already been tested before. I am afraid that you confuse scientific principles with their practical application. Medicine is a practical problem, not only an abstract science. "If one aspect is incompatible with random origin then it doesn’t have a random origin – period. It doesn’t matter that other aspects are compatible." Or better, it is reasonable to reject the random origin, because of the single aspect which was not compatible. Again, what is the problem? Have I ever said anything different? My point is simple: a correct application of hypothesis testing, with a correct methodology and understanding of what we are doing, generates no special problems, and works very well. In medicine like in ID. I have given very specific examples.
I still don't understand where you believe that my procedure would generate a problem, or bring me to a wrong conclusion. I have clearly shown that you can reject a correctly formulated H0 if you observe some well defined effect which cannot be explained by random variation, and that has nothing to do with any detailed formulation of H1 or H2, if not as the simple logical opposite of H0: "This effect cannot be explained by random variation". You started all this by saying that if I do not have an explicit H1, I cannot reject H0, and that Fisherian hypothesis testing is a problem. I think you have shown no reasons for either of those statements.
gpuccio
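The first of gpuccio's two H0s (that base composition is consistent with a uniform random source) can be run mechanically. A minimal sketch, mine rather than anything specified in the thread; 7.815 is the standard chi-square critical value for 3 degrees of freedom at the 5% level:

```python
from collections import Counter

def composition_chi2(seq: str, alphabet: str = "ACGT") -> float:
    """Chi-square goodness-of-fit statistic for H0: each base
    occurs with equal probability 1/len(alphabet)."""
    counts = Counter(seq)
    expected = len(seq) / len(alphabet)
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in alphabet)

CRIT_3DF_05 = 7.815  # chi-square critical value, df = 3, alpha = 0.05

def reject_uniform_composition(seq: str) -> bool:
    """Reject H0 (uniform composition) at the 5% level."""
    return composition_chi2(seq) > CRIT_3DF_05
```

Note this tests only the composition aspect; as the exchange itself stresses, failing to reject this H0 says nothing about a different H0 framed on functional information.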
October 14, 2014, 09:24 AM PDT
GP, I am finding your ideas on hypothesis testing utterly bizarre, but I will make a couple more attempts to communicate.
This is an hypothesis, not a conclusion.
And if it is rejected, then the conclusion is that it is not true. Right?
The same term “the same population” can have a lot of meanings. The correct form is “populations in which the variable we are measuring is independent from the variable by which we group the cases”.
No problem with that. I was using a shorthand.
We always reject H0 as an explanation of something. We are not trying to decide if H0 is absolutely true or not.
Well that is where we disagree. Suppose I am a psychologist investigating the correlation between gender and reading speed at age 10. I take a sample of 10 year old boys and a sample of 10 year old girls and measure reading speed. My H0 is that reading speed is independent from the variable gender. What is this H0 meant to be explaining? All I want to do is find out if it is true!
Our H0 will be: The relative percentage of nucleotides in this DNA string can be explained by random mutation.
Well of course you are free to believe this but it flies in the face of everything I have ever learned about hypothesis testing. As I have been taught (and I think rather well taught) there is a difference between the hypothesis – which is simply something that may be true or false – and the range of test statistics that may be used to confirm or reject it. So if the hypothesis is that the DNA string was created by random mutation then that could be tested by measuring the relative percentage of nucleotides – but that is quite crude – it would fail to reject the hypothesis if the nucleotides were in equal proportions but all in a regular pattern. A better test might be something based on compression. In statistical terms that test would be more powerful. But there is nothing here about testing the hypothesis as an explanation of the KC complexity. The process is to decide on the hypothesis and then select the test. It may help to realise that tests for medical conditions are a form of hypothesis testing. The hypothesis is the condition. The test statistic is the result of whatever test you use. Suppose the hypothesis were – this woman has breast cancer. There are a number of tests you could use with varying sensitivity and specificity (and costs). If you choose a mammogram and get a negative result that will have one p-value; if you choose a biopsy you will get another. Are you going to say to her “you have breast cancer as an explanation of the mammogram, but not as an explanation of the biopsy”?
You should certainly understand that, if we do not reject H0, we are not demonstrating that it is true. We are simply stating that H0 remains a good explanation for the observed effect. Our conclusion is about what we can infer about reality, not about absolute reality. That is the nature of empirical science.
Well of course the conclusion is not certain. And I understand very well that classical hypothesis testing can only rule out H0 – it can never confirm H0 – that is one of its weaknesses. But we are still talking about a real thing that might be true or false. The woman has breast cancer or she does not.
There is no contradiction in saying that H0 cannot be rejected as an explanation of the first type of measurement, while it can be rejected as an explanation of the second type of measurement.
Of course – and that is why we choose the measurement that gives us the best combination of sensitivity and specificity.
We are testing our hypotheses, and we have two different hypotheses, regarding two different aspects of the same object.
In the previous sentence you had one H0 and two measurements. In this sentence you have two H0s. I understand the first sentence but not this one.
Nowhere are we stating: this string was not generated randomly. That would not be science, it would just be prophecy.
Why? Are you not prepared to reject the hypothesis that beta lactamase was generated randomly? Will you tell the woman that to come to a conclusion about her breast cancer would not be science, it would just be prophecy?
What we are saying is: given this aspect of the string, it is not credible that it was generated randomly. Other aspects of the string are perfectly compatible with a random origin.
If one aspect is incompatible with random origin then it doesn't have a random origin – period. It doesn't matter that other aspects are compatible.
Mark Frank
October 14, 2014, 07:04 AM PDT
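Mark's aside that "a better test might be something based on compression" can be sketched with the standard library alone. This is a minimal illustration of the idea, not anyone's published procedure; `compression_ratio` is a helper name invented here. A string with perfectly equal nucleotide proportions but a regular pattern passes a composition test yet compresses dramatically, while a scrambled string with the same composition does not:

```python
import random
import zlib

def compression_ratio(seq: str) -> float:
    """Compressed size / original size; uniform ACGT data floors near
    2 bits per base (ratio ~0.25), while regular patterns go far lower."""
    raw = seq.encode("ascii")
    return len(zlib.compress(raw, 9)) / len(raw)

# Equal nucleotide proportions, but highly ordered: a composition test
# would not reject a random-origin H0, while a compression test would.
patterned = "ACGT" * 1000

# A reproducible stand-in for a "random" string with the same composition.
rng = random.Random(42)
scrambled = "".join(rng.choice("ACGT") for _ in range(4000))

print(compression_ratio(patterned))   # compresses heavily
print(compression_ratio(scrambled))   # compresses far less
```

In statistical terms this is Mark's "more powerful" test: it distinguishes two strings that the crude composition test treats as identical.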
Mark: No.

A) the two samples are drawn from the same population

This is a hypothesis, not a conclusion. The term "the same population" can have a lot of meanings. The correct form is "populations in which the variable we are measuring is independent from the variable by which we group the cases". In each case, you have to define very precisely: a) H0; b) the effect that H0 is supposed to explain. We always reject H0 as an explanation of something. We are not trying to decide if H0 is absolutely true or not.

Data are measurements made on objects. So, let's say that our string is the object. We can measure different things in it. Let's say that we measure the relative percentage of nucleotides. Our H0 will be: the relative percentage of nucleotides in this DNA string can be explained by random mutation. That is our hypothesis. And, very likely, it cannot be rejected. You should certainly understand that, if we do not reject H0, we are not demonstrating that it is true. We are simply stating that H0 remains a good explanation for the observed effect. Our conclusion is about what we can infer about reality, not about absolute reality. That is the nature of empirical science.

Let's say, instead, that in the same string we measure the functional information. Our H0 will be: the functional information for our defined function in this DNA string can be explained by random mutation. That is our hypothesis. And, if our measurements allow it, it can be rejected.

So, we have the same string, and we measure two different properties of it. And we test two different hypotheses, each of which assumes a random origin of the string, but to explain two different measured data. What is the problem? It's the same case as the marble sculpture: same object, different aspects of the object, different data, different hypotheses to test.
There is no contradiction in saying that H0 cannot be rejected as an explanation of the first type of measurement, while it can be rejected as an explanation of the second type of measurement. We are testing our hypotheses, and we have two different hypotheses, regarding two different aspects of the same object.

Nowhere are we stating: this string was not generated randomly. That would not be science, it would just be prophecy. What we are saying is: given this aspect of the string, it is not credible that it was generated randomly. Other aspects of the string are perfectly compatible with a random origin.

There is absolutely no contradiction in all this process of cognition. The only problem would be if the same aspects (data to be explained) were used to reject and not reject the same hypothesis about them at the same time. That would be a logical contradiction. But that is not the case. Not in ID, at least.
gpuccio
October 14, 2014, 05:14 AM PDT
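gpuccio's first H0 above (the relative percentage of nucleotides can be explained by random mutation) is an ordinary goodness-of-fit test. A minimal sketch, assuming a uniform expected composition under H0; `chi_square_uniform` is a helper named here for illustration:

```python
from collections import Counter

def chi_square_uniform(seq: str) -> float:
    """Chi-square statistic against H0: all four nucleotides equally frequent."""
    counts = Counter(seq)
    expected = len(seq) / 4
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in "ACGT")

CHI2_CRIT_3DF_05 = 7.815  # critical value, 3 degrees of freedom, alpha = 0.05

# A perfectly balanced but highly ordered string: the composition test
# fails to reject H0, exactly as gpuccio predicts for this measurement,
# even though other measurements of the same string might reject it.
stat = chi_square_uniform("ACGT" * 250)
print(stat, "reject H0" if stat > CHI2_CRIT_3DF_05 else "fail to reject H0")
```

The point of the sketch: the same string can fail to reject one H0 (composition) while a different measurement of it supports rejecting another.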
Why is Mark talking about hypothesis testing when his position doesn't have any testable hypotheses?
Joe
October 14, 2014, 03:53 AM PDT
GP (cont): I should have addressed your example of the marble statue.
The data are the same (the sculpture). The questions are different. We have two different H0s: both assume a random origin of what we observe, but they refer to different properties of the same object. The answers are different. In one case H0 is rejected, in the other case it is not.
This is silly. Clearly the colours and the shape of the statue are different data. They may be data about the same object but they are not the same observations. There are different possible H0s, but they shouldn't be phrased in terms of what they explain. The H0s might be:

A) The shape of this statue is the result of natural weathering
B) The colours in this statue are the result of the natural colour of the marble used

Each of these H0s might be used to explain different things depending on the interests of the person looking at the data. (A) might be offered as an explanation as to why there is an arm missing, or why the face looks very much like President Kennedy. The p-value for an arm missing is quite high. The p-value for looking like President Kennedy is rather low. So does the evidence for the shape being the result of natural processes depend not just on the data but on the interest of the person examining the data?
Mark Frank
October 14, 2014, 02:41 AM PDT
GP

But H0 hypotheses are about what happened – not what they explain. Examples of H0 might be:

A) the two samples are drawn from the same population
B) this DNA string was created by random mutation

Different people can have different interests in what H0 explains. Take example A: one person might be interested in whether H0 explains why sample A has a higher mean than sample B. Another person might be interested in whether H0 explains why sample A's mean is almost exactly one standard deviation from sample B's. One might have a high p-value, the other a low p-value. So does their different interest mean that one of them can rationally conclude that the two samples are from different populations while the other cannot?
Mark Frank
October 14, 2014, 02:11 AM PDT
Mark: I really can't see what the problem is. If you have a set of data, and you ask different questions about them, it's obvious that the answers will be different. Why is that a problem? H0 must be formulated as a specific hypothesis, a specific question. In the case of design detection, the question is: can the functional complexity we observe be explained credibly by random variation in the system?

Now, let's say you have a very detailed, unpainted marble sculpture. You ask: "Can the form of this object be explained by random variation in a natural system?" Well, if the sculpture is complex enough, the answer will be: no. And we will reject H0. Now you ask: "Can the colours and patterns on the surface of this object be explained by random variation in a natural system?" The answer is: yes. The marble colours and patterns can certainly be explained that way.

The data are the same (the sculpture). The questions are different. We have two different H0s: both assume a random origin of what we observe, but they refer to different properties of the same object. The answers are different. In one case H0 is rejected, in the other case it is not. What is the problem?

A "conclusion" is not an absolute, abstract entity, as you seem to assume. A conclusion is a specific process of our mind, a meaning we derive from data, an answer to some specific question. We can have different conclusions about the same data, provided that they are not logically incompatible. Again, where is the problem?
gpuccio
October 14, 2014, 01:51 AM PDT
GP
In all cases, we reject H0 as an answer to a specific question (which has nothing to do with any need to have an alternative H1).
Think about this. It means that given the same data but a different question, we might reject H0 in one case and not in another. So the conclusion depends not on the data but on the interests of the experimenter! (This, incidentally, is one of the acknowledged problems with classical hypothesis testing.)
Mark Frank
October 13, 2014, 11:21 PM PDT
HeKS: This discussion started as an answer to you. Are you following it? Your contribution would certainly be very welcome!
gpuccio
October 13, 2014, 11:32 AM PDT
Mark:

a) All the complex specifications we observe are the result of design – except biological molecules, which are the object of our debate. Please, offer any counter-example.

b) Complex function is not something we find in nature. We find it only in designed things and in biological molecules and structures. Nowhere else. Therefore, observing a complex function is a special observation, and it is perfectly normal that we try to explain why we observe it. Random variation can generate apparent meaning and function, which is not intentional and is not designed. The purpose of design detection is to differentiate between those forms of pseudo-function (a true functionality in an object which was never intended for a function by some designer) and truly designed things. Now, we cannot do that if the function is simple. Random variation can generate simple pseudo-functionalities. But we can do that if the function is complex. Random variation cannot do that. Again, that is first of all an empirical observation.

So, we reject H0 because our question is very specific: "Can random variation do that?" And the answer is simple: "No, because that result is too unlikely, and, while logically possible, it will never be empirically observed." IOWs, we reject H0 as a credible answer to our question. There is no need of any book which legitimates that procedure. The procedure is the same as in classic hypothesis testing, but its application to design detection is specific to ID theory. And, as you know, ID theory is not exactly popular in our academic world. No book will tell you that you can safely reject the random origin of complex functional information, because that means validating ID theory. I need no book to believe what is evidently true. The proof of how true the theory is, is that you cannot provide any true example of complex functional information generated in a random system.

IOWs, you cannot offer any example of complex language, or functional software, generated randomly. So, what can you do? You do your best, and try to "play tricks" (in good faith, in good faith) with the definition of functional information. For example, your last example tries to renovate, in different form, the only objection that you made when I met the "challenge" of applying my procedure to any example offered by our friends at TSZ. Both are wrong arguments. This last example is even improper methodologically. You continue to treat the rejection of H0 as though it were an absolute. No. In all cases, we reject H0 as an answer to a specific question (which has nothing to do with any need to have an alternative H1).

So, in the RCT example, H0 is simply that there is no difference between the drug group and the placebo group. So, we reject H0 if a significant difference is observed. We are not interested, here, in peculiarities of the number which measures the difference in means. You seem really confused about that.

Now, let's say instead that our experiment has the purpose of answering the question: can a random difference of means be an indirect measure of the president's birthday in seconds since 1900 (or of any other specified set of data)? To that question, it is easy to find a negative answer by an experiment, and so reject the H0. Indeed, what will happen is that you will observe no similarity between the value of the difference of means and the birthdays, except for rare random correspondences, perfectly compatible with random variation of random values.

But again, if you take any sequence of random digits, and just google it to see if there is any data on the internet which is similar to it, you can probably observe some result. It is simple: again, you are comparing a post-specification with all the available data in a big, very big database. That's the probability you have to check. You get a series of digits, and you compare it to a database which is very huge. The scenario is similar to a search on BLASTp. It is possible to compute a probability for the H0 hypothesis, but you have to compute it well.

Even so, I think that if you use a long enough series of random digits (let's say, just to be safe, 500 decimal digits) you will never find a perfect correspondence, not even on the Internet, not even in the universe. 10^500 (about 1660 bits) is really a big number. Big enough. Dembski's UPB for the universe is 500 bits. A big number, isn't it? Not so big, after all. 1660 bits is more or less the functional complexity in the alpha subunit of ATP synthase (300-400 perfectly conserved AAs from LUCA to humans)! Any comments?
gpuccio
October 13, 2014, 10:28 AM PDT
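The "10^500 in bits" figure above is easy to check: log2(10^500) = 500 · log2(10), which is nearer 1661 bits than 1600. A quick arithmetic verification:

```python
import math

# 10^500 expressed in bits: log2(10^500) = 500 * log2(10) ≈ 1661 bits.
bits = 500 * math.log2(10)
print(round(bits))

# Dembski's universal probability bound, by contrast, is 500 bits,
# i.e. 2^500 possibilities, roughly 3.3e150.
print(f"{2.0 ** 500:.2e}")
```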
GP It is not that I do not understand. It is that I did not explain myself clearly.
But the point is: there are not exactly a lot of complex specifications, of complex functions. IOWs, of specifications or functions that generate an extremely small target space.
My point is twofold:

How do you know there are very few complex specifications? There are infinitely many specifications out there, and in general it is not obvious given a string whether it conforms to a specification.

More importantly – why is this a reason for rejecting H0? I don't see it in any textbook on hypothesis testing. If your principle were adopted generally then we should reject H0 whenever the results conform to a complex specification. Imagine an RCT of a new drug (double blind with a placebo). The experimenters notice that the difference in means between drug group and control group, while small, is the president's birthday in seconds since 1900. It meets a complex specification. The p-value is tiny and they can confidently reject H0. The medicine is clearly effective! The potential for pharma companies wanting to get their drugs to market is enormous.
Mark Frank
October 13, 2014, 09:18 AM PDT
Mark: No. Again, you do not understand. My point is, as explained, that any specification which generates a binary partition in the search space such that the probability of the target space is extremely low is a reason to reject H0, if we observe exactly a string which is part of that tiny subset. I have given the example of a highly ordered state: all binary strings whose bits have the same value. I have given the example of specifications based on meaning (Shakespeare's sonnet), and on function (software, machines, proteins). The concept is the same.

The error you make is that you consider only the specification. That is really wrong, and I don't understand how you, who have been discussing ID for years, can still make that error. The point is: complex specifications. You are right: there are a lot of possible specifications. Almost any string can be specified in some way. And there are a lot of potential functions. Almost any object can be used for some purpose. But the point is: there are not exactly a lot of complex specifications, of complex functions. IOWs, of specifications or functions that generate an extremely small target space. A highly ordered string is an example (if long enough). Meaningful and functional strings are another example (if long enough and if the function requires a sufficient number of specific functional bits).

The point is: that kind of specification is only observed as the outcome of necessity or design, not of random variation (the probability is too low, and H0 can be safely rejected). A highly ordered string can be the outcome of necessity. Complex meaning and function in strings can never be explained by a simple necessity algorithm, and is a certain indicator of design. The reason why we do not observe outcomes which are part of extremely improbable special subsets, as often said by KF, is the same reason why we do not observe ordered states of gas molecules: they are possible, but too unlikely. For the same reason, gas molecules or any other random system do not generate the characters of a Shakespeare sonnet (not by any possible code). For the same reason, functional proteins do not emerge from random mutations.
gpuccio
October 13, 2014, 08:26 AM PDT
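gpuccio's "extremely small target space" measure reduces to a one-liner: the functional complexity in bits is the negative log2 of the target-space/search-space ratio. A sketch under his stated assumptions (uniform distribution over the search space, a known target-space size); `functional_bits` is a name coined here for illustration:

```python
import math

def functional_bits(target_space: int, search_space: int) -> float:
    """Functional complexity as -log2 of the target/search ratio."""
    return -math.log2(target_space / search_space)

# gpuccio's simplified example: one functional string out of 2^336
# equiprobable strings gives 336 bits of functional complexity.
print(functional_bits(1, 2 ** 336))

# A more tolerant specification (say 2^100 acceptable strings) is
# correspondingly less complex.
print(functional_bits(2 ** 100, 2 ** 336))
```

Note how the measure depends on the size of the target space (the partition), not on the improbability of any individual string, which is the distinction being argued over in this exchange.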
GP, when I write:

all strings 336 bits in length are equally improbable

I mean just that – no more. Each individual possible string, from all 1s through to all 0s, has the same probability given H0. However, it is clear you are talking about the probability of the string being in a specified subset of the set of all possible strings. Your specification is "fulfils the function of beta lactamase". My question is: how does that act as a justification for rejecting H0? We don't normally dismiss H0 because the test statistic happens to have a value which performs a function (or meets any other arbitrary specification), however small the probability of meeting that specification. It needs some justification!

You stress the importance of the function being specified in advance. I take it you don't literally mean the timing is important. If we happened to discover the function of beta lactamase after the protein was decoded, I imagine you would still say it could be used to reject H0. I assume what you are getting at is that the specification should not be derived from the string but should be derived independently. But there are an infinite number of such specifications (not necessarily functional – any old specification would do – my Sanskrit poem expressed in binary would be one). For all we know the vast majority of the 2^336 strings meet some such specification – we just don't know what they are. We should not reject H0 because we discovered the string met a specification, even if it were the only string to do so.

What you are doing is like setting a rejection region for the difference of two sample means because that difference happens to be an interesting number for some other reason (perhaps it is someone's birthday). This is a game everyone can play – finding their own interesting number range and rejecting H0 because there is a very small chance of the difference in two means falling into that range. You need to find some justification why a particular small range is grounds for rejecting H0 (and that justification lies in H1).
Mark Frank
October 13, 2014, 08:09 AM PDT
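Mark's point that the rejection region "can differ depending on your H1 (e.g. one-tail or two-tail)" is easy to make concrete with a normal test statistic. A minimal stdlib sketch (the helper names are mine, not from the thread): the same observed z-score can clear the conventional alpha = 0.05 threshold one-tailed but fail it two-tailed, so the decision really does depend on the alternative hypothesis chosen.

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value(z: float, tails: int) -> float:
    """p-value for an observed z-score: one-tailed (upper) or two-tailed."""
    upper = 1.0 - normal_cdf(abs(z))
    return upper if tails == 1 else 2.0 * upper

z = 1.8
print(p_value(z, 1))  # ≈ 0.036: rejected at alpha = 0.05 one-tailed
print(p_value(z, 2))  # ≈ 0.072: not rejected two-tailed
```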
Mark: I think that your mistake is in saying: "all strings 336 bits in length are equally improbable". You certainly understand that when we compute the probability of something, we are dealing with events, and the events must be clearly defined. If I throw a die, I can say that each of the six individual outcomes has the same probability. But I can well define the probability of different events: for example, the probability of getting an even number will be 1/2. The probability of getting a number lower than 3 will be 1/3. And so on.

Now, let's go to our example, and to simplify it let's say that we are discussing binary strings 336 bits in length, and that only one of all strings has the functionality I have defined. So, the target space/search space ratio is 1:2^336, and therefore the functional complexity of the (only) functional string (according to my explicit definition) is 336 bits. OK? Now, the search space is 2^336 strings. Each of them has the same probability of being found in a single random attempt (we assume a uniform probability distribution). The point is: what is the event whose probability we want to compute? Let's see.

1) What is the probability of getting a string 336 bits long in a single random "extraction" from the search space? 1.
2) What is the probability of getting a string which does not have the defined functionality? 1-(1:2^336). Extremely high.
3) What is the probability of getting the string with the defined functionality? 1:2^336. Extremely low.

In my definition of specification, I have said very clearly that a specification is any explicit rule which generates a binary partition in the search space. What we are interested in is the probability of the target space, not of any specific string. In functional specification, the partition is generated by an explicit definition of a function. Given the definition, each string can be classified as having the functionality, or not.

So, what do you mean when you say "all strings 336 bits in length are equally improbable"? What is the event you define, whose probability we should compute? You can say: I define as success the random generation of this particular string, and then give each single bit of your specification. Well, that is called pre-specification. It works only if you give the sequence in advance. Then, if you really get that sequence in a random attempt, I will gladly admit that ID is wrong, or simply that you are either a splendid fraud or a splendid magician. But if you get a random string, and then define the event as "getting exactly this string" (and here you give the bits), and then state that you have got an extremely improbable event, then you are cheating, or simply misunderstanding. There is nothing improbable in the event you got: you got exactly one of the many strings without any special specification, whose only individual specification can come from enumerating its bits after you have got the string. IOWs, you cannot take the random complexity that you have already got, use it to build a "specification", and then use that definition to say that the previous event was improbable. The previous event was not improbable at all. If you get that string after you have specified it, then that would be something.

But with true specifications, based on function, or meaning, or simply on ordered states, it's another matter entirely. A string of 336 1s will never be seen as an outcome of a random system. Why? Because it is part of an extremely tiny subset of the search space, the subset of very ordered sequences (which, as you certainly understand, can be defined in different ways, while remaining extremely tiny). In this case, the general definition generates an objective partition, even if I don't use the individual bits of the string in my definition. So, I can certainly compute the probability of the following event:

4) What is the probability of getting a string where all the 336 bits have the same value? 2:2^336. Extremely low.

So, if I get that type of string, I will reject the null hypothesis: "This is the result of the random variance in the system" (the system could well be the throwing of a fair coin 336 times). As you can see, I need no explicit H1 to reject H0. I just reject it. Then I can inquire into possible alternative explanations. The simplest one is that the coin is not fair. Another one is that there is some fraud in the system. As you can see, there are many possible explanations of what we observe, and they are based either on necessity or on design, or a mix of the two. But they have no relevance to our rejection of the null hypothesis. And, to decide which is the best explanation, we really need other types of cognitive considerations, and not statistics.

The same is true for design detection. The event we observe is a functional string. We define the function. We compute how many bits are essential to implement the function. We compute the probability of the well-defined event: "a string which implements this function is generated by the random variation in the system". And, if it is really low, we reject H0. We need no explicit H1. If you want, we can say that we have a generic H1, the logical opposite of H0, in the form: "This string was not generated by the random variation in the system, but by something else". But again, that is simply the negation of H0, and not an explicit alternative explanation.

Regarding the rejection region, as I have said, I use the binomial distribution. I simply compute the upper tail of "getting at least one success (as defined) in n attempts, when the individual probability of success in each attempt is p". Or, if necessary, we can compute the probability of at least n successes. But, in our discussion, "at least one" is appropriate.
gpuccio
October 13, 2014, 06:14 AM PDT
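gpuccio's rejection rule above ("the upper tail of getting at least one success in n attempts") has a closed form, since P(at least one success) = 1 − (1 − p)^n. A minimal sketch; `p_at_least_one` is a name used here for illustration, and the log-space form avoids underflow when p is astronomically small:

```python
import math

def p_at_least_one(p: float, n: float) -> float:
    """Upper binomial tail: P(at least one success in n attempts).
    Computed in log space so tiny p and huge n stay accurate."""
    return -math.expm1(n * math.log1p(-p))

# Sanity check against the direct formula: 1 - (1 - 0.5)^2 = 0.75.
print(p_at_least_one(0.5, 2))

# gpuccio's regime: a per-attempt probability of 1e-62 against even very
# generous probabilistic resources (1e40 attempts) still leaves ~1e-22.
print(p_at_least_one(1e-62, 1e40))
```

For p this small, the tail is essentially n·p, which is why the argument turns entirely on comparing the per-attempt probability against the probabilistic resources.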
GP #228

So let's agree to concentrate on whether you can dismiss H0 without any assumptions about H1. As far as I can see, this is your core argument:
Now, my point is, any time that any specific functional definition (you can define all the functions you like) requires a sufficiently high number of bits to be implemented (has a sufficiently high functional specification), IOWs exhibits dFSCI, so that the value of its dFSI is so big that it is vastly greater than the probabilistic resources of the system, and the p value computed in the way I have described is extremely small (1e-62 certainly qualifies), then we can safely reject random generation as an explanation, and, if no other credible explanation based on reasonable necessity “contributions” is available, we can safely infer design. I know your objection. You say that we should consider “all possible functions” or “all possible specifications”, and that my procedure does not work.
Actually that is not my objection on this occasion. To simplify the argument, let us imagine that beta lactamase is a bit string 336 bits in length. My problem is that all bit strings 336 bits in length are equally improbable, and so creating any one of them is vastly beyond the probabilistic resources of the system. Yet you would not reject H0 because it happened to generate a bit string 336 bits in length. It is invalid to reject H0 simply because the observed result is vastly improbable given H0. You must define a rejection region and give a rationale for that region. I have shown that in Fisherian hypothesis testing the rejection region can differ depending on your H1 (e.g. one-tail or two-tail), but typically it is defined as "more extreme" than the observation. However, in your example you have rejected H0 not because it is extreme (which would presumably be a very high proportion of one particular base pair) but because it happens to fulfil a specific function. How does that justify rejecting H0? (Hint – there is a reason, but it requires assuming H1.)
Mark Frank
October 13, 2014, 01:23 AM PDT
REC:
gpucio, congrats on still quantifying design, where others here have casted doubt on quantification of information.
lol.
Mung
October 12, 2014, 07:15 PM PDT
gpuccio, I'm glad to know you can use some of the references in the "third way" thread. It's a pleasure to serve you, amico mio! BTW, I still encounter difficulties while trying to understand the reports, but see some gradual progress in my learning. :)
Dionisio
October 12, 2014, 04:44 PM PDT
Dionisio: I really like your love for precision of language! :) By the way, thanks as always for your references in the "third way" thread. I am using them a lot!
gpuccio
October 12, 2014, 02:29 PM PDT
#223 grammatical error correction Here I want to correct a grammar mistake I made in post #223:
220 REC
How does that get lost during a cut and paste?
Is there any cut&paste* going on in cellular/molecular biology? Do ‘slash’ characters get lost in those cases too? :) Is that biological cut&paste functionality part of an open source app or proprietary software? :) (*) splicing and that stuff
Dionisio
October 12, 2014, 02:28 PM PDT
#224 grammatical error correction: Here I want to correct a grammar mistake I made in post #224:
221 REC
As biochemists reveal the simple molecular pathways that give new functions,…
Since we got into this cut&paste* thing a few posts ago, what are the simple molecular pathways that lead to the cut&paste* functionality?
(*) splicing and that stuff
Dionisio
October 12, 2014, 02:14 PM PDT