
FOOTNOTE: On Einstein, Dembski, the Chi Metric and observation by the judging semiotic agent


(Follows up from here.)

Over at MF’s blog, there has been a continued stream of objections to the recent log reduction of the chi metric in the CSI Newsflash thread.

Here is commentator Toronto:

__________

>> ID is qualifying a part of the equation’s terms with subjective observation.

If I do the same to Einstein’s, I might say;

E = MC^2, IF M contains more than 500 electrons,

BUT

E **MIGHT NOT** be equal to MC^2 IF M contains less than 500 electrons

The equation is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.

Dembski claims a mathematical evaluation of information is sufficient for his CSI, but in practice, every attempt at CSI I have seen, requires a unique subjective evaluation of the information in the artifact under study.

The determination of CSI becomes a very small amount of math, coupled with an exhausting study and knowledge of the object itself.>>

_____________

A few thoughts in response:

a –> First, let us remind ourselves of the log reduction itself, starting with Dembski’s 2005 chi expression:

χ = – log2[10^120 · ϕ_S(T) · P(T|H)]  . . . eqn n1

How about this (we are now embarking on an exercise in “open notebook” science):

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric:

I = – log(p) . . .  eqn n2

3 –> So, we can re-present the Chi-metric, where D2 = ϕ_S(T):

Chi = – log2(2^398 * D2 * p)  . . .  eqn n3

Chi = Ip – (398 + K2), where Ip = – log2(p) and K2 = log2(D2)  . . .  eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . . As in (using Chi_500 for VJT’s CSI_lite):

Chi_500 = Ip – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5

Chi_1000 = Ip – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a . . . .
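
As a quick numerical check on the reduction from eqn n1 to eqn n4, here is a short Python sketch; the values of p and D2 below are illustrative placeholders, not measurements:

```python
import math

# Illustrative placeholder values (not measured quantities):
p = 2.0 ** -700      # P(T|H): chance probability of hitting the target zone
D2 = 10.0 ** 20      # phi_S(T): the specificational resources

# Eqn n1, computed via logs to avoid overflow:
# Chi = -log2(10^120 * D2 * p)
chi_n1 = -(120 * math.log2(10) + math.log2(D2) + math.log2(p))

# Eqn n4: Chi = Ip - (398 + K2), with Ip = -log2(p) and K2 = log2(D2)
Ip = -math.log2(p)
K2 = math.log2(D2)
chi_n4 = Ip - (398 + K2)

print(chi_n1, chi_n4)  # agree to within the rounding of 2^398 ~ 10^120
```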

Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7

The two metrics are clearly consistent . . . . One may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how the redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage unit bits [= no. of AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.]
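
The results in n7 reduce to one line of arithmetic each; a short Python sketch that reproduces them from the Table 1 fits values quoted above:

```python
import math

# Durston fits (Table 1), minus the 500-bit solar system threshold:
proteins = {"RecA": (242, 832), "SecY": (342, 688), "Corona S2": (445, 1285)}

for name, (aa, fits) in proteins.items():
    raw_capacity = aa * math.log2(20)   # 4.32 bits/AA on 20 possibilities
    print(f"{name}: Chi_500 = {fits - 500} bits beyond; "
          f"raw capacity ~ {raw_capacity:.0f} bits")
```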

b –> In short, we are here reducing the explanatory filter to a formula. Once we have specific, observed functional information of Ip bits and compare it to a threshold set by a sufficiently large configuration space, we may infer that the instance of FSCI (or more broadly CSI) is sufficiently isolated that the accessible search resources make it maximally unlikely that its best explanation is blind chance plus mechanical necessity. Instead, the best, and empirically massively supported, causal explanation is design:

Fig 1: The ID Explanatory Filter

c –> This is especially clear when we use the 1,000 bit threshold, but in fact the “practical” universe we have is our solar system. And so, since the number of Planck time quantum states of our solar system since the usual date of the big bang is not more than 10^102, something that is in a config space of 10^150 [500 bits worth of possibilities] is 48 orders of magnitude beyond that threshold.

d –> So, something from a config space of 10^150 or more (500+ functionally specific bits) is, on infinite monkey analysis grounds, comfortably beyond available search resources. 1,000 bits puts it beyond the resources of the observable cosmos:

Fig 2: The Observed Cosmos search window

e –> What the reduced Chi metric is telling us is that if, say, we had 140 functional bits [20 ASCII characters], we would be 360 bits short of the threshold, and in principle a random walk based search could find something like that. For the reduced chi metric not only gives us a value; it tells us whether we fall short of the threshold, and by how much:

Chi_500(140 bits) = 140 – 500 = – 360 specific bits, within the threshold
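
In code, the signed form of the metric is a one-liner; negative values fall within the threshold, positive values beyond it (a minimal sketch):

```python
def chi_500(ip_bits: float) -> float:
    """Functionally specific bits beyond the 500-bit threshold.
    Negative: within reach of chance; positive: beyond it."""
    return ip_bits - 500

print(chi_500(140))  # -360, within the threshold (cf. eqn above)
print(chi_500(832))  #  332, beyond it (cf. RecA in results n7)
```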

f –> So, the Chi_500 metric tells us that instances of this kind could happen by chance and trial and error testing. Indeed, that is exactly what has happened with random text generation experiments:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[20]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d
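
The scale of such results is easy to check: matching k specific characters from an alphabet of A symbols takes on the order of A^k random trials. A sketch, where the 27-symbol alphabet is an assumption (the experiments above used full keyboards):

```python
import math

A = 27  # assumed alphabet: 26 letters plus space

for k in (16, 19, 24):
    print(f"{k} matched characters: ~10^{k * math.log10(A):.0f} trials")

# The thresholds in the OP use full 7-bit ASCII (128 symbols):
for k in (72, 143):
    print(f"{k} ASCII characters: ~10^{k * math.log10(128):.0f} trials")
```

The 16 to 24 character successes sit around 10^23 to 10^34 trials; the 500- and 1,000-bit thresholds sit at roughly 10^152 and 10^301.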

g –> But, 500 bits or 72 ASCII characters, and beyond this 1,000 bits or 143 ASCII characters, are a very different proposition, relative to the search resources of the solar system or the observed cosmos.

h –> That is why, consistently, we observe CSI beyond that threshold [e.g. Toronto’s comment] being produced by intelligence, and ONLY as produced by intelligence.

i –> So, on inference to best empirically warranted explanation, and on infinite monkeys analytical grounds, we have excellent reason to have high confidence that the threshold metric is credible.

j –> As a bonus, we have exposed the strawman suggestion that the Chi metric only applies beyond the threshold. Nope, it applies within the threshold and correctly indicates that something of such an order could come about by chance and necessity within the solar system’s search resources.

k –> Is a threshold metric inherently suspicious? Not at all. In control system studies, for instance, we learn that once you reduce your expression to a transfer function of the form

G = [(s – z1)(s – z2) . . . ]/[(s – p1)(s – p2)(s – p3) . . . ]

. . . then, if poles appear in the RH side of the complex s-plane, you have an unstable system.

l –> That is a threshold; and one where poles approaching it from the LH half-plane show up as a detectable peakiness in the frequency response.
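
For instance, the pole test is a few lines with numpy; the denominator coefficients below are made up for illustration:

```python
import numpy as np

# G(s) denominator s^3 + 2s^2 + 3s + 10 (illustrative coefficients).
# Routh test: 2*3 < 10, so a pole pair crosses into the right half-plane.
poles = np.roots([1, 2, 3, 10])

print(poles)
print("unstable" if any(p.real > 0 for p in poles) else "stable")
```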

m –> Is the simplicity of the math in question, in the end [after you have done the hard work of specifying information, and identifying thresholds], suspicious? No, again. For instance, let us compare:

v = i* R

q = v* C

n = sin i/ sin r

F = m*a

F2 = – F1

s = k log W

E = m0*c^2

v = H0D

Ik = – log2 (pk)

E = h*ν – φ

n –> Each of these is elegantly simple, but awesomely powerful; indeed, the last — precisely, a threshold relationship — was a key component of Einstein’s Nobel Prize (Relativity was just plain too controversial). And, once we put them to work in practical, empirical situations, each of them ” . . .  is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.”
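
Indeed, the photoelectric law is itself a threshold test in code form; a sketch, where the work function value is an assumed illustration (roughly that of sodium):

```python
H = 6.626e-34   # Planck's constant, J*s
EV = 1.602e-19  # joules per electron-volt

def photoelectron_energy_ev(nu_hz: float, phi_ev: float = 2.28) -> float:
    """Max kinetic energy E = h*nu - phi, in eV; a negative value
    means the light is below threshold and ejects no electrons."""
    return H * nu_hz / EV - phi_ev

print(photoelectron_energy_ev(4.0e14))  # red light: below threshold
print(photoelectron_energy_ev(7.5e14))  # violet light: above threshold
```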

(The objection is clearly selectively hyperskeptical. Since when was an expression about an empirical quantity or situation “purely mathematical”? Let’s try another expression:

Y = C + I + G + [X – M].

How are its components measured and/or estimated, and with how much application of judgement calls, including those tracing to GAAP? [Cf discussion here.] Is this expression therefore meaningless and of no utility? What about M*V_T = P_T*T?)

o –> So, what about that horror, the involvement of the semiotic, judging agent as observer, who may even intervene and — shudder — judge? Of course, the observer is a major part of quantum mechanics, to the point where some are tempted to make it into a philosophical position. But the problem starts long before that, e.g. look at the problem of reading a meniscus! (Try it, for Hg in glass, and for water in glass — the answers are different and can affect your results.)

Fig 3: Reading a meniscus to obtain volume of a liquid is both subjective and objective (Fair use clipping.)

p –> So, there is nothing in principle or in practice wrong with looking at information, and doing exercises — e.g. see the effect of deliberately injected noise of different levels, or of random variations — to test for specificity. Axe does just this, here, showing the islands of function effect dramatically. Clipping:

. . . if we take perfection to be the standard (i.e., no typos are tolerated) then P has a value of one in 10^60. If we lower the standard by allowing, say, four mutations per string, then mutants like these are considered acceptable:

no biologycaa ioformation by natutal means
no biologicaljinfommation by natcrll means
no biolojjcal information by natiral myans

and if we further lower the standard to accept five mutations, we allow strings like these to pass:

no ziolrgicgl informationpby natural muans
no biilogicab infjrmation by naturalnmaans
no biologilah informazion by n turalimeans

The readability deteriorates quickly, and while we might disagree by one or two mutations as to where we think the line should be drawn, we can all see that it needs to be drawn well below twelve mutations. If we draw the line at four mutations, we find P to have a value of about one in 10^50, whereas if we draw it at five mutations, the P value increases about a thousand-fold, becoming one in 10^47.
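
Axe’s P values can be checked with a direct binomial count; a sketch assuming a 42-character string over a 27-symbol alphabet (26 letters plus space), which reproduces his one-in-10^60, ~10^50 and ~10^47 figures to within rounding:

```python
from math import comb, log10

L, A = 42, 27          # string length; assumed alphabet size
total = A ** L         # all possible strings: ~1.3 * 10^60

for m in (0, 4, 5):
    # count of strings differing from the target in at most m positions
    hits = sum(comb(L, j) * (A - 1) ** j for j in range(m + 1))
    print(f"up to {m} mutations: P ~ one in 10^{log10(total / hits):.0f}")
```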

q –> Let us note how — when confronted with the same sort of skepticism regarding the link between information [a “subjective” quantity] and entropy [an “objective” one tabulated in steam tables etc] — Jaynes replied:

“. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.”

r –> In short, the subjectivity of the investigating observer is not a barrier to the objectivity of the conclusions reached, provided they are warranted on empirical and analytical grounds. As has been done for the Chi metric, in reduced form. END

Comments
Mung: The sense of information that is gained is -- you guessed it -- FUNCTIONALLY SPECIFIC information; as in Abel et al, on functional sequence complexity. For at the first the sites will not bind, but when the mistake count goes to zero due to convergence, the functional info reaches its targeted peak. And of course this is closely comparable to the action of a servo system that tracks and hits a moving target. If a MiG desperately trying to evade a Sidewinder by all sorts of acrobatics is a target, so are the sites that you want to match in ev. (And the launch platform may in turn be a target for said MiGs.) Of course the Shannon value peaks for a purely random sequence, but RSC is not at all the same as either OSC or FSC. Cf my background note here and onward links. (Notice my use of what I have subsequently called the X-metric for want of a better name.) Durston et al cashed out the qualitative analysis of 2005 with the H-based fits metric in 2007. Above in the OP I use it to show that certain protein families have functional info values beyond the threshold where it makes good sense to infer to design. In short, there is excellent quantitative evidence of design in cell based life, on observation and derivation of numerical values for CSI. Fact, leading to metric, measured value, comparison to threshold, and well warranted conclusion. Hey, let's footnote to MG et al: If your problem (MG et al) is a worldview level one with the conclusion, please don't try to pretend that CSI is not an observable and significant fact. Please don't try to pretend that Ik = - log pk is not a well established metric of information, or that it makes no sense to identify that something may be functionally specific -- cf what we may call Schneider's blunder above -- or that the resulting measures and conclusions are meaningless. And, most of all, please don't try to pretend that we are would-be theocratic tyrants who have threatened you with the equivalent of thumbscrews. That was where you went totally over the top, MG. GEM of TKI kairosfocus
Mung, 285: My first comment in MG's guest post thread was to analyse the FSCI in her post. She ignored the point. GEM of TKI kairosfocus
PS: The estimation of Ik is a standard technique in telecommunications work. The results are as familiar as the size of computer files, in bits. kairosfocus
EZ: Re 284, cf the original post where you will see three worked out examples, building on the Durston et al FITS metric for 35 protein families, and if you will look at the UD WACs you will see a toy example at a level more suited for school children. If we are dealing with directly information-storing entities like DNA or ASCII text, Ik can be directly estimated to an order of magnitude (where also the presence of a coded digital store is enough to guarantee functional specificity so the code for a typical 300 AA protein (and there are hundreds in a living cell) would yield 1800 bits or 1300 beyond the solar system threshold -- this example has been given several times over in the course of the past 3 months but has been ignored by MG), and the result directly follows. GEM of TKI kairosfocus
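
A sketch of the toy calculation in the comment above, on its stated coding assumptions (3 DNA bases per AA codon, 2 bits per base):

```python
aa_count = 300               # a typical protein length
bits = aa_count * 3 * 2      # 3 bases per codon, log2(4) = 2 bits per base
print(bits, bits - 500)      # 1800 bits; 1300 beyond the 500-bit threshold
```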
MathGrrl,
ev, on the other hand, is not looking for a specific solution.
...nothing in ev knows what the solution should be so there is no target at which to aim.
Let's look at some quotes from the Schneider paper:
Here this method is used to observe information gain in the binding sites for an artificial 'protein' in a computer simulation of evolution. The simulation begins with zero information and, as in naturally occurring genetic systems, the information measured in the fully evolved binding sites is close to that needed to locate the sites in the genome.
Locating sites in the genome sounds like a goal or target to me.
...one can use the size of the genome and the number of sites to compute how much information is needed to find the sites.
Finding sites in the genome sounds like a goal or target to me.
The purpose of this paper is to demonstrate that R_sequence can indeed evolve to match R_frequency (12). To simulate the biology, suppose we have a population of organisms each with a given length of DNA. This fixes the genome size, as in the biological situation. Then we need to specify a set of locations that a recognizer protein has to bind to. That fixes the number of sites, again as in nature. We need to code the recognizer into the genome so that it can co-evolve with the binding sites. Then we need to apply random mutations and selection for finding the sites and against finding non-sites. Given these conditions, the simulation will match the biology at every point.
Specifying a set of locations that a recognizer protein has to bind to. In advance. Finding the sites. MORE TARGETS SIR! PERMISSION TO FIRE!
Remarkably, the cyclic mutation and selection process leads to an organism that makes no mistakes in only 704 generations (Fig 2a).
Remarkable indeed. Good thing we weren't actually looking for such an organism. We might have destroyed it by mistake. Get real MathGrrl. Let me know when or if you want to talk about ev and CSI.
Mung
MathGrrl,
ev, on the other hand, is not looking for a specific solution. As I’ve emphasized a number of times during this discussion, in ev the recognizer co-evolves with the binding sites.
I'm pretty sure I brought it up first. Thanks for finally catching up. What do you mean by "a specific solution"? How does ev know when to stop running? What does the recognizer "recognize"? Does ev, at any point, compare the recognizer to the binding sites? Why can't we call the binding sites targets? Why can't we call the recognizer a target? Why can't we call "an organism that makes no mistakes" a target?
There is no measurement of Hamming distance because the solution is unknown to ev.
A solution is not required to be known in advance in order to perform a Hamming distance measurement. Perhaps this is where you are going wrong. Do you think that in order for something to qualify as a target it must be known in advance? Do you think that in order for something to qualify as a target it must be fixed and not change? Mung
MathGrrl, Speaking of Weasel you wrote:
In fact, the fitness function measures the Hamming distance to that target.
Does ev have a fitness function? I say yes. Does ev have a selection mechanism? I say yes. Does ev identify, for each "generation," which 50% of the population is "fit enough to survive" and which 50% is "not fit enough to survive"? Again, I say yes. How is "fitness" determined in ev? I say ev has a fitness function. What do you say? Mung
MathGrrl, You claim you'd like to read my comments on Schneider's dismissal of the Montanez paper. My comments are at the link I posted. If in our discussion of ev you think anything Schneider had to say in response to Montanez et al. is pertinent, please, by all means bring it up. I'm certainly not relying on the Montanez paper for any of my arguments and the only reason I even posted my link is because you brought it up. I thought I'd done a good job of explaining the differences between Weasel and ev. It's almost as if you didn't even read what I wrote. So I'm just going to not even address Weasel again or how it relates (or doesn't) to ev. I'd prefer to concentrate specifically on ev. Thanks. I repeat, no one here is modeling ev as a targeted search. It is a targeted search. Period. But I really did try to come to some basic understandings, and I don't see where you ever responded to those attempts on my part. Just what sort of search is it that you have in mind that searches for nothing at all? In fact, ev is searching for something. I see no problem with calling what it is searching for a target. Do you, and if so, why? Let's first get the semantics out of the way then perhaps we can make progress. There is a reason that GA's were developed, after all. Mung
http://en.wikipedia.org/wiki/Oracle_%28software_testing%29 http://en.wikipedia.org/wiki/Random_oracle http://en.wikipedia.org/wiki/Oracle_machine A Search Strategy Using a Hamming-Distance Oracle Efficient per query information extraction from a Hamming oracle Simply, an oracle accepts a query and returns a response. Mung
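
A minimal sketch of such an oracle in Python, with a purely illustrative hidden target; the caller sees only mistake counts, never the target itself:

```python
def hamming(a: str, b: str) -> int:
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def make_hamming_oracle(target: str):
    """Wrap a hidden target; accept a query, return only the mistake count."""
    return lambda query: hamming(query, target)

oracle = make_hamming_oracle("bind")  # illustrative hidden target
print(oracle("band"), oracle("wind"), oracle("bind"))  # 1 1 0
```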
ellazimm, some good questions. That is precisely the sort of dialogue I attempted to generate with MathGrrl, but her reaction was to just to repeat the same old same old.
1. ‘Target’ being used in a general sense in this discussion but being rigorously defined in the programming/biological sense. Is a target an unchanging goal OR anything that gives a reward.
Could be. I've been asking MathGrrl to clarify what she's looking for as far as what qualifies as a target in her thinking. I've made it clear from my first posts on ev that it, unlike Weasel, did not have a single fixed target sequence that it was trying to match. But that does not change the underlying operation or the fact that ev is a search algorithm designed to perform better than a blind search. There is nothing about targets in general that requires that they be an unchanging goal. I think you'd agree, but that does seem to be what MathGrrl is arguing.
... but being rigorously defined in the programming/biological sense
I don't know what that means.
2. Is a ‘target’ something that is loaded before the simulation starts or can it arise later?
Well, in ev, the location of the binding sites can change between different runs of the program, but once the run begins the locations are fixed. The width is also fixed. How that can be taken to mean that there are no targets is beyond me.
AND can it arise spontaneously with no design implication?
If there's some underlying issue regarding whether ev is designed to do a specific thing I haven't heard it. I think we all know that it is designed.
And there is the whole issue of how accurately the simulation models the real world.
Well, that's not really at issue. Schneider claims it matches at every point, but I think we all know better because no one has even tried to defend that statement, lol.
I have skimmed this thread and will probably go back and reread some of the pertinent replies. And MathGrrl’s guest thread. Don’t hold your breath though!!
I would say don't waste your time. If you want to know about ev you and I can, I think, have a very reasonable discussion. Mung
On ev and targets: Schneider measures the information content both before and after, and subtracts the before from the after in order to get the information increase. How does he know when and where to measure? [That seems sort of backward to me, since the Shannon Information should be highest when the string is completely random, and so what he is measuring is the information decrease, but hey, what do I know.] Mung
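
For reference, the before/after measurement can be sketched as a drop in per-position Shannon uncertainty across the aligned binding sites (a simplification of Schneider's Rsequence, ignoring his small-sample correction; the example sites are invented):

```python
import math
from collections import Counter

def rsequence_bits(sites: list) -> float:
    """Sum over columns of (2 - H) bits, H being the Shannon
    uncertainty of each column for a 4-letter DNA alphabet."""
    n = len(sites)
    total = 0.0
    for column in zip(*sites):
        counts = Counter(column)
        h = -sum((c / n) * math.log2(c / n) for c in counts.values())
        total += 2 - h   # 2 bits is the per-base maximum
    return total

before = ["acgt", "tgca", "gatc", "ctag"]  # random-looking start: 0 bits
after  = ["acgt", "acga", "acgt", "acgt"]  # converged sites: ~7.2 bits
print(rsequence_bits(before), rsequence_bits(after))
```

The uncertainty of the sites falls as the run converges; Schneider reports that drop as the information gain.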
KF, if you chose to do so, perhaps start out with a textual analysis of one of her posts :). Why we should think it the product of an intelligent cause. I think you can easily do this using Dembki's metric or your own. Mung
KF: Thank you, I shall pursue those references in the near future I hope. I personally would like to see any worked out examples of computing Dembski's metric. I never gave my students a formula without showing them how to use it. ellazimm
EZ: You will see a definition of what a Hamming Oracle is and does in the footnotes enfolded in the cite from the recent Dembski et al paper. GEM of TKI kairosfocus
EZ: Please, take a moment to look through 34 - 5 above on the subject of adequacy of warrant vs the rhetorical games that have been played with demands on "rigour" in "mathematical definitions," relative to what is discussed in the OP. Do you see why I have concluded that I am dealing with selective hyperskepticism as rhetorical gambit, especially when adequate warrant has been provided over and over and MG has failed to be responsive on ANY significant point, for something like three months now? Worse, she out and out conflated a probability calculation with a log reduction, and on being asked to explain herself, has failed to do so to date. I will only note in passing the outrage of alluding to how Galileo was made to recant by threat of the thumbscrews. That was utterly out of order and has never been explained or apologised for. Three months of red herrings led out to convenient strawmen soaked in ad hominems and ignited resulting in clouding, poisoning and polarising the atmosphere as I have just had to point out is enough. GEM of TKI kairosfocus
MG: Please, don't add to the problem. You full well know that a set of initials is vastly different from a given name. And, it is the name that is the problem, not the initials -- or I would long since have stopped using initials, too. Now, I have asked for a very simple thing: respect the privacy of my name in contexts like blogs that as you know are always being scanned by all sorts of electronic software spies. On this very page, some software I am using has blocked three trackers on my PC; and on some pages I have seen up to about a dozen. Your rhetoric above is upholding those who have sought to play outing tactics with my name, at minimum hoping to cause email trouble for me. In some cases they have sought to be outright rude and disrespectful with my name. In other cases, I do not doubt that they have sought to make me an example and a warning to others not to stand up in public for ID, on pain of being publicly outed and subject to career harassment or outright career busting. Is that sort of behaviour what you want to associate yourself with? That, madam, is the company you are keeping. Going beyond that, some have falsely accused me of homosexuality -- a mortal insult where I come from, that could easily cost someone foolish enough to make such an accusation his life [I want you to understand just how intentionally and offensively disrespectful that slander is] -- and have gone on to all sorts of other slanders. I have even seen someone who imagines that by attaching all sorts of crude vulgarities to what they oppose they have adequately refuted it. All of this, you seem to have chosen to support. Which speaks volumes. This, in addition to having persisted for weeks and months in a pattern of false declarations that you know or should know are false. Such behaviour is willfully deceitful, and that -- whether or not you realise it just now -- is unfortunately tantamount to lying. I do not doubt that this is painful for you to hear, and it is not something I say lightly, but it must be corrected because of the damage it does. For one last and final time, I note to you that CSI is NOT primarily an abstract mathematical concept. So, please drop the talking point that tries to pretend that it is. It is an empirical commonplace fact (as Orgel and Wicken pointed out long ago), one somewhat amenable to modelling and quantification, and reasonable models have therefore been developed on standard metrics for information (notice how Schneider seems to have missed that the definition Ik = - log Pk is a STANDARD and commonly accepted usage), set in the context of the challenge of search for relatively small zones of interest in very large config spaces. This last is a reasonable extension to thought in statistical thermodynamics. The metrics work, pretty much as advertised, i.e. they do reliably pick out cases where objects of directly known cause are identified as in the design zone. And, when they pick up things as being within credible reach of blind chance and mechanical necessity, we see that for instance random text programs can pick things out from fields of order 10^50 possibilities. When it comes to ev and the like, it has been more than adequately warranted that such are intelligently designed programs that use Hamming distance metrics to drive forward a process of convergence comparable to the approach of a group of air to air missiles to their targets. Good day, madam GEM of TKI kairosfocus
MathGrrl: I spent some time today reading the original paper and running the ev program and you've addressed a couple of issues I thought might be points of contention. Coming from a mathematical background I am always keenly aware of how terms are defined and used within a specific context. A tree, for example, in the field of graph theory has a specific and well defined meaning. I remember getting into an argument at a party with someone who thought they had found some wonderful set theoretical result and didn't understand what I meant when I told them their measure was not well defined. No wonder I didn't get invited to many parties. ellazimm
ellazimm and Mung, I'm going to use ellazimm's comment as a springboard to address what I believe is the root of the disagreement between Mung and myself with respect to whether or not ev is a targeted search in the same sense as Dawkins' Weasel program.
I'm assuming a target in this context is a goal in the program against which new 'individuals' are measured and ones that more closely match the target are allowed to 'breed' and the rest are destroyed?
I think we're in agreement, but I'd like to be very explicit about the difference between a goal and a target in the context of a GA. I addressed this here: https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 as pointed out two or three times in this thread, but it can't hurt to provide more detail. Modeling ev as a "targeted search" and comparing it to Weasel confuses the problem domain and the solution domain. In Weasel, which Dawkins himself makes clear is nothing more than a simple pedagogical toy program, the two are conflated. The Weasel problem is to produce a particular string via cumulative, stochastic selection. The Weasel solution is that particular string. That is clearly a target. In fact, the fitness function measures the Hamming distance to that target. ev, on the other hand, is not looking for a specific solution. As I've emphasized a number of times during this discussion, in ev the recognizer co-evolves with the binding sites. Neither is specified in advance and the sections of the genome that represent them will be different in different runs. This makes it painfully clear that there is no target for the solution. There is no measurement of Hamming distance because the solution is unknown to ev. ev does, however, have constraints that make certain genomes more fit than others. Those constraints are the number and location of the binding sites. The only feedback provided by the environment modeled in ev is the number of incorrect binding sites coded for by a particular genome. As Schneider describes in the ev paper, this reflects the number and location of binding sites in real world biological genomes. If I understand Mung's argument, his confusion arises from considering those sites to be a target. They are not. The number and location of the binding sites are part of the problem domain and as such do not specify anything in the solution domain. This separation between the constraints of the problem and the specification of the solution is one of a number of important differences that distinguish ev and similar GAs from simple programs like Weasel. That difference alone demonstrates that ev cannot be modeled as a targeted search; nothing in ev knows what the solution should be so there is no target at which to aim. In the same vein, the various Steiner problem solutions can also not be modeled as targeted searches. The only measure of fitness in those solutions is the length of the graph, with shorter, connected graphs having higher fitness. There is no target and hence no measurement of the Hamming distance to the solution. I hope this clears up any confusion about the differences between Weasel and the other GAs we've been discussing. If I have missed any questions that are still relevant, Mung, please raise them again. MathGrrl
Mung,
The Schneider response to the Montanez paper has been shown to be without merit.
What can be asserted without evidence can be dismissed without evidence. -- Christopher Hitchens The link you provide does not demonstrate what you claim. If you feel that Schneider's response is somehow in error, I would be very interested in reading your defense of such a claim. MathGrrl
kairosfocus, Throughout the discussion of CSI on various threads here I have made it a practice to focus only on the points you raise that directly address the questions I've been asking. Your many other issues, while potentially interesting in their own right, would serve only to distract from the core topic of defining and calculating CSI. I'm going to break briefly from that focus to respond to a separate issue, primarily because you have provided a good conversational hook to extend an invitation from someone who is not allowed to participate here.
Perhaps it has not dawned on you that the situation has now fundamentally changed, once your side has tolerated outing behaviour and increasingly disrespectful rhetoric leading to the creation of an attack blog that resorts to vulgarity as well as slander-laced outing behaviour as its main tactics. Madam, you are now associated with and unavoidably tainted by a cesspit of misbehaviour, and have a lot to answer for.
I must confess that I don't follow your logic. Nowhere have I participated in or supported "outing behavior", "vulgarity", "slander", or a "cesspit of misbehavior". In fact, assuming that you are discussing Mark Frank's blog, I consider your characterization grossly over the top. I am no more responsible for what other people post there than you yourself are for some of the, um, earthier comments Joseph posts on his blog, for example. That being addressed, one of the most polite and knowledgeable participants on Mark Frank's blog, Seversky, (http://mfinmoderation.wordpress.com/2011/05/14/does-uncommon-descent-deliberately-suppress-dissenting-views/#comment-5225) has asked that someone who isn't banned at UD extend an invitation to the ID proponents here to discuss ID in a neutral online venue where no one has the ability to moderate, ban, or otherwise censor anyone else. The suggestion in that thread is for the discussion to take place on the Usenet newsgroup talk.origins, available from any Usenet server or via Google Groups: http://groups.google.com/group/talk.origins/topics The only moderation on talk.origins is an automated script that prevents cross-posting to more than five newsgroups. There are a number of people interested in this topic who are not allowed to participate here. I hope that some ID proponents will accept this invitation to join them on talk.origins. I would like to make one suggestion, that I hope you will consider constructive, before returning to the topic at hand. You sign your comments thus:
GEM of TKI
If you are as concerned as you appear to be about your real name being associated with what you write here, you may want to reconsider using your actual initials as a signature. MathGrrl
KF: Could you give me a link where Hamming oracle is defined and discussed? I'm having trouble finding an online resource. I think I AM going to have to try and look at the ev program. Sigh. Oh well, it happens!! Thanks! ellazimm
KF: No, none of my examples generate digitally coded information but I was thinking maybe they created complex and specified information. Before a tree is born the information in its trunk does not exist but after it dies the record is there. And to describe that information would take a measurable amount of bits of information . . . I only popped in for a few minutes between tasks. I'll spend more time thinking about all this. And looking up Hamming distance and oracle when I can. Earlier today I was looking over Dr Dembski's metric. And, according to Wikipedia he's only ever demonstrated its application once! Is that true? Surely it's been calculated for other cases. If you've got any cases please let me know, aside from MathGrrl's scenarios obviously. I find worked out examples to be a great help in understanding the procedure of applying the theory. Fascinating stuff!!! Back later! ellazimm
Enjoy your tea. kairosfocus
KF: THANKS! Just off to do grocery shopping, etc and then over to a friend's for 'tea'. :-) Will come back and read and, hopefully, comprehend later!! ellazimm
EZ: Volcanoes create a new state of affairs (and, from experience, make chaos and a mess while doing it) but they do not in themselves create information. Particularly, digitally coded, functionally specific complex information that uses symbols to create a meaningful vocabulary, elements of which are combined according to rules to create messages. Similarly, a snowflake's form reflects the state of affairs where and when it formed, and the rings in a tree reflect the passage of time, the weather, etc, but these are simply dynamical results on initial conditions leading to outcomes, they are not information. It is we who look on who observe and study the dynamics, outcomes etc, creating information as we do so. Also, targets are something you are trying to hit. Collins English Dict:
target [ˈtɑːɡɪt] n 1. (Individual Sports & Recreations / Archery) a. an object or area at which an archer or marksman aims, usually a round flat surface marked with concentric rings b. (as modifier) target practice 2. a. any point or area aimed at; the object of an attack or a takeover bid b. (as modifier) target area target company 3. a fixed goal or objective the target for the appeal is £10 000 4. a person or thing at which an action or remark is directed or the object of a person's feelings a target for the teacher's sarcasm
Once you are trying to hit it, it is a target. In the relevant case, the receptor sites and the binding sites are in the genomes, so called, of ev. Values are assigned and algorithmic steps are taken in train to seek and hit the receptors. That the latter are also moving only means that you have a moving target to hit. A Hamming distance metric (number of mistakes) is used to detect better performing binding sites, and the worse performing are flushed, with the better ones being allowed to further move in. That's a Hamming oracle, with a warmer/colder homing approach. And Mung long since showed both fine tuning to get desired performance and language right there in Schneider's statements and in the program itself that underscored this. I pointed out how a chart Schneider put up shows the use of negative feedback to move to a target point (notice what happens when certain modules are turned off in the program, i.e. tracking ability is lost); in this case it is moving in a pseudo-space so the process is very similar to a servosystem, which is why I raised the comparison of guided missiles. MG cleverly refused to respond in the thread where that happened, and is turning up in a following thread to claim that such did not happen. Please go up to 137 above to link to the original discussion and see for yourself. GEM of TKI kairosfocus
There was a notion that crept into my head . . . might as well ask it here . . . please be gentle if it's completely stupid . . . Regarding the ability of non-directed processes for creating complex, specified information . . . you're going to tear me to shreds I'm sure . . . but . . . Do volcanoes create information? If an erupting volcano creates a mountain where there was none before is that new information? It certainly changes any mathematical model of the landscape. We've discovered that the Earth's magnetic field has completely moved/reversed over the millennia. We found that out by looking at the alignment of the magnetic particles in some igneous rocks. That was new information to us but was it created or recorded? When sedimentary rock is formed with defined chronologically arranged strata is that new information? Or just a recording of information? Are the layers of ice in Antarctica created information? What about oil deposits? Is recorded information new if there is no other way of finding it? When erosion creates pillars and arches which did not exist before is that new information? I'm thinking that an arch is more complex and specified than a huge block of sandstone. And even normal wind erosion gives indication of long term wind patterns . . . is that information? How about tree rings? You can tell how old a tree is, spot wet and dry years, etc from tree rings. Over eons and eons the hills of Scotland have been transformed into peat deposits. Did the plants create new information that was not there before if you start from when the land was barren? Non-intelligent life forms have altered the Earth's atmosphere and that change would be detectable from a great distance and would indicate the presence of life. Is that creation of new information? Okay, have at it!! I apologise ahead of time for not being around soon. It's 9:30 in the morning where I live and there's stuff to do!! But I'm very interested in how you see the above examples. Maybe it should be put into a new thread so that it gets a bit of independent attention? ellazimm
KF & Mung: I had read KF's earlier post about the incoming missiles and Mung I think your reiterating of that analogy is fairly accurate. I THINK. I wonder if some of the confusion/disagreement comes from: 1. 'Target' being used in a general sense in this discussion but being rigorously defined in the programming/biological sense. Is a target an unchanging goal OR anything that gives a reward. 2. Is a 'target' something that is loaded before the simulation starts or can it arise later? AND can it arise spontaneously with no design implication? And there is the whole issue of how accurately the simulation models the real world. I have skimmed this thread and will probably go back and reread some of the pertinent replies. And MathGrrl's guest thread. Don't hold your breath though!! ellazimm
Mung: Bogies that are not only inbound but jinking, weaving and dancing unpredictably. Floating like a butterfly so they may yet sting like a bee. What's a missile-eer to do? GEM of TKI kairosfocus
ellazimm:
Why not point out one of the targets and see what she says?
Multiple inbound bogeys! I'm gunner on a ship. I have multiple inbound targets. How do I convince someone who denies that an inbound aircraft is a threat that the inbound aircraft is a valid target? MathGrrl doesn't understand searches or targets. Her assertion that ev has no targets has no basis in reality. You, on the other hand, might be amenable to being convinced. I accept that you're not familiar with my prior postings on this subject. 1. What is a search? 2. What is a target? 3. Can you conduct a search without a target? These are really simple and basic questions which MathGrrl refuses to address. An evolutionary algorithm (EA) is a search strategy. ev is an EA. (Finally admitted to by MG) Is a search that does not "search for" anything even coherent? Mung
EZ: Actually, the situation is one of multiple moving targets, approached by a population of self-replicating seekers. Or, as I suggested in 258 above:
we could picture ev as a barrage of self-replicating missiles chasing a moving formation, where in each generation half lose lock and are self-destructed, being replaced by doubling the half population that remains, which are in closer to lock condition.
GEM of TKI kairosfocus
F/N: What I now suspect has been going on: _____________ >> Rule 1: Power is not only what you have, but what an opponent thinks you have. If your organization is small, hide your numbers in the dark and raise a din that will make everyone think you have many more people than you do. Rule 2: Never go outside the experience of your people. The result is confusion, fear, and retreat. Rule 3: Whenever possible, go outside the experience of an opponent. Here you want to cause confusion, fear, and retreat. Rule 4: Make opponents live up to their own book of rules. “You can kill them with this, for they can no more obey their own rules than the Christian church can live up to Christianity.” Rule 5: Ridicule is man’s most potent weapon. It’s hard to counterattack ridicule, and it infuriates the opposition, which then reacts to your advantage. Rule 6: A good tactic is one your people enjoy. “If your people aren’t having a ball doing it, there is something very wrong with the tactic.” Rule 7: A tactic that drags on for too long becomes a drag. Commitment may become ritualistic as people turn to other issues. Rule 8: Keep the pressure on. Use different tactics and actions and use all events of the period for your purpose. “The major premise for tactics is the development of operations that will maintain a constant pressure upon the opposition. It is this that will cause the opposition to react to your advantage.” Rule 9: The threat is more terrifying than the thing itself.
(When Alinsky leaked word that large numbers of poor people were going to tie up the washrooms of O’Hare Airport, Chicago city authorities quickly agreed to act on a longstanding commitment to a ghetto organization. They imagined the mayhem as thousands of passengers poured off airplanes to discover every washroom occupied. Then they imagined the international embarrassment and the damage to the city’s reputation.)
Rule 10: The price of a successful attack is a constructive alternative. Avoid being trapped by an opponent or an interviewer who says, “Okay, what would you do?” Rule 11: Pick the target, freeze it, personalize it, polarize it. Don’t try to attack abstract corporations or bureaucracies. Identify a responsible individual. Ignore attempts to shift or spread the blame. >> ______________ 1 --> These are of course some of Saul Alinski's Rules for [Neo-Marxist] Radicals. (And, DK, since you are monitoring to try the Rules 3, 4, 5 & 11 credibility kill by red herring- strawman- ad hominem tactic game, when on p. xix RFR, Alinsky refers to the revolution, in the context of 1971 that strongly points to a modified marxian frame of thought, but in the ideas context of exactly that: the marxian frame of thought on revolutionary transformation by the masses towards the socialist and onwards the ideal, hypothetical golden age communist state. So, to cite p.10 on the marxian frame of thought is quite legitimate, even though he is not an orthodox, Moscow or Peking partyline Marxist Leninist or Maoist. Don't forget that Marx and Engels saw ancient Christian communitarianism per Ac 2, 4 & 5 as a proto-communism, and that they actually argued that the rise of Christianity in the Empire was in effect a prototype of the triumph of socialism.) 2 --> The utter cynicism in rules 4, 5 and 11 easily explains the pattern of demands and unresponsiveness to reason and evidence we have been seeing over the past several months. 3 --> That is, the point has been to personalise, strawmanise and ridicule, not to seriously engage issues on the merits. 4 --> But the threshold of incivility was irrevocably passed this week gone, when an attack blog that imagines that vulgarity, abuse and outing behaviour are adequate responses to serious points on the merits, was spun off from MF's blog. 5 --> Such destructive polarising incivility is a revelation of the moral and intellectual bankruptcy of the objectors who resort to it, and those who enabled it by using or tolerating attempted outing tactics and the disrespect of ignoring serious inputs on the flimsiest of excuses. 6 --> But what about the issue of living up to rules? Isn't that failure a proof of hypocrisy and doesn't it mean that any tactics that expose the hypocrites are warranted? Isn't it true that we only act decisively when we think the angels are on our side and the devils on the other, and that we need to exaggerate even small points of concern in order to set the climate of retreat on the other side that makes for advantageous negotiations? 7 --> Not at all. All it reveals is the moral self-blindness of the radical objectors. 8 --> For, moral struggle to do the right is the lot of us finite, fallible, fallen, struggling and too often ill-willed sinners. Y'know, biblical illiteracy and dismissive contempt towards the scriptures that lie at the base for traditional morality in our civilisation are now so common that I will add a key citation or two on this, first from what was recently dismissed as an "obscure" Epistle by Paul -- which is actually the hard core of NT theology:
Rom 2: 1 Therefore you have no excuse, O man, every one of you who judges. For in passing judgment on another you condemn yourself, because you, the judge, practice the very same things. 2 We know that the judgment of God rightly falls on those who practice such things. 3 Do you suppose, O man-you who judge those who practice such things and yet do them yourself-that you will escape the judgment of God? 4 Or do you presume on the riches of his kindness and forbearance and patience, not knowing that God's kindness is meant to lead you to repentance? 5 But because of your hard and impenitent heart you are storing up wrath for yourself on the day of wrath when God's righteous judgment will be revealed. 6 He will render to each one according to his works: 7 to those who by patience in well-doing seek for glory and honor and immortality, he will give eternal life; 8 but for those who are self-seeking1 and do not obey the truth, but obey unrighteousness, there will be wrath and fury . . . . 13 For it is not the hearers of the law who are righteous before God, but the doers of the law who will be justified. 14 For when Gentiles, who do not have the law, by nature do what the law requires, they are a law to themselves, even though they do not have the law. 15 They show that the work of the law is written on their hearts, while their conscience also bears witness, and their conflicting thoughts accuse or even excuse them 16 on that day when, according to my gospel, God judges the secrets of men by Christ Jesus. Gal 6: 1 Brothers,1 if anyone is caught in any transgression, you who are spiritual should restore him in a spirit of gentleness. Keep watch on yourself, lest you too be tempted. 2 Bear one another's burdens, and so fulfill the law of Christ. 3 For if anyone thinks he is something, when he is nothing, he deceives himself. 4 But let each one test his own work, and then his reason to boast will be in himself alone and not in his neighbor. 5 For each will have to bear his own load. James 3:4 Look at the ships also: though they are so large and are driven by strong winds, they are guided by a very small rudder wherever the will of the pilot directs. 5 So also the tongue is a small member, yet it boasts of great things. How great a forest is set ablaze by such a small fire! 6 And the tongue is a fire, a world of unrighteousness. The tongue is set among our members, staining the whole body, setting on fire the entire course of life,1 and set on fire by hell.2 7 For every kind of beast and bird, of reptile and sea creature, can be tamed and has been tamed by mankind, 8 but no human being can tame the tongue. It is a restless evil, full of deadly poison. 9 With it we bless our Lord and Father, and with it we curse people who are made in the likeness of God. 10 From the same mouth come blessing and cursing. My brothers,3 these things ought not to be so. 11 Does a spring pour forth from the same opening both fresh and salt water? 12 Can a fig tree, my brothers, bear olives, or a grapevine produce figs? Neither can a salt pond yield fresh water. 13 Who is wise and understanding among you? By his good conduct let him show his works in the meekness of wisdom. 14 But if you have bitter jealousy and selfish ambition in your hearts, do not boast and be false to the truth. 15 This is not the wisdom that comes down from above, but is earthly, unspiritual, demonic. 16 For where jealousy and selfish ambition exist, there will be disorder and every vile practice. 
17 But the wisdom from above is first pure, then peaceable, gentle, open to reason, full of mercy and good fruits, impartial and sincere. 18 And a harvest of righteousness is sown in peace by those who make peace. [ESV]
9 --> So, as Jesus of Nazareth highlighted, a key task is to be aware of the potential planks in our own eyes even as we set out to help our BROTHERS and SISTERS with the sawdust that has got in their eyes. Likewise, let me add from the relevant part of the Sermon on the Mount:
Matt 7: 1 “Judge not, that you be not judged. 2 For with the judgment you pronounce you will be judged, and with the measure you use it will be measured to you. 3 Why do you see the speck that is in your brother's eye, but do not notice the log that is in your own eye? 4 Or how can you say to your brother, ‘Let me take the speck out of your eye,’ when there is the log in your own eye? 5 You hypocrite, first take the log out of your own eye, and then you will see clearly to take the speck out of your brother's eye. 6 “Do not give dogs what is holy, and do not throw your pearls before pigs, lest they trample them underfoot and turn to attack you. [ESV]
10 --> Once there is a failure to accept that partnership in moral struggle, the self-blindness we have been seeing leads to a destructive demonisation of the other, and this is a major root of the arrogance, disrespect, undue polarisation, outright rudeness, contempt, disrespect and hostility verging on hate we have so plainly seen. 11 --> And, these are of course precisely the sort of signs of might makes right amorality triggered by evolutionary materialism that Plato warned against 2350 years ago in The Laws Bk X. 12 --> And, it is precisely the same Plato's Cave moral blindness that makes the person who launched an attack blog not see the irony of dismissively citing the clip from Plato but not recognising how aptly it applied to his sort of rude and disrespectful factionalism. ______________ It is high time that we do better than that. GEM of TKI kairosfocus
Mnug: "But when you keep repeating “show me the target” and I keep saying, “there are multiple targets,” one has to wonder. 1. If I can show you just one target in ev would you be satisfied? 2. What would convince you that what I am showing you would qualify to meet your expectation of what constitutes a target in ev?" Why not point out one of the targets and see what she says? I'm assuming a target in this context is a goal in the program against which new 'individuals' are measured and ones that more closely match the target are allowed to 'breed' and the rest are destroyed? Should be easy enough to show the target and the place in the code where the comparison is made. ellazimm
MathGrrl @253:
Please point out the target.
You would not recognize a target if one was painted on your forehead. I refuse to humor someone who doesn't know what a search is, and doesn't understand what a target is. At my post in @241 I asked you to demonstrate your understanding of searches and targets. Let me know when you have done so. I also requested that you demonstrate that you understand algorithms and evolutionary algorithms. I'm still waiting. Until then, I'd be wasting my time with someone who doesn't know what I am talking about. If we can get on the same page concerning the subject of discussion perhaps we would have some basis on which to proceed. But you've shown a fundamental lack of competency to understand and discuss these issues. I'm willing to help. I am not saying you're incapable or unwilling. After all, you were the one who came here to UD claiming that you had already grasped all the fundamental ideas required to understand these subjects. But when you keep repeating "show me the target" and I keep saying, "there are multiple targets," one has to wonder. 1. If I can show you just one target in ev would you be satisfied? 2. What would convince you that what I am showing you would qualify to meet your expectation of what constitutes a target in ev? Your demands are impossible to meet, because they are impossible demands. They are impossible demands because they refuse to define what qualifies as having met the demand. Don't pretend you are being reasonable. No one here accepts that claim, because all the evidence to date demonstrates otherwise. Can you change? Will you? Mung
MathGrrl:
Nope, just questions. I can’t force anyone to answer them, so they’re certainly not demands. I note that you failed to answer them in your comment. If you choose to do so, I would be interested in reading your response.
So the difference between a question and a demand is that one can force a response to a demand but one cannot force a response to a question? If you were interested in reading my response, you would have read my response. You didn't read my response, so I conclude that you were not interested in reading my response. Please don't pretend interest where you have none. It's insulting. You've claimed to be interested in Intelligent Design. Prove it. I'm amenable to discussion. I ask for just one "question" which you have asked to which I have not provided a response. One ping. One ping only. (In my best Sean Connery accent.) On the other side, you've appeared here with numerous un-substantiated claims, such as your claim that ev has no targets. Do you agree or disagree that you have proffered such claims? Need I provide links? You have also made claims about Schneider's response to the Montanez paper which are likewise without basis in fact. If you were only here "asking questions," that would be one thing. But you're not here just asking questions. You are also here making assertions. So the burden to answer is not just upon me, and kf, and cy, it's also upon you. Yet while we have all provided answers, you have provided none. You have merely engaged in repeating your demands. Mung
MathGrrl @253:
Proof by repeated assertion is unconvincing, to say the least.
Thank you, thank you, thank you! I agree! Please stop repeating yourself. Your repeated assertions fail to convince anyone. But your claim that I am repeating myself lacks any factual basis. What I have done, repeatedly, is seek ways towards dialogue. What you have done, repeatedly, is avoid dialogue. Mung
MathGrrl @250:
With respect, you really should read the source material of the ev paper and Schneider’s PhD thesis for yourself. There is no Hamming Oracle in ev and your characterization of “let the targets wander around a bit” doesn’t reflect how ev models known evolutionary mechanisms.
With respect (I swear I've said this before) you really should read the source material of the ev paper and Schneider’s PhD thesis for yourself. You don't understand what a Hamming Oracle is. You don't understand what a Hamming Distance is. You don't understand how ev works. I would be willing to swear on a stack of Bibles that I've already explained what it is that ev is intended to model. Did you even read what I wrote? The "genome" is subject to mutation. That includes both the "recognizer" and the binding sites. What about that is so difficult for you to understand? Mung
MathGrrl @247:
There is no Hamming Oracle in the ev implementation.
First, one must understand what a Hamming Oracle is. Schneider apparently knows what a Hamming Oracle is, and what a Hamming Distance is, and thus does not launch as an objection to the Montanez paper that ev does not employ such an oracle. MathGrrl, on the other hand, apparently does not know what a Hamming Oracle is or what a Hamming Distance is. I explicitly posed those questions to MathGrrl in my post @244. She could have done the research. It certainly looks like she couldn't be bothered, and thus once again demonstrates a true lack of desire to understand the issues which are being debated. (See: Hamming Distance.) I repeat: Schneider does not object to the Montanez paper on this matter. See for yourself: http://www-lmmb.ncifcrf.gov/~toms/paper/ev/dembski/dissection.html Mung
Schneider has responded to the Montanez et al. paper and demonstrated that the authors misunderstand significant and salient points about ev.
The Schneider response to the Montanez paper has been shown to be without merit. It was not that Montanez et al failed to understand ev, it was that Schneider failed to understand the Montanez paper. (No Free Lunch.) Schneider's response was apparently off the cuff and not well considered, as demonstrated by his failure to address the issues raised by the Montanez paper. I look forward to seeing MathGrrl's continued participation in discussion on this matter. Mung
MG: Perhaps it has not dawned on you that the situation has now fundamentally changed, once your side has tolerated outing behaviour and increasingly disrespectful rhetoric leading to the creation of an attack blog that resorts to vulgarity as well as slander-laced outing behaviour as its main tactics. Madam, you are now associated with and unavoidably tainted by a cesspit of misbehaviour, and have a lot to answer for. As was already noted long since above. I see where you try above to discredit and dismiss the analysis of ev that fits with what we can see for ourselves. Sorry, the "don't believe yer lyin eyes" tactic is long past its sell-by date. And as for "the journal has only published three articles . . . " that is simply rudely out of order and disrespectful to the need to address issues on the merits. Yes, Mr Schneider denies that the situation with ev is as described, as would be expected. His core problem is not with how long BC has been published or how many articles have been published or who sits on its editorial board [can you kindly compare the editorial policies of Annalen der Physik c 1905 and its decision to publish four somewhat controversial articles by a then unknown patents clerk with a freshly minted PhD granted in the context of an appeal by one of the members of the panel of reviewers?], or with whether he can try to make it out that once a target [or a cluster of such] moves it is not being tracked and hit by the equivalent of a barrage of guided missiles, but with the facts we can see for ourselves. And, yes, I am saying that ev is in effect a software simulation of an inefficient servo mechanism, one evidently prone to instability and needing a lot of tweaking to get it working as desired; as Mung documented. In short, we could picture ev as a barrage of self-replicating missiles chasing a moving formation, where in each generation half lose lock and are self-destructed, being replaced by doubling the half of the population that remains closer to lock. As for the "there is no Hamming oracle" assertion, pardon me, but the number of mistakes metric is a Hamming metric. And the number of mistakes is used to cull the population, i.e. we see exactly a Hamming oracle --
[from previously excerpted at 196, from the BC paper:] f/n 3: "A Hamming oracle uses the Hamming distance (number of bits that differ from a target sequence) as its fitness metric" where from f/n 2: "A software oracle is a software object that answers queries posed to it. In our case, a software oracle is a function that takes in a configuration and returns a value denoting the fitness of that configuration"
-- on warmer colder hill climbing. All of course within a conveniently and intelligently set up island of function where trends get you pointed right. The only sort of evolution that such an entity would even crudely model would be micro-evo, which is not in dispute even by modern young earth creationists. The use of such fitness functions and trends is the fatal flaw, as this begs the question of islands of function in vast seas of non-function. Which is exactly the issue that the CSI metrics highlight. Before you can non-question- beggingly hill climb within an island of function, you must first get there, i.e. the challenge that a GA should first generate itself out of lucky noise into a functioning program perhaps by starting with a Hello World, and evolving step by functional step, is valid. And, unanswered. And, we have abundant evidence of the climate of hostility, not within a journal, but across a wide cross section of the academic community, and now to the point where I see people who imagine that outing behaviour and crude vulgarities suffice to respond on matters of fact, reasoning and logic. You ought to be ashamed of the level your side has sunk to. And, I see where you have again plainly deliberately dodged aside from the specific response in 34 - 5 above [and that in earlier threads, which gave relevant analyses and calculations on cases that were illustrative, using data supplied in links and in at least one case data provided by you], just as earlier you deliberately dodged the fact that I gave summary responses in 23 -4 (with explanation of a serious moral hazard in 28) and in 34 - 5. As of now, this sort of clever selection tactic has to be seen as a willfully deceptive strawman tactic, not a mere accidental oversight. It has happened far too many times. I therefore reiterate the challenge that you have so assiduously and cleverly ducked again, which leads a reasonable onlooker to infer that the most probable explanation is that you resort to rhetorical tricks of distraction because you have no serious answer on the merits, and no reasonable explanation for the more unsavoury actions that have occurred, such as the snide allusion to how Galileo was forced to recant by threat of torture, when in fact the ones using force and threats in the modern discussion of design theory are the materialist neo magisterium in the holy lab coats, as can be seen from the recent Gaskell case [and the still playing out fiasco at Synthese], and from others ranging back through Sternberg and onward to Bishop and Kenyon. And so, I repeat:
On CSI and its "rigour," that has been addressed over and over again, in most specificity to the issue of rigour, at 34 - 5 above. Similarly, the talking points MG tends to use over and over as though they have not been cogently answered were last dissected in 23 - 24 above. And, the overall summing up of the issues MG has needed to explain herself on has been kept up in the editorial response to Graham at no 1 in the CSI newsflash thread; which MG has persistently ignored. When it comes to ev, 137 above shows my links to the places in the CSI Newsflash thread where it is dissected by Mung. (One of MG's tactics seems to be to wait until something is buried under enough posts in a thread, or has been continued in a successor thread, before repeating the assertion that was rebutted.) She knows or should know better than she has acted.
Drumbeat repetition of already cogently answered claims as though there has been no moving on beyond the point where such were asked, is not a responsible reply. In that context, making additional demands, especially in a climate of the sort of vulgarity and outing behaviour already seen, comes across as disrespectful and uncivil. The attempt to discredit and dismiss those on the other side without seriously addressing issues on the merits comes across as outright rude. In that context, repeated refusal to address reasonable response, comes across as arrogant. It is your side that has worked very hard to polarise the situation, and so you now have to live with the consequences, madam. You have a lot of fence mending to do, to even begin to come across as a reasonable person engaging a reasonable discussion on reasonable terms. In short, "congratulations": your ilk of objectors associated with MF's blog have successfully undermined the presumption of good faith on the part of such objectors to the design inference and to CSI. So, from now on, you and others of like ilk have to first establish that you are civil persons acting in good faith to be entertained for reasonable discussion. Which you have not even begun to address. (We know enough about the Alinsky methods, attitudes and tactics.) Good day, madam. GEM of TKI kairosfocus
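To make the "number of mistakes is a Hamming metric" point concrete, here is a minimal sketch -- Python, illustrative only, not ev's actual code -- of a software oracle in the BC paper's sense (a function that takes in a configuration and returns a fitness value), with the fitness being a Hamming distance that is then used to cull the worse half of the population:

    def hamming_oracle(config, target):
        # Hamming distance: number of positions that differ from the target
        return sum(a != b for a, b in zip(config, target))

    def next_generation(population, target):
        # rank by 'mistakes' (Hamming distance to the target), discard the
        # worse half, then double the better half -- the cull described above
        ranked = sorted(population, key=lambda c: hamming_oracle(c, target))
        survivors = ranked[:len(ranked) // 2]
        return survivors + list(survivors)

Whether ev's mistake count reduces to exactly this is the point in dispute; the sketch only shows what the claim amounts to.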
Mung, In your post 245 you provide a reasonable overview of ev that should allow us to resolve the question of whether or not it can be modeled as a targeted search. Thank you for that.
1. Each “genome” contains multiple binding sites. 2. Each “genome” contains a recognizer. What, you may reasonably ask, is the purpose of the recognizer?
Schneider makes it clear in the ev paper that he is modeling what he observed in real world biological organisms. The recognizer co-evolves with the binding sites. Neither is specified in advance and the sections of the genome that represent them will be different in different runs of ev. There is no target for either.
3. Each generation, half the population is replaced. How, you might reasonably ask, is it decided who shall "die" and who shall "live" to "reproduce?"
As described very clearly in the paper and the source code, the relative fitness of the digital genomes is related to the number of binding site errors coded for. The binding sites reflect what is observed in those real world biological organisms. This is on the first page of the ev paper.
4. Schneider measures which genomes have more mistakes. 5. It is the "mistake makers" who are replaced. What, you might logically ask, determines how many "mistakes" are present in each genome?
Natural selection is modeled in ev, very simply, by removing the half of the genomes that code for the most binding site errors. This very roughly reflects the greater likelihood that a real organism with such errors would fail to reproduce.
6. The "genome" of each survivor is mutated. As a result, a binding site may be changed, or the recognizer may be changed. What, you may reasonably ask, happens if the number of mutations per genome is increased? [Hint: See the Horse Race paper mentioned by kf above]
High mutation rates due to environmental factors can destroy real world populations, just as unreasonably high mutation rates can prevent digital organisms from adapting successfully to their virtual environments.
How does the program know when to stop?
It doesn't. ev runs for the number of generations specified or until the person running it gets bored. The fascinating result is that Rsequence converges to Rfrequency just as Schneider observed in real biological organisms, and it stays at that level rather than going higher or lower. That suggests that there is some profound mathematical relationship hiding in there.
Only someone who has deliberately closed their eyes to the facts would deny that ev has targets.
You haven't identified any targets. Once again, the sections of the genome that represent the recognizer and the binding sites co-evolve, with different results on every run. There is no target for the solution. The only thing I can imagine you might be thinking of as a target is the number of binding sites, but that is a constraint reflecting the simple model of the real world that Schneider is investigating with ev. There is nothing in the code that sets some number of binding site errors as a target for the solution. Just as in the real world, more fit genomes tend to reproduce, but what makes them fit is determined by evolutionary mechanisms. MathGrrl
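Setting the dispute over the word "target" aside for a moment, the per-generation cycle that both sides describe above (mutate; count binding-site errors; replace the worse half) can be sketched as follows. Python, purely schematic, with invented helper names; in the real program the error count comes from the recognizer scanning the genome for its binding sites, which is elided here:

    def ev_generation(population, count_errors, mutate):
        # 1. rank every genome by its number of binding-site errors
        ranked = sorted(population, key=count_errors)
        # 2. the half coding for the most errors 'dies'
        survivors = ranked[:len(ranked) // 2]
        # 3. survivors replicate to restore the population size
        population = survivors + [genome[:] for genome in survivors]
        # 4. every genome is then mutated; recognizer and sites alike can change
        return [mutate(genome) for genome in population]

The disagreement above is over whether count_errors here functions as an oracle scoring distance to a target, or as a model of differential reproductive success.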
Mung,
Frankly, I think I have some idea of what MathGrrl is engaged in. How much time can it get us to waste responding to repeated repetitious repetitions?
I'm not interested in wasting anyone's time, least of all my own. I would simply like to understand CSI in sufficient detail to test the claims of ID proponents with respect to it. As I've noted before, the scientists and mathematicians I know would be thrilled to have someone showing that level of interest in their work, especially with a willingness to put in time and effort to research it. MathGrrl
Mung,
Schnedier’s PhD thesis isn’t about ev. Not sure why you think it’s relevant, especially since it explicitly discusses targets.
Schneider's PhD thesis is of interest because he wrote the original version of ev to test his thesis before defending it. The most interesting result of ev isn't that a small subset of known evolutionary mechanisms could create Shannon information in a genome, it is that Rsequence converges to Rfrequency just as Schneider found in real world biological organisms. Understanding the thesis is essential to understanding what ev is modeling. Now, I'm curious to understand what you see as targets discussed in Schneider's PhD thesis, if you'd be kind enough to expand upon your statement. MathGrrl
Mung,
Rather than criticize me for doing so, perhaps you could help move the conversation forward by providing a mathematically rigorous definition of CSI, as described by Dembski, and demonstrate in detail how to calculate it for the four scenarios described in my guest thread?
I don’t think it’s unfair to be critical of your stance, as you just keep repeating the same demands over and over. And that’s precisely what they are, demands.
Nope, just questions. I can't force anyone to answer them, so they're certainly not demands. I note that you failed to answer them in your comment. If you choose to do so, I would be interested in reading your response. MathGrrl
Mung,
If you believe that ev can be modeled as a targeted search
There’s no need to model ev as a target search. It is a target search. No “model” required.
Proof by repeated assertion is unconvincing, to say the least. You also cut out the meat of my question. Here it is again for your convenience: This is a simple issue to resolve. If you believe that ev can be modeled as a targeted search, please identify the target either in the ev paper or in the Evj source code. Please point out the target. MathGrrl
CannuckianYankee,
I think the bottom line with MG is that her question has been sufficiently answered
If that were the case, you would be able to respond to my questions to you in comment 190 of this thread. In fact, no ID proponent has yet provided a rigorous mathematical definition of CSI as described by Dembski nor has any ID proponent used such a definition to demonstrate in detail how to calculate CSI for any of the examples in my guest thread. Further, kairosfocus has failed to answer the questions I asked in comment 59 in response to his non-response to my request for a definition and example calculations. kairosfocus continues to refer to his comment 23, but that neither provides a rigorous mathematical definition of CSI as described by Dembski nor does it provide any detailed examples of how to objectively calculate the metric for any of my examples. I must admit that I am more than bemused by this entire situation. ID proponents, including kairosfocus, make some very strong claims about CSI being a clear indicator of the involvement of intelligent agency, but when asked cannot even define the metric with any rigor. Not only that, but it is very clear that no ID researcher has actually bothered to attempt to calculate CSI and publish the results for review and extension. Those two points make it clear that any claims about CSI are either literally nonsensical (due to the lack of a definition) or completely unsupported (due to the lack of calculations). What really astonishes me, though, is the response I have received when I ask for more clarity. Instead of an actual definition and real examples, I have received a number of non-responsive comments followed by assertions that my questions have been answered. This is not, in my experience, the way that scientists and mathematicians respond to questions about their areas of expertise. Rather than closing ranks with other ID proponents, please support your statement that my questions have been "sufficiently answered" by copying and pasting the rigorous mathematical definition of CSI in response to this comment, along with a detailed example of how to calculate it for one of my scenarios. If you can't do that very simply, you need to reconsider your claim. MathGrrl
Chris Doyle,
Evolutionists like “Mathgrrl” take their starting point from Dawkins: those who don’t believe in evolution are “ignorant, stupid, or insane, (or wicked, but I’d rather not consider that).”
Care to document that statement with anything I've written here on UD or elsewhere? Based on your refusal to support your insulting comments over on Mark Frank's blog, I suspect not.
Obviously, we cannot expect any respect or decency from people like that. We can only expect rudeness, evasiveness and double-standards.
Once again you demonstrate why you have no business criticizing the online manners of others. MathGrrl
kairosfocus,
Now, on the “co-evo” of binding and reception sites, this boils down to, we let the targets wander around a bit, so the negative feedback used to reduce “mistakes” — i.e. to reduce Hamming distance — is more of a servo-mechanism than a straight regulator; to use control system terms.
With respect, you really should read the source material of the ev paper and Schneider's PhD thesis for yourself. There is no Hamming Oracle in ev and your characterization of "let the targets wander around a bit" doesn't reflect how ev models known evolutionary mechanisms.
The fact of targetting — as Mung documented in so much specific detail — has not changed.
Indeed, the fact that ev does not have an explicit target remains. MathGrrl
CannuckianYankee,
Just for the record, it appears that MG came here originally as an assignment of some sort from a blog called “In Moderation”
Nope, I'm not on an assignment from anyone. I've been following the ID movement in general and UD in particular since the Dover trial. Based on that experience, I find the claims by ID proponents with respect to CSI to be the most obviously testable ones I've seen here. That's what prompted my delurking. Mark Frank has a lovely online persona but that doesn't mean I'd let him be my puppeteer. MathGrrl
CannuckianYankee,
“In that comment I make the point that “ev has a goal of co-evolving binding sites and their recognizers so that the Shannon information in the binding sites can be measured.” Darwinian ToE holds that complex life (now acknowledged as containing highly complex information in the form of DNA), and the required increase in such information, is an accident of chemical and physical processes without intervention from a mind or intelligence of any sort. Thus, evolution did not involve a computer algorithm with a goal to co-evolve, or to assist evolution in any way according to Darwinian evolutionary understandings.
I don't see anyone claiming otherwise. Genetic algorithms model what we observe in the real world, not the other way around.
Computer programs, which purport to demonstrate how evolution can produce complex biological information from mere chemical and physical processes, are therefore suspect when there is a “goal” as you say. Evolution supposedly has no goal or “target.” Schneider’s own language regarding ev is full of indications of a targeted search, as has been pointed out several times.
And yet no one has been able to point to the target in either the ev paper or in the Evj source code, nor has anyone addressed my repeated note that "ev has a goal of co-evolving binding sites and their recognizers so that the Shannon information in the binding sites can be measured. The only feedback provided is the number of sites recognized. There is no target for the content of either the binding sites or the recognizers. In fact, the makeup of those parts of the genome will be different in different runs."
The only way a computer program purporting to demonstrate the efficacy of Darwinian evolution by its own definition could do so would be for the computers to first of all design and construct themselves, and then to design and construct the programs that demonstrate how it is possible for Darwinian evolution to work.
GAs such as Tierra, ev, and the Steiner problem solutions we've been discussing model known evolutionary mechanisms and known aspects of real environments. This is no different in principle from modeling the weather or plate tectonics or planetary orbits. Your objection misses this point.
"ev is an evolutionary search algorithm proposed to simulate biological evolution. As such, researchers have claimed that it demonstrates that a blind, unguided search is able to generate new information. However, analysis shows that any non-trivial computer search needs to exploit one or more sources of knowledge to make the search successful"
Leaving aside what could be a long discussion about the issues with modeling evolutionary mechanisms as a search and several other issues with the last sentence, I refer you to this paper: http://www.ncbi.nlm.nih.gov/pubmed/10781045 which is referenced on Schneider's website. The core point it makes is, as Schneider puts it, "the genetic information in biological systems comes from the environment." GAs like ev demonstrate the ability of known evolutionary mechanisms to accomplish this, without the need for any intelligent agency. MathGrrl
kairosfocus,
The bottomline is that the — peer-reviewed, Dec 15, 2010 Bio Complexity 2010(3):1-6. doi:10.5048/BIO-C.2010.3 — Dembski et al vivisection of ev turns out to be quite correct, despite all dismissals and obfuscations.
That's quite incorrect. Schneider has responded to the Montanez et al. paper and demonstrated that the authors misunderstand significant and salient points about ev. Even the section you quote has at least one error. There is no Hamming Oracle in the ev implementation.
Let’s just say that in the current climate of hostility, Dembski et al would not have been published in such a journal unless their article had serious merit on matters of substance.
Bio-Complexity is an online journal that published three articles and one review in 2010, including the one you reference, and one article so far in 2011. All are from ID proponents. The editorial team includes many of the authors of those articles, including Dembski and Marks. Do you really think there was much hostility directed to that paper? MathGrrl
I’m a gunner on a ship in WWII. I’m trying to hit an enemy aircraft that threatens to destroy my ship. Does that aircraft qualify as a target in your mind? Does my ship qualify as target? Why?
My ship is moving. My gun is moving. Every other projectile is a different projectile, and they are moving as well. The aircraft is moving. I score hits and it loses pieces! The aircraft itself is changing. So many things changing. Why does that change the fact that my ship and that airplane are targets? It doesn't. Anyone can see that the mere fact that things are moving or changing doesn't mean there are no targets. MathGrrl's objection is absurd. Mung
Dear reader, I've rejected, for now, any hope that MathGrrl is seriously interested in any of the subjects he/she/it raises. The evidence is quite clear. I have dealt with the subject of ev at length, and from all appearances MathGrrl never even read what I wrote. I imagine many of you feel the same way. There are many subjects here which are intertwined, but let's see if we can set forth some basic facts.
1. Each "genome" contains multiple binding sites.
2. Each "genome" contains a recognizer. What, you may reasonably ask, is the purpose of the recognizer?
3. Each generation, half the population is replaced. How, you might reasonably ask, is it decided who shall "die" and who shall "live" to "reproduce?"
4. Schneider measures which genomes have more mistakes.
5. It is the "mistake makers" who are replaced. What, you might logically ask, determines how many "mistakes" are present in each genome?
6. The "genome" of each survivor is mutated. As a result, a binding site may be changed, or the recognizer may be changed. What, you may reasonably ask, happens if the number of mutations per genome is increased? [Hint: See the Horse Race paper mentioned by kf above]
How does the program know when to stop? Only someone who has deliberately closed their eyes to the facts would deny that ev has targets. Mung
MathGrrl, What is a Hamming Distance? What is a Hamming Oracle? Seriously, it appears as if you lack understanding of some of the foundational concepts to understand what we are discussing here. So let's see if we can address that. How does Schneider decide where to perform his measurements for information content? Does he somehow know in advance where the binding sites are going to be? No, that can't be. You've already rejected that idea. So then, how does he know what to measure and where in order to determine whether Rfreq = Rseq? Do tell, please. Mung
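For any reader following along, the textbook definitions being asked about fit in a few lines (Python; illustrative):

    def hamming_distance(a, b):
        # number of positions at which two equal-length strings differ
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    hamming_distance("10110", "10011")   # -> 2

    # a Hamming oracle is then just a fitness function built on that distance:
    def hamming_fitness(config, target):
        return -hamming_distance(config, target)   # fewer mistakes = fitter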
Mung: Nyquist on stability is of course closely related to the above on gain and phase margins. The Laplace transforms mentioned are closely related to the Fourier transforms used in frequency analysis of communication systems. Indeed, the jω axis is the frequency response axis. The sigma axis relates to damping (especially in the left half of the plane). It relates to transient behaviours. But this is beginning to dig into much deeper waters than a blog discussion thread can reasonably bear. Let's just note we are here looking at the beginnings of the study of linear time invariant systems, which is the beginning of the study of dynamical systems governed by differential or difference equations (the latter for discrete time). The fruit of it is that the per generation behaviour of ev can be seen as a discrete time pattern, which can then be analysed on its patterns and processes, as a target-tracking control system. In that system, "mistakes" is obviously a comparator output, a Hamming distance metric, and the process of approach is critically dependent on in effect target tracking. Behaviour is further shaped by a perceptron structure, per Dembski et al, which biases towards the sort of bit patterns that are to be expected. The noisy nature of the process is suggestive on why tweaking is important to achieve convergence. GEM of TKI kairosfocus
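A toy version of that discrete-time picture -- Python, an invented first-order tracking recursion, not ev's actual dynamics; it only illustrates how a per-generation error signal decays noisily toward zero under feedback:

    import random

    def track(target, gain=0.4, noise=0.5, steps=50):
        # x[k+1] = x[k] + gain*e[k] + disturbance, with e[k] = target - x[k]
        x, errors = 0.0, []
        for _ in range(steps):
            e = target - x                     # comparator output ('mistakes')
            x += gain * e + random.gauss(0.0, noise)
            errors.append(e)
        return errors                          # decays toward 0, perturbed by noise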
Frankly, I think I have some idea of what MathGrrl is engaged in. How much time can it get us to waste responding to repeated repetitious repetitions? Anyone want to help me develop a bot to respond to any future MathGrrl post? I mean heck, we already know with a high degree of certainty what the content of the post will be. This is an opportunity for ID in action! We just need to come up with the specification. Better yet, let's see if we can develop a program that can tell the difference between MathGrrl and a troll. Mung
MathGrrl @188:
This is a simple issue to resolve.
It's not as simple as you make it out to be. Convincing someone who does not believe in the existence of targets of the presence of one is not an easy thing!
If you believe that ev can be modeled as a targeted search...
There's no need to model ev as a target search, for that is what it is. You seem to need a lesson in the absolute basics. Please answer the following questions: 1. What is a search? 2. What is a target? 3. Can you conduct a search without a target?
...please identify the target either in the ev paper or in the Evj source code.
You didn't read what I have previously written, did you. I made it clear that ev has multiple targets. PLURAL. I also made it clear that the location of the targets change for each run, but thereafter remain fixed during the run. If you're going to continue to pretend that you're engaging any points I raised you seriously need to do a better job. One additional question: 4. Do you know what a genetic algorithm is? Mung
MathGrrl @163:
I believe we have reached the point of significantly diminishing returns with respect to the discussion of CSI in this thread. I will continue to monitor it...
Deja vu. Is it just me, or is this the second time MathGrrl has returned to UD only to assert the same demands and then depart without actually engaging?
While I have little hope that it will happen in this particular thread, I do suspect that this topic will arise in the future here at UD and I look forward to engaging in the discussion with you then.
I don't think anyone here takes you seriously. I've offered ways to move the debate along a number of times, so have others. You've never pursued those opportunities. Mung
Yes it is, I agree. It’s actually a parallel to what’s going on in the ev algorithm with a moving target.
I would not be at all surprised to also find a connection to coding/communication/information theory. When was radar developed?
During World War II he was particularly involved with servomechanism problems. - http://en.wikipedia.org/wiki/Ralph_Hartley
The Nyquist stability criterion can now be found in all textbooks on feedback control theory. - http://en.wikipedia.org/wiki/Harry_Nyquist
Mung
I really do not wish to get into theological issues in-thread...
Understood and respected. Mung
MUNG, MG & CY: Let me try to do a closed loop diagram using text style elements:

T'get ->(+/-)--e-->[PLANT]--+--> o/p
           ^                |
           |<----[ F/B ]----|

(Sample the o/p and feed it back to the comparator, subtracting the f/b from the target i/p.) Forward path: the target sets where the plant aims for. Feedback path: a sample of o/p is fed back to compare with the target and create the error signal. That error signal is used to drive the plant so that e -> 0. When e = 0, the plant is on target. As said before, in a servo, the target point is usually moving. The o/p tracks based on the dynamics of the system. This is best analysed via differential equations, converted into the complex frequency domain by Laplace transforms. In a MIMO system, the transfer function is a matrix, and things get very interesting. Equivalently, difference equations and z transforms [effectively delay elements] can be used. This brings in the biggie problem: lags tend to give rise to instability, and the diagrams Schneider shows for ev have such a tendency. Lags are equivalent to phase shifts, and that is frequency dependent. If you are unlucky, at some frequency f you meet the Barkhausen criterion -- loop gain B*A(o/l) = 1 with a further 180 degrees of phase lag [the subtraction already supplied 180 degrees] -- so you get oscillations. Before that point, you get in effect a tendency to damped oscillations. I suspect that this is where some of the noisiness Schneider has in the graph is coming from, especially as random injections will in effect inject high frequency noise, which is exactly what will push you towards the Barkhausen criterion. Some poles in the implied transfer functions are approaching the instability criterion. Name of the game here is phase and gain margin. GEM of TKI PS: Mung, I really do not wish to get into theological issues in-thread [I only addressed Camping because of the global media effect.] But I do point you to 2 Pet 3:3 - 12 [the same text HC took out of context], for a balance. kairosfocus
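The gain and phase margin point can be seen numerically in a toy unit-delay loop -- Python, with invented coefficients, purely to illustrate the Barkhausen boundary; below loop gain 1 the output settles, at gain 1 it oscillates indefinitely, above 1 it diverges:

    def loop_response(gain, steps=12, setpoint=1.0):
        # y[k] = gain * (setpoint - y[k-1]): negative feedback with one delay
        y, out = 0.0, []
        for _ in range(steps):
            y = gain * (setpoint - y)
            out.append(round(y, 3))
        return out

    loop_response(0.5)   # settles: 0.5, 0.25, 0.375, ... -> 1/3
    loop_response(1.0)   # oscillates: 1.0, 0.0, 1.0, 0.0, ...
    loop_response(1.5)   # diverges: 1.5, -0.75, 2.625, ...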
MathGrrl @160:
I strongly suggest you read the source material, namely the ev paper and Schneider’s PhD thesis for yourself.
Even better than the source material is the source code. Have you read that yet? Schneider's PhD thesis isn't about ev. Not sure why you think it's relevant, especially since it explicitly discusses targets.
If you still believe that ev models a targeted search, please explain why you think so, with reference to the ev paper and the program source code, and we can no doubt have an interesting discussion.
Already been there and done that. No discussion ensued. Merely further repetition of the same assertion on your part, which stands refuted. Enough with the false promises please. Please define what you mean by a "target." Perhaps once you've done that we can proceed. I can't show you something you refuse to admit even exists. Let me give you an example: I'm a gunner on a ship in WWII. I'm trying to hit an enemy aircraft that threatens to destroy my ship. Does that aircraft qualify as a target in your mind? Does my ship qualify as a target? Why? Mung
KF, Sorry, in my last post, I noticed something missing - it should read; "Yes it is, I agree. It’s actually a parallel to what’s going on in the ev algorithm with a moving target, and what's necessary in order to hone in on that target." CannuckianYankee
O/T:
Acts 17:31 Because He has fixed a day when He will judge the world righteously (justly) by a Man Whom He has destined and appointed for that task, and He has made this credible and given conviction and assurance and evidence to everyone by raising Him from the dead.
kairosfocus, how's your Greek? Here's Young's Literal Translation:
because He did set a day in which He is about to judge the world in righteousness, by a man whom He did ordain, having given assurance to all, having raised him out of the dead.'
http://www.greeknewtestament.com/B44C017.htm See Also Weymouth:
seeing that He has appointed a day on which, before long, He will judge the world in righteousness, through the instrumentality of a man whom He has pre-destined to this work, and has made the fact certain to every one by raising Him from the dead.'
Mung
KF, "The comparison to a missile tracking a flying jet is a bit closer than just analogy." Yes it is, I agree. It's actually a parallel to what's going on in the ev algorithm with a moving target. CannuckianYankee
KF, "10^48" Whoops, in quoting Meyer, I wrote those out as "10>48." (for example) Must get that right. :) CannuckianYankee
My little corner: I would say that ev implements one or more algorithms [algor + coding + data structures --> programs], rather than being strictly speaking an algorithm itself. But that is a minor refinement. Yes, I thought of wording my post differently in order to make a distinction, but for now I'll not worry about it unless it becomes an issue. :) ev consists of a number of algorithms and/or modules, but in its essence it at least pretends to be an evolutionary algorithm.
Mung
Skipping ahead: Perhaps this has already been mentioned, perhaps not. markf, I'll understand if you don't have either of these. On information in nucleotides and what they code for, and in amino acid sequences: Information Theory, Evolution, and The Origin of Life. See also Yockey's earlier work: http://www.amazon.com/gp/product/0521169585 Mung
CY: The comparison to a missile tracking a flying jet is a bit closer than just analogy. A targetting system that is moving towards a target in a config space is in effect a servosystem, so the similar analysis to a targetting system in physical space applies. Regulators try to maintain a state, servos try to track a target, two of the major facets of control systems. Such a system is going to require:
1: a set point input -- that is what tells the system what is to be tracked.
2: a comparator, giving rise to an error function by comparing desired to actual performance, that
3: drives an actuator that moves
4: the plant towards the target. Where also
5: a sensor will detect actual current performance, to produce the
6: feedback signal for comparison.
There is a whole discipline of engineering and related fields on this. If ev is tracking towards a moving target, it will need these or similar elements. And, from Mung's earlier clippings and the analysis by Dembski et al, such are not exactly missing in action. GEM of TKI kairosfocus
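Those six elements map onto a skeleton like the following -- Python, schematic only; whether ev's code factors this way is exactly what is in dispute, and every name here is invented:

    def servo_step(set_point, plant_state, sense, actuate):
        # one tick of the loop; iterate until the error is driven toward 0
        # 1: set_point is the input telling the system what to track
        feedback = sense(plant_state)        # 5 & 6: sensor -> feedback signal
        error = set_point - feedback         # 2: comparator -> error function
        return actuate(plant_state, error)   # 3 & 4: actuator drives the plant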
Mung: MG has had her well-warranted explanations for the CSI metrics, and for how they apply to real world, biological systems. She has had her answers to her four original questions and her (was it a dozen?) more. And more. Again and again, week after week, for coming on three months now. She simply refuses to accept that there are answers on the table. That is why I keep repeating the following now. On CSI and its "rigour," that has been addressed over and over again, in most specificity to the issue of rigour, at 34 - 5 above. Similarly, the talking points MG tends to use over and over as though they have not been cogently answered were last dissected in 23 - 24 above. And, the overall summing up of the issues MG has needed to explain herself on has been kept up in the editorial response to Graham at no 1 in the CSI newsflash thread; which MG has persistently ignored. When it comes to ev, 137 above shows my links to the places in the CSI Newsflash thread where it is dissected by Mung. (One of MG's tactics seems to be to wait until something is buried under enough posts in a thread, or has been continued in a successor thread, before repeating the assertion that was rebutted.) She knows or should know better than she has acted. GEM of TKI kairosfocus
Mung: Useful questions and observations. My little corner: I would say that ev implements one or more algorithms [algor + coding + data structures --> programs], rather than being strictly speaking an algorithm itself. But that is a minor refinement. GEM of TKI kairosfocus
Rather than criticize me for doing so, perhaps you could help move the conversation forward by providing a mathematically rigorous definition of CSI, as described by Dembski, and demonstrate in detail how to calculate it for the four scenarios described in my guest thread?
I don't think it's unfair to be critical of your stance, as you just keep repeating the same demands over and over. And that's precisely what they are, demands. That's not the sort of thing reasonable people do, especially when invited to guest post. A reasonable person doesn't take a hard stance that unless someone meets their demands to their satisfaction there is no room for discussion. If you wanted to move things along you would have responded to vjtorley's quite reasonable request. If you want to move things along, that would be a good start. When you do, I'll begin to take seriously your claim that you want to understand CSI. Until then, I refuse to play your game. Plenty of others have gone down that path with you to no avail. Mung
CY: Useful elaboration. It is indeed the balance of accessible resources and the scope of config space that determines a reasonable outcome. I do however stress that the real crunch factor is not so much a probability estimate as the search challenge in a config space. When the available P-time q-state search resources in terms of number of available states are like 1 in 10^48 of the configs for 500 bits (on the gamut of our solar system) or 1 in 10^150 of the possibilities for the configs of 1,000 bits (on the gamut of the observed cosmos), then you cannot claim to be mounting a credible search of the space of possibilities. Multiverse speculations try to get around this by imagining either an extension to our observed domain or separate domains, so that somehow with enough sub-cosmi there is enough room for resources so someone gets lucky. But, what is the evidence for that multiverse? Only, the desire for such additional resources. Philosophical speculation, not science. And, speculation that runs into the other side of the design inference, as a multiverse would have to have in effect a cosmos bakery to cook up sub-cosmi suitable for life. That makes the underlying bakery itself quite fine-tuned. And, complicated, fine tuned objects at an operating point are a strong sign of design anyway. GEM of TKI kairosfocus
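The two ratios can be checked directly -- Python, with the resource counts as stated assumptions: roughly 10^102 Planck-time quantum states for the solar system and 10^150 for the observed cosmos, the figures usually used in these threads:

    from math import log10

    solar_system_states      = 10**102   # assumed P-time, q-state bound, solar system
    observable_cosmos_states = 10**150   # assumed bound for the observed cosmos

    print(log10(2**500 / solar_system_states))        # ~48.5  -> 1 in ~10^48
    print(log10(2**1000 / observable_cosmos_states))  # ~151.0 -> 1 in ~10^150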
MathGrrl, Let's see if we can find anything we both (all) agree on. Is ev an algorithm? How would you define or describe an algorithm? Is ev an evolutionary algorithm? If so, what makes it so? Is there such a thing as a search algorithm? Is an evolutionary algorithm a class of search algorithm? Regards Mung
If you believe that ev can be modeled as a targeted search
There's no need to model ev as a target search. It is a target search. No "model" required. Tell us, MathGrrl, what do you know about search algorithms? Mung
KF, "That the target — a flying jet usually — normally moves, does not remove the fact of targetting, and the missile hits as long as it can move fast enough to close to the moving target and as long as it has an oracle — the IR signal from the jet exhaust being the usual one." Good analogy, and this should make it abundantly clear. CannuckianYankee
I for one understand that when there's limited resources, there's no (magical) free lunch. CannuckianYankee
On a related matter. Here's Meyer from Chapter 10, discussing Dembski: "As I investigated the question of whether biological information might have arisen by chance, it became abundantly clear to me that the probability of the necessary events is exceedingly small. Nevertheless I realized, based on my previous conversations with Bill Dembski, that the probability of an event by itself does not alone determine whether the event could be reasonably explained by chance. The probabilities, as small as they were, were not by themselves conclusive. I remember that I also had to consider the number of opportunities that the event in question might have had to occur. I had to take into account what Dembski called the probabilistic resources." (SITC pg. 215) He then addresses issues pertaining to the typical Darwinian understandings of how by chance, "amino acids or nucleotide bases, phosphates, and sugars" in an "ocean-sized soup" were able to arrange the elementary building blocks for life. Then he states: "Dembski's calculation was elegantly simple and yet made a powerful point. He noted that there were about 10^80 elementary particles in the observable universe. (Because there is an upper limit on the speed of light, only those parts of the universe that are observable to us can affect events on earth. Thus, the observable universe is the only part of the universe with probabilistic resources relevant to explaining events on earth.) Dembski also noted that there had been roughly 10^16 seconds since the big bang . . . . He then introduced another parameter that enabled him to calculate the maximum number of opportunities that any particular event would have to take place since the origin of the universe. Due to the properties of gravity, matter, and electromagnetic radiation, physicists have determined that there is a limit to the number of physical transitions that can occur from one state to another within a given unit of time. According to physicists, a physical transition from one state to another cannot take place faster than light can traverse the smallest physically significant unit of distance (an indivisible "quantum" of space). That unit of distance is the so-called Planck length of 10^-33 centimeters. Therefore, the time it takes light to traverse this smallest distance determines the shortest time in which any physical effect can occur. This unit of time is the Planck time of 10^-43 seconds." (SITC pg. 215-216) Based on this, "there are a limited number of opportunities for any given event to occur in the entire history of the universe" since the Big Bang. The probabilistic resources that would be required for Darwinian processes alone to account for the origin of life far exceed those available, and Meyer discusses this at length in this chapter. Thus, and in short, it's not merely that the probability of Darwinian processes succeeding is small; it's that Darwinian processes, of necessity, require probabilistic resources far exceeding those available. And this argument leaves me further mystified as to how any hypothetical multiverse could add anything to the necessary probabilistic resources to affect events on Earth, since any universe outside our own is far outside the observable universe, and so would be irrelevant to the problem at hand.
The only process that could originate biological information within the parameters of the available probabilistic resources (and it's interesting that teleology actually transcends the necessity of probabilistic resources), is design. The evidence leads to this conclusion, and this conclusion alone. Also, this dispels the oft repeated charge that ID is an argument from incredulity. http://rationalwiki.org/wiki/Argument_from_incredulity CannuckianYankee
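For what it is worth, the three quoted parameters multiply out directly (Python; note that these particular inputs give about 10^139 opportunities -- Dembski's published bounds of 10^120 and 10^150 rest on somewhat different figures):

    particles       = 10**80   # elementary particles in the observable universe
    seconds         = 10**16   # rough seconds since the Big Bang
    transitions_sec = 10**43   # 1 / Planck time: max physical transitions per second

    max_events = particles * seconds * transitions_sec
    print(len(str(max_events)) - 1)   # 139 -> at most ~10^139 elementary events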
KF, "Once we have CSI beyond that, we are most credibly there by design. On induction from consistent empirical observations and on related analysis of available search resources on chance plus necessity." Yes. This is pretty much the main point of SITC. Which is why Darwinists appear reluctant to acknowledge issues regarding CSI as it pertains to a design inference. If they admit to it, then the whole Darwinian rhetorical strategy crumbles. Another issue that is related is the multiverse hypothesis. If Darwinian natural processes (resources) alone are sufficient for the origin of complex biological information, then an increase in such resources provided by a hypothetical multiverse should not be required. So with that, I'm mystified why it's even brought up in materialist circles. It's as if it's a solution they apply to a problem they don't admit exists. CannuckianYankee
CY: Shannon was using a metric for info based on symbol frequencies in typical messages. The relative frequencies were interpreted as probabilities, and this was used with the log metric to get additivity: Ik = log(1/pk) = – log pk. Meaning and/or function are secondary in that context, which concerned things like channel capacity in the presence of noise. When we use that metric and then add the restriction that we are coming from events E on an island of function or zone of interest T (T being the detachable specification that puts you there), we are almost at a CSI metric. The threshold is then put in to identify when the needle in the haystack hurdle exceeds the search resources of our solar system or of the observed cosmos. Once we have CSI beyond that, we are most credibly there by design. On induction from consistent empirical observations and on related analysis of available search resources on chance plus necessity. (Information depends on high contingency, and the only credible explanations for a contingent outcome like that are choice or chance. And chance will be controlled by the relative weight of identifiable clusters of possibilities. Here, on/off the zone of interest.) GEM of TKI kairosfocus
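The additivity being referred to is just the log law: the information of independent symbols adds because their probabilities multiply. A quick check (Python):

    from math import log2

    def info_bits(p):
        # Hartley/Shannon: I = -log2(p), measured in bits
        return -log2(p)

    info_bits(1/4)         # 2.0 bits for a 1-in-4 symbol
    info_bits(1/2)         # 1.0 bit for a 1-in-2 symbol
    info_bits(1/4 * 1/2)   # 3.0 bits: probabilities multiply, bits add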
Joseph - I'm trying to find out how you (well, you and kairosfocus) define meaning and function in a way that is formal enough that we can start doing mathematical analyses. I'm still none the wiser, I'm afraid. I'm not sure what your quote of Weaver is meant to say, as you previously wrote this:
IOW “information” as it is used by IDists is the same as every day use.
Weaver's saying that Shannon Information isn't the same as the everyday use. So are you saying he's wrong, or that IDists don't use Shannon Information? Heinrich
Heinrich- Here are a couple of clues for you to follow: Pertaining to Shannon:
The word information in this theory is used in a special mathematical sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning.- Warren Weaver, one of Shannon's collaborators
Pertaining to function:
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- Wm. Dembski page 148 of NFL
If you really have difficulty understanding meaning and function then perhaps you are in the wrong place. Joseph
Here's what Meyer says regarding the DNA enigma: "When Watson and Crick discovered the structure and information-bearing properties of DNA, they did indeed solve one mystery, namely, the secret of how the cell stores and transmits hereditary information. But they uncovered another mystery that remains with us to this day. This is the DNA enigma-the mystery of the origin of the information needed to build the first living organism." Stephen C. Meyer, "Signature in the Cell," pg. 13. CannuckianYankee
"Notice, how persistently resistant evo mat advocates and fellow travellers are to the issue of getting TO islands of function in a sea of a vast space of configs on random walks and trial and error, where by far and away most of these will be decidedly non-functional." Excellent point. I think MG "and fellow travelers" would do well to re-read Dembski's book on the matter and continuously consider origins of life - and what Meyer refers to as the DNA enigma. ev and programs like it are irrelevant in consideration of the arrival of the first life. CannuckianYankee
CY: What I will say is that it is pretty clear that originally MG was part of a co-ordinated group with KL and another. What happened is that the latter two patently became too rowdy. MG has continued with a rhetorical strategy of ignoring substantial answers and corrections, and repeating her talking points ad nauseam. To the point where her behaviour is now unfortunately willfully insistent and deceptive. Not to mention, making some pretty nasty snide allegations or insinuations. There is a very long list of points where she needs to answer seriously on the merits, or to explain herself. Unfortunately, she shows scant indication of doing so. In the absence of such, she needs to be regularly reminded of what she needs to do. You will see that I now have a standard challenge to her:
On CSI and its "rigour," that has been addressed over and over again, in most specificity to the issue of rigour, at 34 - 5 above. Similarly, the talking points MG tends to use over and over as though they have not been cogently answered were last dissected in 23 - 24 above. And, the overall summing up of the issues MG has needed to explain herself on has been kept up in the editorial response to Graham at no 1 in the CSI newsflash thread; which MG has persistently ignored. When it comes to ev, 137 above shows my links to the places in the CSI Newsflash thread where it is dissected by Mung. (One of MG's tactics seems to be to wait until something is buried under enough posts in a thread, or has been continued in a successor thread, before repeating the assertion that was rebutted.) She knows or should know better than she has acted.
I will consistently re-post this challenge until she begins to answer seriously on the merits. So that all who care about the truth and about serious warrant for claims, will be able to see the real balance on the merits. GEM of TKI kairosfocus
CD, "Fortunately, we have a few serious, courteous opponents who are open-minded and conversant with the facts. More like them please!" Yes we have, and I appreciate them as much as those I agree with. CannuckianYankee
Hear Hear, Kairosfocus. I'm currently reading Wells' "The Myth of Junk DNA" and I'm beginning to fully appreciate just how much introns have changed the game. The Scientific American article, "The Unseen Genome: Gems Among the Junk" says it all with this statement: "The failure to recognize the importance of introns 'may well go down as one of the biggest mistakes in the history of molecular biology'" If a given nucleotide performs different functions [depending on the different ways (and times) it can be used to express genes], then it is simply inadequate to talk about random mutations in DNA without addressing the full range and impact of such mutations with reference to the influence of introns. Chris Doyle
markf, Thanks for the clarification regarding your blog. I guess I've been reading posts under one thread entitled "in Moderation?" However, the top banner seems to suggest that this is a blog in itself. I think the bottom line with MG is that her question has been sufficiently answered, and that she is not about to have UD posters and ID supporters change their minds on the issue, given the strategy she is applying. I think a better strategy would be to research new information to support her views. She has not done so. When she has raised objections, the strategy of KF and others has been to further clarify with just such new information - but ultimately, there is probably even more information that has not been mentioned here. But so far, the record speaks for itself. CannuckianYankee
KF and others, "It seems that we are at a stage where the Alinsky mentality has so pervaded sectors of the public, that they are unable to think that protecting civil discussion towards assessing the warrant of claims is a legitimate act." Alinsky - the ends justify the means. The problem with the means is that they don't always produce the desired ends (as Marxism has shown), and we've demonstrated that with MG. We can control the ends such that the means are counterproductive. UD has already done such controlling here by allowing MG to have her own thread, and by continuing to put up with her repetitive talking points - thus dispelling the notion that those with dissenting views are arbitrarily moderated at the emotional whims of the moderators. So we have control over that perception (that the perception continues on one particular blog is inconsequential - given the actions here), and I think the moderators here have been quite fair with those who come here with dissenting views. As I mentioned in an earlier post, some of those who have been moderated posted here for several years - I can name some of them, but I'd like to respect their privacy. Bottom line is that UD would not be UD without the contribution of dissenters towards the discussions at hand. UD would be biting off its own foot if it arbitrarily and emotionally booted out those who dissent from ID. But if the decision comes down to moderating her, it is not because she has not been superficially civil in language, but because by her own repetitive talking points she's not bringing anything new to the discussion; which I think would be quite a legitimate reason for moderation. This is why I suggested to her that she move on, even if, for one reason or another, she doesn't accept the conclusions. No one is forcing her to do so, but it wouldn't be an arbitrary and emotional whim to finally end the repetition in light of her apparent and underlying motives. CannuckianYankee
CD: At most, such simulations may help us see how micro evo can happen, within an island of existing function. Which is not controversial, even among modern Young Earth Creationists. Notice how persistently resistant evo mat advocates and fellow travellers are to the issue of getting TO islands of function in a sea of a vast space of configs on random walks and trial and error, where by far and away most of these will be decidedly non-functional. That holds for the first body plan, and it holds for major new body plans thereafter. Including accounting for the linguistic capacity that is so central to our humanity. The same linguistic capacity that the evo mat advocates have to use to make their claims. In short, the whole debate is a massive exercise in self-referential incoherence on their part. GEM of TKI kairosfocus
#202 CY I just stumbled upon your comment (I have not been following this discussion for several days). 1) Delighted that you have been reading my blog from time to time - welcome. 2) There is only one thread on the blog about UD moderation policy. Other threads are devoted to different items. Most of the participants are anti-ID ex-contributors to UD - but that was by evolution not design. 3) You are right that in that thread I did argue that there is no strategy on UD to suppress dissent through moderation or banning, and I deliberately responded to an item in more personal language than I would normally use to prove my point. (I was not moderated - so I guess my point was made.) 4) I have absolutely no reason to suppose that Mathgrrl is making any kind of test of UD moderation policy. She could have been a lot less polite if she wanted to try that out. I am sure she genuinely believes her concerns have not been met. 5) I still have no idea who she is or even if she is really a "she". (I did once think I knew - but it turns out there is more than one Mathgrrl in the world.) markf
F/N: A bit of explanation on targeting with servosystems will help. Think about an air-to-air missile like the classic Sidewinder. That the target -- usually a flying jet -- normally moves does not remove the fact of targeting, and the missile hits as long as it can move fast enough to close on the moving target and as long as it has an oracle -- the IR signal from the jet exhaust being the usual one. Such a missile goes ballistic if it loses lock and is no longer in target-location controlled flight. kairosfocus
Hello CannuckianYankee (re: posts 201 and 202). Two good posts: the first one being particularly interesting. Computer simulations of evolution created by intelligent designers are exactly that. They shed absolutely no light whatsoever on how nucleotides came to form DNA and how DNA itself evolved through random mutations (let alone how the cell itself evolved). This has to be the starting point of any computer simulation attempting to demonstrate the power of random mutation and natural selection. Evolutionists like "Mathgrrl" take their starting point from Dawkins: those who don't believe in evolution are "ignorant, stupid, or insane, (or wicked, but I'd rather not consider that)." Obviously, we cannot expect any respect or decency from people like that. We can only expect rudeness, evasiveness and double-standards. So the sooner such people withdraw from this debate the better. Fortunately, we have a few serious, courteous opponents who are open-minded and conversant with the facts. More like them please! Chris Doyle
CY: Significant. Especially so, since you can see above that I have been made a target of abusive slander, in obvious connexion with the mess that is going on at MF's blog. The slanderer's notion that blocking abusive comments is improper protection and privileging is in turn quite revealing. It seems that we are at a stage where the Alinsky mentality has so pervaded sectors of the public, that they are unable to think that protecting civil discussion towards assessing the warrant of claims is a legitimate act. And, all the time, the slanderer unwittingly reveals just why there is a pattern of evo mat advocates being banned at UD: far too many of them tend to be uncivil and abusive. Now, on the "co-evo" of binding and reception sites, this boils down to: we let the targets wander around a bit, so the negative feedback used to reduce "mistakes" -- i.e. to reduce Hamming distance -- makes the system more of a servo-mechanism than a straight regulator, to use control system terms. All this usually means is that the system is inherently more unstable [servos tend to be more headache-y as control systems], as the amount of tweaking we see above supports. (And you have in reserve self-modifying, i.e. so-called adaptive, control mechanisms.) The fact of targeting -- as Mung documented in so much specific detail -- has not changed. Nor has the basic reality we see: the system is designed, is tuned to produce a particular performance, and profits from injected active information. That is how it beats the search space limits. Intelligent design. GEM of TKI kairosfocus
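To make the servo/targeting point concrete, here is a minimal sketch in Python of a feedback loop whose error signal is Hamming distance to a target. The genome length, mutation rate and drift scheme are illustrative assumptions, not Schneider's actual code. Even when the target itself wanders (the "co-evolving" case), the loop still converges, because the oracle keeps reporting distance-to-target:

    import random

    def hamming(a, b):
        """Count positions at which two equal-length sequences differ."""
        return sum(x != y for x, y in zip(a, b))

    def servo_search(target, alphabet="01", mut_rate=0.05, drift=0.0,
                     max_steps=100000, seed=1):
        """Keep a mutant only if it does not increase Hamming distance to the
        target. `drift` lets the target itself wander each step (moving target)."""
        rng = random.Random(seed)
        target = list(target)
        current = [rng.choice(alphabet) for _ in target]
        for step in range(max_steps):
            if hamming(current, target) == 0:
                return step
            mutant = [rng.choice(alphabet) if rng.random() < mut_rate else c
                      for c in current]
            # The oracle: distance feedback decides which variant survives.
            if hamming(mutant, target) <= hamming(current, target):
                current = mutant
            if rng.random() < drift:  # let the target wander a bit
                target[rng.randrange(len(target))] = rng.choice(alphabet)
        return None

    print(servo_search("1011001110001101"))             # fixed target: converges
    print(servo_search("1011001110001101", drift=0.05)) # moving target: still converges

The implementation details are beside the point; the architecture is what matters. Wherever a distance-to-target oracle gates which variants survive, the search is targeted, whether or not the target moves.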
Joseph, "MathGrrl has to be a ruse and this is all a prank. When she spews stuff like:" Just for the record, it appears that MG came here originally as an assignment of some sort from a blog called "In Moderation" ... http://mfinmoderation.wordpress.com/2011/05/07/mathgrrls-csi-thread-cont/#comment-2381 ...hosted by markf. The blog holds a discussion among people have been banned from commenting on UD for one reason or another. Many of them are angry at UD for having placed them in moderation, and the discussion on that blog is almost exclusively centered around UD's moderation policy. There's not much discussion on the merits of either ToE or ID. In those discussions, many of the people who post here have been mentioned - sometimes in slanderous language - but I don't fault markf for that. I've been reading posts there for several weeks, and it appears that some of the comments from markf here are intended to test whether certain things he says will lead to him being moderated. He does not believe that people are moderated due to any particular policy, but based on the emotional whims of the moderators. So I would not be surprised if MG's continuous repetition is the result of an agreed-upon test of our moderation policy among the readers of that blog. If so, the premise of her question is not so much in trying to get answers to a scientific question, but rather to test how far she can go before being moderated, for the purpose of further confirming that moderations are arbitrary and frequent towards dissenting views. This leads to another issue. If MG is posting on a blog for former UD posters of dissenting views, then likely she is one of those former posters and is using another name. I got a hint of that when on the other blog, she erroneously posted under the name of one "Patrick," on 3 recent posts, then after catching herself and saying that she outed herself there, she explained that she was using her father's laptop, and that markf could decide what he was going to do with her 3 posts under that name; which is interesting, since markf apparently doesn't censor anything on that blog. CannuckianYankee
MG, "In that comment I make the point that “ev has a goal of co-evolving binding sites and their recognizers so that the Shannon information in the binding sites can be measured." Darwinian ToE holds that complex life (now acknowledged as containing highly complex information in the form of DNA), and the required increase in such information, is an accident of chemical and physical processes without intervention from a mind or intelligence of any sort. Thus, evolution did not involve a computer algorithm with a goal to co-evolve, or to assist evolution in any way according to Darwinian evolutionary understandings. One must continually keep this in mind when using computer algorithms to somehow evolve complex information or synthetic organisms. Unfortunately, Darwinists do not appear to keep this in mind. They ignore the very premise they're attempting to confirm. Computer programs, which purport to demonstrate how evolution can produce complex biological information from mere chemical and physical processes, are therefore suspect when there is a "goal" as you say. Evolution supposedly has no goal or "target." Schneider's own language regarding ev is full of indications of a targeted search, as has been pointed out several times. I find it interesting that you keep attempting to drive home a point regarding "rigorous mathematical quantifications" for CSI when the very premise by which Darwinian evolutionists attempt to rigorously quantify evolution - via computer programs that are designed, is suspect right from the very premise of the methodology compared with Darwinian evolution's own definition. The only way a computer program purporting to demonstrate the efficacy of Darwinian evolution by it's own definition could do so, would be for the computers to first of all design and construct themselves, and then to design and construct the programs that demonstrate how it is possible for Darwinian evolution to work. Computer programs are always artificial. The key part of "artificial" is "art." Art is the product of mind and intelligence. Supposedly biology is not artificial, but "natural." It doesn't produce synthetic organisms, but natural organisms. Therefore artificiality can in no way demonstrate natural processes according to Darwinian definitions and understandings of "natural." I have to repeat here what KF pointed out from Dembski in a peer reviewed paper: http://evoinfo.org/papers/vivisection_of_ev.pdf "ev is an evolutionary search algorithm proposed to simulate biological evolution. As such, researchers have claimed that it demonstrates that a blind, unguided search is able to generate new information. However, analysis shows that any non-trivial computer search needs to exploit one or more sources of knowledge to make the search successful." The problem with ev is that it is not a blind, unguided search; it is as Dembski states, an algorithm, which exploits sources of knowledge (read: "information") to reach a goal (of increased information), which is then interpreted from a Darwinian standpoint as demonstrating how Darwinian process can achieve a similar increase in information. I don't think anyone's denying that with ev there is an increase in information. What the detractors are saying is that it does so not by Darwinian processes as explained by the Darwinian ToE, but by artificial processes programmed into it by designers. 
Therefore, it is by definition a targeted search, with the goal of confirming what the programmers already believe about Darwinian processes; Mung showed how in his several posts on the matter, and if you read Dembski's entire paper, he demonstrates this empirically. This is also the point of Meyer's 13th chapter in SITC. Designed evolutionary algorithms are nothing more than an exercise in question-begging and viewpoint confirmation on the part of Darwinian evolutionists. And this recognition is extremely important in relation to your initial question regarding a rigorous mathematical quantification of CSI. And in demonstrating this, those who pointed it out are actually doing you a favor. It appears as though your initial question stems from an assumption that Darwinian processes ARE capable of producing and increasing complex information (so you also require a rigorous mathematical quantification of CSI, and you should be thankful that such quantification has been provided in several posts over the last several months). Unfortunately, what has been provided does not confirm your worldview. The logical thing to do would be to acknowledge this and move on, rather than attempting to drive home an already well-refuted point. You appear to base this assumption on examples such as the ev algorithm, which have been shown to be counterproductive - that is, assuming you're looking for an honest evaluation of evolution's abilities, and not simply a confirmation of what you already believe. CannuckianYankee
kf - True, but you didn't respond to my response at 158 to your reply at 157. I also asked you something (along similar lines) at 187. Heinrich
H: I don't know about J's response, but I answered at 157. GEM of TKI kairosfocus
Onlookers: It is time to draw some conclusions (some of which, regrettably but needfully, will be painful) on the past several months worth of exchanges at UD on this general topic. Some of those conclusions -- as just pointed out -- are not happy ones; and, it is to be noted before I go on that this morning I have received a comment elsewhere along the following lines:
[Condescending diminutive of my name] you're a delusional, dishonest, hypocritical, pompous, narcissistic dolt. You're going to get a lot of exposure here: [blog address of an attack blog, communicated to management, UD] Your [homosexual reference] buddies at UD won't be able to protect you there. The truth about you and your insane religious and political agenda will come out for all to see. Consider yourself 'outed'.
This is an example of the turnabout-accusation rhetorical attack, and of the crudely slanderous, uncivil and self-justifying mentality we unfortunately too often have to deal with on the part of objectors to design thought; here in the crudest form of utterly unwarranted personal insults. Perhaps, too, this commenter needs to know that there are applicable jurisdictions (jurisdictions where the US's fatally flawed libel laws do not hold) in which patently false and utterly unwarranted accusations are actionable. And even before we get to the level of action, the notion that "this is not a Sunday School," or the like, is a thinly disguised way of admitting that one is being rude, uncivil and out of order. The red herring, led away to the strawman caricature, and then the pouring on of ad hominems and igniting through incendiary rhetoric, the better to cloud, choke, confuse, poison and polarise the atmosphere, is the strongest proof of a want of basic broughtupcy and of an utter want of a serious case on the merits. Such a person should therefore pause and think twice before hitting send, when that message is going to be received in jurisdictions other than the ones s/he -- most likely, he -- has become used to. (And BTW, if you will take the moment to look above, you will see that when J went overboard above, I corrected him at once. Civility is the first requirement of serious dialogue that moves towards soundness and truth.) A commentator like this -- instead of resorting to abuse and insult -- would better expend his or her energy seriously addressing on the merits the issues here, where I have laid out what serious minded citizens have to think through if they are going to come to grips with origins science and the significance of the dominant a priori evolutionary materialist school of thought, not only for the world of thought but for our wider civilisation. People like the just cited, sadly, do not seem to understand the matches they are playing with, or the fires they can set in our civilisation, even though Plato warned in his The Laws, Bk X, 2,350 years ago as follows:
[[The avant garde philosophers, teachers and artists c. 400 BC] say that the greatest and fairest things are the work of nature and of chance, the lesser of art [[ i.e. techne], which, receiving from nature the greater and primeval creations, moulds and fashions all those lesser works which are generally termed artificial . . . . [[T]hese people would say that the Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [[Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT. (Cf. here for Locke's views and sources on a very different base for grounding liberty as opposed to license and resulting anarchistic "every man does what is right in his own eyes" chaos leading to tyranny.)] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might [[ Evolutionary materialism leads to the promotion of amorality], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [[Evolutionary materialism-motivated amorality "naturally" leads to continual contentions and power struggles; cf. dramatisation here], these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others [[such amoral factions, if they gain power, "naturally" tend towards ruthless tyranny; here, too, Plato hints at the career of Alcibiades], and not in legal subjection to them . . .
In the slightly more sophisticated form of the so-called new/gnu atheists, the same underlying attitude unfortunately still applies: a priori materialists see themselves as the "brights," and any who differ with them are therefore ignorant, stupid, insane or wicked. At the further sophisticated level we have been dealing with for some months now, all of that crudity of thought is fuzzed out by using indirection, allusion and suggestion, rather than direct declaration. That is how, for instance, MG managed to suggest, by citing Galileo's apocryphal "It still moves," that this is a case of religion persecuting science. Somehow, it slipped her attention that no-one is threatening anyone with the thumbscrews here, and if anything it is the Materialist Neo-Magisterium in the Holy Lab Coat that has been persecuting those whom it deems heretics in recent years. Similarly, in the eagerness to play the rhetorical game of pushing persuasive talking points through the tactic of drumbeat repetition -- see how easy it is ("nothing wrong with repeating a point over and over again, is there . . . ?") -- it became all too easy for MG to lose sight of the duties of care to truth, fairness, and reciprocity in a serious discussion. And, in the end, such behaviour becomes subtly, willfully deceptive; tantamount to lying. But such a process is so subtle that one may not see what one has actually done; until it is far too late. And that is why the thread above is so subtly painful. Oh, that it had gone down a different path, of genuine exchange of thoughts; as MG et al were invited to, over and over and over, in her case to the point of a guest post at UD. But, day by day, week by week, it became all too plain that the point was to project talking points and play the game of selectively hyperskeptical objection, not to actually engage in genuine exchange of ideas. So, the real bottom line for this thread was laid out in 34 - 35 above, to which, in the course of nearly a fortnight since, MG has plainly been unable to respond. We can therefore freely conclude that -- despite the many talking points to the contrary -- the concept of complex specified information is meaningful and relates to a key challenge in origins science. Secondly, the Chi metric -- as the log reduced form shows -- is based on well accepted information theory concepts, starting with the common basic definition of quantified information, Ik = log(1/pk). It then raises the issue of a threshold sufficient to swamp the search resources of the solar system or the whole cosmos, and in so doing arrives at a highly useful result. Namely, a criterion of difficulty by which sufficiently specific pieces of functionally meaningful information will be so isolated in the space of possible configurations that it is maximally implausible to try to explain them on chance and/or necessity. This is backed up by the needle in the haystack/infinite monkeys type analysis similar to that used to statistically ground the second law of thermodynamics. Such FSCI, however, is routinely and only observed to be the product of intelligence. And so, we are well warranted to infer from CSI or FSCI as reliable sign to the best, empirically and analytically warranted explanation, design. Never mind the ongoing drumbeat repetition of the many talking point objections to the contrary. (Indeed, we recall here how at a certain point Einstein's theory of Relativity became a subject of ideological objection in his native land. 
At one point, he was made the subject of a public meeting, with one speaker after another rising to attack the theory with shrill objections. His reply was that if his theory were false, just one speaker on the merits would have sufficed to overturn it. Likewise, in the face of a cloud of angry mosquitoes tanked up on talking points and spreading them far and wide, we have yet to see that one sound speaker on the merits.) GEM of TKI kairosfocus
Joseph - as you're still following this thread, could you answer my comments @152? Heinrich
MG: I am finished with trying to answer you on points, as the only result is dismissal and reiteration. The message that has got through, at length, is that you are so far utterly unresponsive to duties of care about truth, fairness or reciprocity in discussion. Secondarily, after coming on three months now, you show no signs of relevant capacity to handle the concepts and the mathematical reasoning associated with those concepts. That includes your as-yet unexplained confusion of a log reduction of the Dembski metric with a probability calculation, and your attempt to dismiss the issue of isolation of islands of function in large spaces of possibilities as irrelevant. In addition, as you seem to be an advocate for Schneider, you need to address the case where he tried to "correct" Dembski when the latter used what is in my experience the most common definition of information, Ik = log(1/pk) = - log pk (which is what I was introduced to in telecomms many years ago as the main quantification of info; all Dembski has done is to add the criterion that the relevant configs in the string be from a zone of interest, often related to meaning-based, coded function in a system such as DNA); and did so by trying to substitute a rarer synonym, "surprisal." Also, you need to answer how Durston et al used their functional state H-metric (based on Shannon's avg info per symbol metric, AKA entropy, AKA uncertainty) and indicated in their 2007 paper that:
The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. [notice the use of the concept of an island of function in a space] In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space.
Of course, Dembski's Chi metric in log reduced form [cf original post here and the onward linked thread] shows that an easy way to quantify that search challenge is to use a threshold beyond which sufficiently specific and isolated zones of interest -- notice the case Durston et al cite -- will be maximally hard to find on random walk plus trial and error searches, especially where the zone of interest is based on function, i.e. we have isolated islands of function in vast config spaces beyond the search resources of the observed solar system or cosmos. Which last, you -- as already noted -- tried to dismiss as irrelevant. In short, on the evidence we have in hand, the claim you often make of a lack of adequate warrant for an empirically based mathematical model and metric of an observed phenomenon described in the technical literature at least since Orgel and Wicken in the 1970's, i.e. complex specified information -- the only meaning of lack of rigour that is reasonable [notice your unresponsiveness to 34 - 35 above] -- is a product of your own refusal to engage the key concepts and their roots in standard work in information theory, and in light of the infinite monkeys/needle in the haystack type analysis. In further short, the well-warranted conclusion is that you are -- on evidence of coming on three months of attempted discussion in the teeth of drumbeat repetition of a wall of dismissive talking points -- being selectively hyperskeptical and/or willfully obtuse to the point of being willfully defiant and dismissive of what you know or should know. Which, in the context of promoting highly misleading talking points by drumbeat repetition in defiance of repeated correction, is tantamount to making willfully deceptive false claims. To lying, in brutally direct short. (A word I do not like to use, but which -- regrettably -- is looking ever more like the appropriate one.) And I am still deeply offended whenever I recall your snide, atmosphere-poisoning allusion to Galileo's whispered "it still moves" after he was forced to publicly recant by threat of torture. I remind you that no-one is threatening anyone with torture here, and that if anyone is playing the august magisterium imposing its views by fiat and threats to careers, it is the evolutionary materialist magisterium, as, say, the recent Gaskell case shows, and as earlier ones going back to the likes of Sternberg, Bishop, and Kenyon made all too plain. In short, you have indulged in a turnabout, blame-the-victim false accusation. You have some serious explaining and apologising to do, madam. For weeks or months now. I simply point you to 195 just above and the onward links above and in the previous thread. If you are interested in getting serious after coming on three months, that is. Good day, madam. GEM of TKI kairosfocus
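The internal consistency of Durston's quoted figures, and the link to the log-reduced Chi metric, can be checked with a few lines of arithmetic. A minimal sketch in Python; the only inputs are the figures quoted above (121 residues, a 20-letter amino acid alphabet, the ~10^49 island estimate and the 379-Fit value):

    import math

    residues, alphabet = 121, 20
    island = 1e49        # Durston's estimate of functional S12 sequences
    fits = 379           # Durston's Fit value for the S12 family

    space = alphabet ** residues  # whole 121-residue sequence space, ~10^157
    print(f"sequence space: ~10^{math.log10(space):.0f}")

    # Island as a percentage of the space -- should match the quoted 10^-106 %:
    pct = island / space * 100
    print(f"island as a percentage of the space: ~10^{math.log10(pct):.0f} %")

    # Dembski-style threshold test in the log-reduced form, Chi_500 = Ip - 500:
    print(f"Chi_500 = {fits} - 500 = {fits - 500} bits")

The percentage does come out at ~10^-106, as quoted. Note that on this single family the metric falls below the 500-bit solar-system threshold (Chi_500 = -121 bits), i.e. the filter defaults to no design inference for that item taken by itself -- the deliberately false-negative-prone conservatism of the approach; families with Fit values above 500 are the ones that cross the threshold.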
Joseph: It is increasingly clear that MG is simply pushing talking points, even in the teeth of patent reality. But this is reality, not one of those comedies where denial, denial, denial and dismissal, drumbeat fashion, can substitute for reality. The episode with the clips in 171 above is perhaps the clearest immediately accessible proof of it. The bottom line is that the -- peer-reviewed, Dec 15, 2010, Bio Complexity 2010(3):1-6. doi:10.5048/BIO-C.2010.3 -- Dembski et al. vivisection of ev turns out to be quite correct, despite all dismissals and obfuscations. Abstract:
ev is an evolutionary search algorithm proposed to simulate biological evolution. As such, researchers have claimed that it demonstrates that a blind, unguided search is able to generate new information. However, analysis shows that any non-trivial computer search needs to exploit one or more sources of knowledge to make the search successful. Search algorithms mine active information [f/n 1: "active information is defined as -log2(p/q) where p is the probability of success for an unassisted search and q is the probability of success for an assisted search. Informally, it is the amount of information added to the search that improves the probability of success over the baseline search."] from these resources, with some search algorithms performing better than others. We illustrate these principles in the analysis of ev. The sources of knowledge in ev include a Hamming oracle [f/n 3: "A Hamming oracle uses the Hamming distance (number of bits that differ from a target sequence) as its fitness metric" where from f/n 2: "A software oracle is a software object that answers queries posed to it. In our case, a software oracle is a function that takes in a configuration and returns a value denoting the fitness of that configuration"] and a perceptron structure that predisposes the search towards its target.[nb f/n 8: "Although all 256 positions along the genome [used in ev] are evaluated for errors and contribute to an organism’s fitness, the randomly placed binding sites are restricted to the second half of the genome. In Figure 1 of reference 16 [16. Schneider TD (2000) Evolution of biological information. Nucleic Acids Res 28: 2794-2799. doi:10.1093/nar/28.14.2794], these correspond to bases 126 to 261. There are other nucleotides whose identities are interpreted as weights, window values, or the bias in the construction of the perceptron. Five additional bases are used at the end to accommodate a sliding window used in ev." and f/n 9: "The target binding sites start at location 131 (zero-indexed) in the first Figure of reference 16. Thus, location 10 here corresponds to nucleotide 141"] The original ev uses these resources in an evolutionary algorithm. Although the evolutionary algorithm finds the target, we demonstrate a simple stochastic hill climbing algorithm uses the resources more efficiently.
Let's just say that in the current climate of hostility, Dembski et al. would not have been published in such a journal unless their article had serious merit on matters of substance. Mung simply provided clips and comments from Schneider that inadvertently corroborated the point of the critique of ev in the literature. Schneider's race horse page, as the rest of the discussion in the CSI thread will show, is particularly rich in such implicitly telling admissions. Similarly, we again see that MG is unwilling to face and address on the merits the specific challenges to her main claims. Notice how she is clearly unable and/or unwilling to click on links and address specific points on the merits. Let's repeat, again. First, on CSI and its "rigour," that has been addressed over and over again, in most specificity to the issue of rigour, at 34 - 35 above. Similarly, the talking points MG tends to use over and over as though they have not been cogently answered were last dissected in 23 - 24 above. And, the overall summing up of the issues MG has needed to explain herself on has been kept up in the editorial response to Graham at no 1 in the CSI newsflash thread; which MG has persistently ignored. When it comes to ev, 137 above shows my links to the places in the CSI Newsflash thread where it is dissected by Mung. (One of MG's tactics seems to be to wait until something is buried under enough posts in a thread, or has been continued in a successor thread, before repeating the assertion that was rebutted.) She knows or should know better than she has acted. GEM of TKI kairosfocus
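The "active information" bookkeeping in footnote 1 of that abstract is straightforward to illustrate. A minimal sketch in Python (the probabilities are round illustrative numbers, not figures from the paper):

    import math

    def active_information(p, q):
        """Active information per the paper's footnote 1: -log2(p/q), where p is
        the success probability of the unassisted (blind) search and q that of
        the assisted search."""
        return -math.log2(p / q)

    p = 2.0 ** -100  # blind search facing a 1-in-2^100 target (illustrative)
    q = 0.5          # assisted search that succeeds half the time (illustrative)

    print(f"{active_information(p, q):.0f} bits of active information")  # 99 bits

On this accounting, whatever raises the assisted search's success rate above the blind baseline has to be paid for in information built into the search -- which is exactly the charge the paper lays against ev's Hamming oracle and perceptron structure.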
kairosfocus - MathGrrl has to be a ruse and this is all a prank. When she spews stuff like:
The record shows that no ID proponent has provided a rigorous mathematical definition of CSI as described by Dembski
For the record I have to call her a liar- either that or she is purposely obtuse. Joseph
kairosfocus,
Simply go up to 171 and look to see the targetting in action for ev.
Your comment 171 shows no such thing. You seem to think that the recognizer or the binding site is some sort of target, but that simply shows confusion about how ev works. As I noted here: https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 and very recently requoted to CannuckianYankee, "ev has a goal of co-evolving binding sites and their recognizers so that the Shannon information in the binding sites can be measured. The only feedback provided is the number of sites recognized. There is no target for the content of either the binding sites or the recognizers. In fact, the makeup of those parts of the genome will be different in different runs." ev absolutely does not have a target for the solution. Again, if you disagree, please identify the target either in the ev paper or in the Evj source code. MathGrrl
Onlookers: Notice how I am now repeating the links to the answers that MG has studiously avoided for ten or more days now, just in this thread, including the stunt of looking only at comments from 61 or so onward, when the links went to comments above her artfully chosen cutoff. And this is just for this thread; she has studiously been unresponsive to cogent answers for over two months now, in thread after thread. GEM of TKI kairosfocus
Onlookers: Simply go up to 171 and look to see the targeting in action for ev. As for "tweaking," the clip in 171 shows it for what it is: fine-tuning to achieve intelligently designed, purposeful performance. The sad joke is that after composing the program, fine-tuning it for hitting targets measured with Hamming distances (the number of "mistakes" -- digital values to change to transform one point into another in a digital space -- is a Hamming distance metric by another name) and more, Schneider imagines that his program is a model of blind watchmaker chance variation plus natural selection creating macro-evo. The creation of Shannon info as such is no big deal; tossing a coin at random will create what can be quantified on a Shannon metric as information. The real challenge is to create FSCI beyond the threshold, without intelligent direction, and that is precisely the problem with Schneider's ev and the exact significance of the targeting, tuning and selection of nice trendy fitness functions that give rise to hill climbing. Again and again MG et al fail or refuse to see that the real issue is not hill-climbing within an island of function (micro-evo in effect) but getting to the shores of islands of function in large config spaces. And meanwhile it still remains the case that CSI and its "rigour" have been addressed over and over again, in most specificity to the issue of rigour, at 34 - 35 above. Similarly, the talking points MG tends to use over and over as though they have not been cogently answered were last dissected in 23 - 24 above. And, the overall summing up of the issues MG has needed to explain herself on has been kept up in the editorial response to Graham at no 1 in the CSI newsflash thread; which MG has persistently ignored. When it comes to ev, 137 above shows my links to the places in the CSI Newsflash thread where it is dissected by Mung. (One of MG's tactics seems to be to wait until something is buried under enough posts in a thread, or has been continued in a successor thread, before repeating the assertion that was rebutted.) MG is studiously ignoring the fact that her favourite talking point has been more than adequately answered, over and over again. Which is actually quite rude or uncivil, just as CY pointed out. She knows or should know better than she has acted and written. GEM of TKI kairosfocus
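The coin-tossing point can be made concrete: a random string maxes out a Shannon measure while carrying no functional specification at all. A minimal sketch in Python (the per-symbol figure is the standard empirical entropy estimate):

    import math
    import random
    from collections import Counter

    def entropy_per_symbol(s):
        """Empirical Shannon entropy, H = -sum(p_i * log2(p_i)), in bits/symbol."""
        counts = Counter(s)
        n = len(s)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    random.seed(0)
    flips = "".join(random.choice("HT") for _ in range(1000))

    print(entropy_per_symbol(flips))      # ~1.0 bit/symbol: full Shannon capacity
    print(entropy_per_symbol("H" * 1000)) # 0.0 bits/symbol: mere orderly repetition

Both strings are trivially produced without intelligence; what neither random tossing nor mechanical repetition supplies is the third thing -- a string that is at once highly contingent and functionally specific, which is the FSCI question on the table.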
kairosfocus, Your issues with the "tweaking" of parameters by Schneider to beat Dembski's UPB are addressed here: https://uncommondescent.com/intelligent-design/news-flash-dembskis-csi-caught-in-the-act/#comment-378783 The relevant paragraph is:
The second point is the discussion of Schneider's "horserace" to beat the UPB. You both make a big issue about Schneider tweaking the parameters of the simulation, population size and mutation rate in particular, but you don't discuss the fact that, once the parameters are set, a small subset of known evolutionary mechanisms does generate Shannon information. This goes back to my discussion with gpuccio on Mark Frank's blog where we touched on the ability of evolutionary mechanisms to result in populations that are better suited to their environment than were their parent populations. That, in turn, suggests that, while it might be possible to make a case for cosmological ID, there is no need to posit the involvement of intelligent agency in biology.
MathGrrl
CannuckianYankee, On a separate point....
I sense that civility is waning with your recent repetitions. Repetition can be uncivil when it doesn’t respect the fact that a question was answered with careful patience and knowledge-based insight.
Your assumption is incorrect, hence your conclusion does not follow. I am continuing to ask for a rigorous mathematical definition of CSI, as described by Dembski, and a detailed example calculation because neither have yet been provided. Perhaps you would care to answer the questions I posed to kairosfocus in my comment 59 of this thread? Here it is again for your convenience:
I have read through all of your responses since my comment numbered 60 in this thread and have yet to see you address the two very simple questions I've asked. Let's try to make some progress by breaking this down into simple questions that can be answered succinctly. First, you repeatedly claim that CSI has been rigorously defined mathematically, but nowhere do you provide that rigorous mathematical definition. You could eliminate the need for your assertions by simply reproducing the definition here in this thread, in a single comment without any extraneous material. Could you please do so? Second, you have yet to reply to my question in comment 59:
CSI, I have explicitly said, many times, is a descriptive concept that describes an observed fact
By this, are you asserting that it is not possible to provide a mathematically rigorous definition of CSI, even in principle? If your answer is yes, I think you have a disagreement with some of your fellow ID proponents. If your answer is no, could you please simply state the mathematically rigorous definition of CSI, as described by Dembski, in a single, stand alone comment, without myriad tangential points, postscripts, and footnotes? It would go a long way to clarifying your position.
With these two questions answered, again as succinctly as possible, I believe we can make some progress in the discussion. Are you willing to work with me on this?
Since you are claiming that I am continuing to ask questions that have already been answered, I presume that it is not a problem for you to reproduce those answers in response to this comment. MathGrrl
CannuckianYankee, Welcome to the discussion!
I really want to address one thing to MathGrrl: What are your criteria for determining that the ev program does not involve a targeted search? I think this is really key to one of the main disagreements here. So far I've only seen you assert that it does not, . . .
You must have missed the two comments I referenced above, this one in particular: https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 In that comment I make the point that "ev has a goal of co-evolving binding sites and their recognizers so that the Shannon information in the binding sites can be measured. The only feedback provided is the number of sites recognized. There is no target for the content of either the binding sites or the recognizers. In fact, the makeup of those parts of the genome will be different in different runs." MathGrrl
kairosfocus,
Similarly, Mung is not speculating, he gave citations from the text by Schneider (which we can all follow up), and I was able to confirm some of the key points through my own clips from Schneider.
I see that you continued your discussion of ev later in comment 171, but did not identify any target in your discussion there, despite quoting from the ev paper. This is a simple issue to resolve. If you believe that ev can be modeled as a targeted search, please identify the target either in the ev paper or in the Evj source code. MathGrrl
And I am not saying “i don’t need to calculate,” I am saying we have empirical data in hand on the matter that tells us the sort of order we are looking at, and that this is consistent with what common sense would have told us
Where is this empirical data? How is the resemblance of a portrait to a face objectively measured? IOW, how do you make the specification? Heinrich
Dr Bot: Please, try not to twist what I actually said, which was that the specification of a portrait -- not some vague resemblance of the sort burn marks on toast can yield, or the like -- will require sufficient complexity and specificity of information that it will not be achieved by chance and necessity on the gamut of our cosmos, with so high a degree of confidence that it is practically certain; similar to other cases of FSCI. The evidence -- as actually cited from those who do this sort of thing professionally -- is that the required info for a sculptural portrait is of the order of Mbits. (And I am not saying "I don't need to calculate"; I am saying we have empirical data in hand on the matter that tells us the sort of order we are looking at, and that this is consistent with what common sense would have told us.) As I read your string of one objection after another, I keep getting the feeling that you are twisting me into pretzels to try to fit some strawman ignoramus. Now, when you say many natural phenomena will not be found on a random walk, in part my answer is: of course, e.g. the DNA and the machinery to put it to work in the living cell. Such cells may be self-replicating, but their origin seems to be intelligent, per the basic point highlighted 200 years ago by Paley in his Ch II on the self-replicating watch, which is seldom mentioned when objectors hastily dismiss his watch argument in Ch I. Namely, when we see intricate machinery that does a job and then has the additional -- additionality is crucial here -- provisions that make it self-replicating, that is a further reason to infer design. In other cases, what you are suggesting is of this order: if one sets up a given outcome, and then hopes to replicate it by chance and necessity, that is unlikely on the gamut of the cosmos. For example, if 200 dice are tossed and the record of the toss is kept, the exact pattern is unlikely to recur in the history of the cosmos. That would be because the first toss has been turned into a specification of a very narrow cluster of possibilities. Each possibility is equiprobable, but the cluster of at-random tosses that are in no particular order so outweighs the one you are interested in that to find it a second time would be a practical impossibility. This is similar to how the same dice reading all 1's would be a practical impossibility on the gamut of the observed cosmos, from chance and/or the necessity of falling, then tumbling and settling. If you see 200 dice reading all 1's, the best bet is that this was by design. This is similar to the thermodynamic result and reasoning that explains how the O2 molecules in the room where you sit could with equal probability be in any one possible configuration as in any other. But the configurations where all the O2 molecules are clumped at one end of the room are so utterly outweighed by the numbers where they are more or less evenly scattered that we will reliably see the latter, not the former. Indeed, if you see a room that has the O2 molecules clumped like that, it is almost certainly by design, even if we do not know how that was done. So, the attempt to dismiss the needle in the haystack and infinite monkeys illustrations fails. BTW, the IM example was formerly advanced quite frequently by advocates for chance + necessity to yield OOL and evolution, including online. Of course, Weasel type arguments tried to weight the case as though the fitness function did not have to address seas of non-function and isolated islands of function. 
But that too is overwhelmingly reasonable, on many grounds, starting with what is needed, per observation, to get codes and algorithms. It is only now that we have shown what is being suggested that this has been abandoned and turned into an attempt to suggest that pointing out the non-viability of chance -- which has to be the source of variation, i.e. contingency, in the Darwin-type model -- is somehow a strawman misrepresentation. But in fact the natural selection half is a description that some variations will do worse than others and will be culled out over time. The variations have to come from chance processes, at least if you are a Darwinist. NS may explain survival of the fittest, but it does not explain the arrival of the fittest, this last being understood as reproductive advantage. I suggest you read App 1 point 6 here to see the point on macro vs micro states and relative statistical weight. GEM of TKI kairosfocus
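The 200-dice illustration is easy to put in numbers. A minimal sketch in Python (the ~10^150 figure is the cosmos-scale search-resource bound used throughout this thread):

    import math

    faces, dice = 6, 200

    # Probability of 200 fair dice all reading 1 -- or, equivalently, of
    # re-hitting any single pre-specified toss pattern:
    p = faces ** -dice
    print(f"p = 10^{math.log10(p):.0f}")  # ~10^-156

    # Even granting ~10^150 trials, the expected number of successes is tiny:
    print(f"expected hits in 10^150 trials: ~10^{math.log10(p) + 150:.0f}")

The all-1's reading is not special in itself; any single pre-specified toss is this isolated. The point is that a specification carves out so narrow a zone of the space of 6^200 outcomes that re-finding it by blind search is a practical impossibility -- the same logic as the O2-clumping case.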
You are not my teacher and I am not a lazy student ducking on an assignment.
So you claim that a 3d shape that resembles a real person cannot ever exist without design, and that it would contain > 1000 bits of functional information but you don't need to do any calculations to know this is true. Fair enough!
Further to all this, the point of the 500 – 1,000 bit threshold for FSCI is precisely that the quantum state Planck time resources of our solar system and of our observed cosmos beyond that, would be an impossibly small fraction of what would be required to search out a reasonable fraction of configs to reasonably expect to arrive at the relevant island of function on random walks plus trial and error. That has been pointed out in detail over and over again, including the point that the relevant scale of interaction, chemical interaction, takes up ~ 10^30 P-times for the fastest (ionic) interactions.
All true, and if you use the same criteria to judge many complex but natural phenomena you find that a random walk will not stand a chance of finding them. Your arguments are, and always have been, based on flawed reasoning but I guess I only have myself to blame for failing to educate you in this matter. Infinite monkeys will not produce lots of things observed to be the products of natural forces. It is a straw-man argument. DrBot
Dr Bot: You are not my teacher and I am not a lazy student ducking on an assignment. Right from the beginning, the link I gave on the nodes-arcs approach has in it an onward link on 3-d modelling, which was actually a supplemental for teaching math in high school. You will find in it a report on the typical sort of scope of information used in sculptural 3-d models, and it is as I have reported. Let me clip a relevant paragraph:
To get a sculptural face that looks closely like that of George Washington or Nefertiti [[i.e. we have defined a specific function], a dense network of quite precisely located points has to be set up; so that a smooth, accurate portrait can be made. [by contrast, Old Man of the Mountain or anything reasonably close would be recognisable as somewhat face-like, and would be “acceptable”; so it is not anywhere nearly so tightly specified. That's why with a spot of imagination, one can easily see face-like figures in wood paneling, clouds in the sky, and in brown marks on toast.]
The first link in that paragraph in its original location goes here, to the referenced Math supplement note. Clipping:
Often the first step in creating the life-like computer generated characters we are now so used to in the movies — such as King Kong, Iron Man, WALL-E and Gollum — is for an artist to produce a highly detailed physical sculpture of the creature, just like the ones that now decorate Dench's office. Once the studio is happy that the creature looks just right, a 3D scanner is used to produce a highly detailed three-dimensional digital model of the object that can then be manipulated by animators on a computer. A 3D scanner shines a line of red laser light onto the object's surface, and a camera records the profile of the surface where the line of light falls. The position and direction of the laser and the camera lens are known, hence it is possible to calculate the position of each point on the surface highlighted by the laser (a unique triangle is formed by the point on the surface, the laser and camera, of which the length of one side and two angles — the orientation and distance between the laser and camera — are known). The three-dimensional coordinates of each point are stored digitally, building up an intricate mesh made from triangular faces that mimics the surface of the real object. The resulting digital model is amazingly realistic — you almost forget that you are looking at a two-dimensional screen, and particularly that you are looking at a surface entirely made of flat triangles. The life-like quality comes from the massive amount of detail: a 3D scan can produce a model with as many as six million triangles making up the surface. The resulting model can be viewed on the computer screen either as a wire frame, or more realistically with each flat face shaded as it would be in real three-dimensional life . . .
I therefore find your latest objection annoyingly repetitive and stubborn in the teeth of already provided and reasonable information. Failure to do due diligence before objecting on your part does not constitute failure to warrant claims on mine. Further to all this, the point of the 500 - 1,000 bit threshold for FSCI is precisely that the quantum state Planck time resources of our solar system and of our observed cosmos beyond that, would be an impossibly small fraction of what would be required to search out a reasonable fraction of configs to reasonably expect to arrive at the relevant island of function on random walks plus trial and error. That has been pointed out in detail over and over again, including the point that the relevant scale of interaction, chemical interaction, takes up ~ 10^30 P-times for the fastest (ionic) interactions. The objections are looking ever more selectively hyperskeptical. GEM of TKI kairosfocus
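The quoted six-million-triangle figure translates readily into bits. A minimal back-of-envelope sketch in Python (the 32-bit-per-coordinate storage figure is an assumption for illustration; real scan formats vary):

    triangles = 6_000_000       # from the quoted 3D-scan figure
    vertices = triangles // 2   # a closed triangle mesh has roughly half as many
                                # vertices as faces (via Euler's formula)
    bits_per_vertex = 3 * 32    # x, y, z coordinates at assumed 32-bit precision

    total_bits = vertices * bits_per_vertex
    print(f"~{total_bits / 1e6:.0f} Mbits of coordinate data")  # ~288 Mbits

Whatever the exact encoding, a realistic sculptural portrait sits many orders of magnitude beyond the 500 - 1,000 bit threshold, which is the point being pressed against the clouds-and-toast comparisons.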
F/N 3: The key significance of this is that -- cf here, including the video clip -- the DNA information is transferred to mRNA as a template, and in the ribosome the anticodons key-lock fit -- i.e. this is closely related to a sculpture -- to attach successive coded-for AA's, at their opposite ends, to the growing protein. At a typical length of 300 AA's, we are looking at 1800 bits of digital info storage capacity (300 AA's are coded by 900 DNA bases, at 2 bits per 4-state base = 1800 bits) expressed sculpturally (as von Neumann's kinematic replicator also did). This is of course well beyond the 1,000 bit threshold, and there are thousands of proteins involved in typical cell based life. kairosfocus
You are ducking the point that 500 bits is a practical upper limit for the nodes and arcs pattern (for chance to be a credible explanation), and we can use this objectively and quantitatively as I did. An acceptable sculptural portrait will normally require much more than 500 – 1,000 bits of specific info as assessed by the nodes and arcs method.
As my math teacher would say: Show me your working out! Remember, when we are talking about the subjective notion of a likeness and calculating probabilities of them occurring by natural forces, you don't want to limit yourself to one single example (Lincoln). How do the numbers work out for an object looking like any particular individual who exists, or who used to exist?
If you doubt me on this, show a case of such a tree, or swirls in wood, or a cloud shape, or burn marks on toast, etc that produces a sculptural, realistically detailed, accurate portrait of Lincoln.
In order to test your claim I need to survey the entire universe, including viewing all transient phenomena from all viewing angles? DrBot
F/N 2: Observe carefully as well: you are strawmannising in order to set up a selectively hyperskeptical objection, as I am speaking of a sculptural, realistic portrait, the particular context of Mt Rushmore. A lot of things may vaguely look like Lincoln, and be within the range of information that is reachable on chance, e.g. marks in bark on a tree. If you doubt me on this, show a case of such a tree, or swirls in wood, or a cloud shape, or burn marks on toast, etc. that produces a sculptural, realistically detailed, accurate portrait of Lincoln. kairosfocus
F/N: The Lincoln case is in a context, and there is a photograph that is the more or less standard of reference, both for the Mt Rushmore statue and the US penny. You are ducking the point that 500 bits is a practical upper limit for the nodes and arcs pattern (for chance to be a credible explanation), and we can use this objectively and quantitatively as I did. An acceptable sculptural portrait will normally require much more than 500 - 1,000 bits of specific info as assessed by the nodes and arcs method. kairosfocus
And (post ferry trip no 1 for the morning): Your second red herring notwithstanding, the meniscus example is a typical case of how subjectivity and objectivity interact in scientific work. By the time of my N2 Chem course, it was routine for us to be able to read the end-point reliably to within one drop in 25 ml, i.e. within a few parts per thousand. The rule was to do three runs and average. We often were able to get the same value for volume on each run. That includes the dummy variable, colour change -- a subjective judgement with an objective basis, again. And, the point is that judging that one is at the correct eye level to read the volume of the pipette and the burette -- for the latter, a start point and an end point that had to be subtracted -- was a skill, exercised through judgement, but one that yielded objectively reliable and accurate results. Subjectivity and objectivity are not opposites, and both are routinely involved in scientific measurements and related mathematical models and analyses. In contexts that are often quite momentous, including life and death. Also, you may want to see the related discussion here on the Glasgow Coma Scale. I repeat, subjectivity and objectivity are not opposites, and many subjective things can be reliable and quantitative, on an appropriate scale. GEM of TKI kairosfocus
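For concreteness, taking a drop as ~0.05 ml (the usual rule of thumb, and an assumption here), the one-drop precision works out as:

0.05 ml / 25 ml = 0.002, i.e. about 2 parts per thousand

So a subjectively judged endpoint, read by eye at a meniscus, nonetheless delivers an objectively checkable precision of a few parts per thousand across repeated runs.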
KF:
Did you pay attention to the nodes, arcs and interfaces approach that you noted on previously?
Yes. If you use that as the basis for a measure then the measure will depend on Lincoln's age and expression. What degree of accuracy are you after -- each wrinkle? How would that work for a caricature? People would see the likeness, wouldn't they, but is there an objective measure? Try a different approach: if you spend your life rearing pigs you will typically be able to tell the different pigs apart, and even recognise a portrait as representing a distinct pig. If someone else looked at the portrait, and your pigs, they wouldn't be able to differentiate. The problem is that ultimately what it comes down to is that you are claiming that there can never, anywhere in the universe, be an object, of any scale, that some people would regard as looking like Lincoln's face. DrBot
Dr Bot: Did you pay attention to the nodes, arcs and interfaces approach that you noted on previously? GEM of TKI kairosfocus
I suggest you look back at fig 3 in the original post [judging a meniscus, a common enough scientific measurement task] and then come back to us on whether subjectivity and objectivity are opposites.
The beaker provides an objective tool for measuring a liquid, but the property of the liquid and the way it interacts with the beaker influences the degree of accuracy when taking a measurement - use a taller, narrower beaker for more accuracy. Following a correct procedure will increase the accuracy, and the correct procedure is based on an empirical understanding of the liquid's behaviour. The volume of the liquid does not change if the observer changes, but an unskilled observer will take inaccurate measurements. What is the measurement system used to measure a facial likeness? Can a face have an actual likeness in the way a liquid has an actual volume?
The scope of the acceptable island would be searched by simply injecting noise. This will certainly be less than 10^150 configs. [Notice, the threshold set for possible islands of function is a very objective upper limit: the number of Planck-time quantum states for the atoms of our observed cosmos.] At the same time, the net list will beyond reasonable doubt exceed 125 bytes, or, 1,000 bits. That's an isolation of better than 1 in 10^150 of the possible configs. And it is independent of the subjectivity of any given observer.
You make the claim without doing the math! People often see a likeness in natural phenomena; for example, the face on Mars was hailed as a sign of design, and likenesses of religious figures are frequently observed, and claimed as design. How do you measure any of these objectively - what metric do you propose that is independent of human perception?
In short, your snipping exercise made up and knocked over a strawman.
It is a point that goes to the heart of the issue - How do you objectively measure function?
PS: The allusion you just made is in very poor taste, and twists my remarks out of context very nastily. Please, do not do the like again.
Your remarks were snide, bordering on uncivil. I responded with a joke. Now please provide a mathematically rigorous way to measure function in a portrait. DrBot
F/N: Onlookers, I am astonished to see Dr Bot's follow-on to MG's talking point:
If you can’t measure function in a mathematically rigorous, objective way then any CSI calculation is subjective (and lacks mathematical rigour)
Pardon, but have you ever had to get the right car-part, or your car will not start? We here subjectively observe an objective situation, one that can be recognised in a mathematical model by a threshold variable, FS = 1/0. Similarly, while there are many contextually responsive possible answers to Dr Bot's argument -- notice, I here have composed a second answer that responds to his claim, which is different, but is likewise contextually responsive English text coded using ASCII -- there is a sharp, observable, objective and quantifiable difference between the FSCI of text in English and at-random typing or endless repetition of a single letter. That is, there is no question of functional specificity being in all cases "merely" subjective and so not objective or measurable. Likewise, we can see that Dr Bot's dodge to a red herring on the way a meniscus is read -- note the original actually comes from a pharmacology context, i.e. life and health are at stake in the routine use of the technique -- shows how objectivity and subjectivity are in fact inter-related, and how the subjective involvement can be quantified. (That is, we can assess when a volume is read correctly or incorrectly by inspecting a meniscus, just as we do much the same for how a tape measure is used, by tailor or by carpenter. If the measurements are wrong, the clothes or the furniture will not work right.) So, function can be measured, it can be measured objectively -- think of metrics on the performance of software for a further instance -- and it can be measured quantitatively, with sufficient consistency to be relied on in serious contexts. It seems clear too that Dr Bot needs to read 34 - 35 above, especially the part that discusses mathematical models and metrics. Let me again clip Wiki in that context (noting that the RION scales issue also needs to be followed up) as cited in point 10, this being a confession against interest:
A mathematical model is a description of a system using mathematical language. The process of developing a mathematical model is termed mathematical modelling . . . A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. The values of the variables can be practically anything; real or integer numbers, boolean values or strings, for example. The variables represent some properties of the system, for example, measured system outputs often in the form of signals, timing data, counters, and event occurrence (yes/no). The actual model is the set of functions that describe the relations between the different variables.
See why -- as someone who has worked with mathematical models for decades -- I am astonished to see the rhetorical pretence that valid models based on reasonable practice are now suddenly suspected of not being rigorous enough? Wiki's remarks on rigour are also worth clipping from the excerpt at point 16:
An attempted short definition of intellectual rigour might be that no suspicion of double standard be allowed: uniform principles should be applied. This is a test of consistency . . . . Mathematical rigour is often cited as a kind of gold standard for mathematical proof. It has a history traced back to Greek mathematics, in the work of Euclid. This refers to the axiomatic method . . . . Most mathematical arguments are presented as prototypes of formally rigorous proofs. The reason often cited for this is that completely rigorous proofs, which tend to be longer and more unwieldy, may obscure what is being demonstrated. Steps which are obvious to a human mind may have fairly long formal derivations from the axioms. Under this argument, there is a trade-off between rigour and comprehension. Some argue [obviously on the other side] that the use of formal languages to institute complete mathematical rigour might make theories which are commonly disputed or misinterpreted completely unambiguous by revealing flaws in reasoning.
In fact, the weight of practice is on the side that one formalises to the extent required to be clear, factually adequate and intelligible in the steps taken. The reduced Chi metric starts from the most commonly used mathematical metric for information, then addresses the specificity issue by confining it to zones of interest, T -- the specification. The log reduction of the equation Dembski proposed in 2005 then shows that the issue is degree of isolation in a config space. And, for our solar system -- the corner of the cosmos we live in -- 500 bits is more than enough. If you think that is not stringent enough, 1,000 bits swamps the search resources of the observed cosmos, as we saw in the case of the Lincoln statue just above. Which should have sufficed to show on a specific indicative example how such cases can be set within a most definite objective threshold. If something is specific, on observed effects of injected randomness beyond a certain point, or on using a code or implementing an algorithm etc, then we have good reason to infer it is in an island of function. If something is so complex that the search resources of our solar system or the observed cosmos would be insufficient to have a random walk and trial and error algorithm credibly work -- the needle in the haystack problem -- then it is reasonable to infer that the FSCI in it has the directly known, routinely observed cause of such FSCI, intelligence, as its best explanation. That is, FSCI is a well tested and credible sign of intelligence. The best answer to such is to find a counter-example. ev crashes in flames, along with a host of other suggested counter-examples ranging all the way out to the infamous Mars canals. Honest and serious tests on random text generation run up to a space of about 10^50 configs being searchable, similar to the limit suggested by Borel decades ago for the lab scale. So, we have excellent reason to see that FSCI, in contexts where we are dealing with 500 to 1,000 bits at the lower end, is enough to make the inference to design on FSCI a best current explanation. Which is the degree of warrant -- notice, warrant is the relevant term, not "rigour" -- suitable for a scientific claim. At this point the burden of proof is actually in the hands of the objectors, and plainly they cannot meet it. So they are resorting to demanding that an empirical inference meet criteria that not even most mathematical arguments can. Which they know or should know. Selective hyperskepticism leading to reductio ad absurdum, again and again. GEM of TKI kairosfocus
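F/N (illustrative sketch): for concreteness, a minimal Python sketch pulling together two points just made -- the quantifiable difference between English text, endless repetition and at-random typing, and the threshold logic of the reduced Chi metric. The three sample strings are made up for illustration only, and the entropy figure measures per-symbol variability; the FS = 1/0 function judgement remains a separate, observable test.

from collections import Counter
from math import log2

def entropy_per_symbol(s):
    # Average information per character: H = - SUM pi * log2(pi)
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def chi(ip_bits, threshold=500):
    # Log-reduced Chi: functionally specific bits beyond the threshold;
    # threshold = 500 (solar system scale) or 1000 (observed cosmos scale)
    return ip_bits - threshold

samples = {
    "repetition": "a" * 60,  # crystal-like order: no contingency
    "english": "this is contextually responsive english text coded in ascii",
    "random": "jfgwegjgegh qzxvkp wmbtr ydhfl oqnzc aeiou vbnml kjhgf dsapo",
}
for label, text in samples.items():
    print(label, round(entropy_per_symbol(text), 2), "bits/char")

print(chi(1000, 500))   # a 125-byte (1,000-bit) functional string: 500 bits beyond
print(chi(1000, 1000))  # and just at the cosmos-scale threshold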
Dr Bot: You are snipping and making up a strawman:
If you can’t measure function in a mathematically rigorous, objective way then any CSI calculation is subjective (and lacks mathematical rigour).
I suggest you look back at fig 3 in the original post [judging a meniscus, a common enough scientific measurement task] and then come back to us on whether subjectivity and objectivity are opposites. Then, you can look at the issue of forming a threshold of judgement when a statue's features lose recognisability. But more to the point, you know or should know from what was already said in 126 that:
The event, E, is a particular statue of Lincoln, say. The zone of interest or island of function, T, is the set of sufficiently acceptable realistic portraits. The related config space would be any configuration of a rock face. The nodes and arcs structure would reduce to a structured set of strings, a net list. This is very familiar from 3-d modelling (and BTW, Blender is an excellent free tool for this; you might want to start with Suzie). Tedious, but doable — in fact many 3-d models are hand carved then scanned as a 3-d mesh, then reduced — there is a data overload problem — and “skinned.” (The already-linked page has an onward link on this.) The scope of the acceptable island would be searched by simply injecting noise. This will certainly be less than 10^150 configs. [Notice, the threshold set for possible islands of function is a very objective upper limit: the number of Planck-time quantum states for the atoms of our observed cosmos.] At the same time, the net list will beyond reasonable doubt exceed 125 bytes, or, 1,000 bits. That’s an isolation of better than 1 in 10^150 of the possible configs. And it is independent of the subjectivity of any given observer. ["The engines are on fire, sir! WE'RE GOING DOWN . . ."]
In short, your snipping exercise made up - and knocked over - a strawman. GEM of TKI PS: The allusion you just made is in very poor taste, and twists my remarks out of context very nastily. Please, do not do the like again. kairosfocus
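F/N (illustrative sketch): the noise-injection test just described can be cast as a simple Monte Carlo procedure. A minimal Python sketch, where is_recognisable is a hypothetical, observer-supplied pass/fail test (the FS = 1/0 judgement) on the perturbed net list:

import random

def island_fraction(netlist_bits, is_recognisable, trials=10000, flips=8):
    # Perturb a working configuration (a list of 0/1 values) and count how
    # often function survives; the hit fraction estimates the relative scope
    # of the island of function within the wider config space.
    hits = 0
    for _ in range(trials):
        candidate = list(netlist_bits)
        for i in random.sample(range(len(candidate)), flips):
            candidate[i] ^= 1  # flip one bit of the net list
        hits += bool(is_recognisable(candidate))
    return hits / trials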
KF: I've been away, hence my lack of response:
The event, E, is a particular statue of Lincoln, say. The zone of interest or island of function, T, is the set of sufficiently acceptable realistic portraits.
Sufficiently acceptable? What metric are you using and how is it defined mathematically?
And it is independent of the subjectivity of any given observer. ... the function is sculptural resemblance.
? If the function is sculptural resemblance then it requires a person to judge the resemblance. This is subjective. The amount of CSI cannot be calculated precisely because it will depend on the observer; there is no metric to measure a facial likeness in absolute terms as is possible with volume or force.
recognisability as a portrait of a specific individual is subjective but that is not as opposed to being objective.
? So it is subjective, but that doesn't mean it isn't objective ? Quasi objective perhaps?
Now, we address the red herring led away to the strawman: why is “function” a question of MATHEMATICAL “rigour”?
If you can't measure function in a mathematically rigorous, objective way then any CSI calculation is subjective (and lacks mathematical rigour)
To see what I mean, is VOLUME of a liquid an objective thing?
The volume of a liquid can be measured to a degree of precision that depends on the measurement apparatus. Volume does not vary if the person doing the measurement is blind, or from China.
The objection is misdirected, and based on a conceptual error, probably one driven by insufficient experience with real world lab or field measurements.
I have plenty of practical experience both in real world measurement and the design of precise measurement equipment, more than you I suspect. You should consider the fact that an objective measure of function may not be possible for 'a facial likeness' but that does not mean one cannot be found for something else. It may simply be that this particular example is not a good example of CSI because of the inherent subjectivity in the way function has to be measured for a facial likeness.
PS: “We’re going downnnnn . . . !”
Not on me you're not ;) DrBot
F/N 2: Reminder, the rigour question is addressed most directly at 34 - 5 above. If MG is serious about her claim, she will respond to that, which has been drawn to her attention repeatedly, and has been ignored to date; at least once by a clever rhetorical tactic of talking about reading on from her comment at (was it?) 60 above, when she knew or should have known from links that the main response was in 34 - 5, and a rebuttal to a clip of her main argument was in 23 - 4. Let us see if MG will at length actually address a matter on the merits. kairosfocus
F/N: Let's clip out a bit of Mung's dissection from 126 and 182 in the CSI Newsflash thread: ____________ Mung, 126: >> So let's take a closer look at Schneider's Horse Race [the link is there in the original thread] page and do a little quote mining. A 25 bit site is more information than needed to find one site in all of E. coli (4.7 million base pairs). So it's better to have fewer bits per site and more sites. How about 60 sites of 10 bits each? Tweak. We are sweating towards the first finishing line at 9000 generations … will it make it under 10,000? 1 mistake to go … nope. It took to about 12679 generations. Revise the parameters: Tweak. It's having a hard time. Mistakes get down to about 61 and then go up again. Mutation rate is too high. Set it to 3 per generation. Tweak. Still having a hard time. Mistakes get down to about 50 and then go up again. Mutation rate is too high. Set it to 1 per generation. Tweak. 3 sites to go, 26,300 generations, Rsequence is now at 4.2 bits!! So we have 4.2 bits × 128 sites = 537 bits. We've beaten the so-called "Universal Probability Bound" in an afternoon using natural selection! And just a tad bit of intelligent intervention. Dembski's so-called "Universal Probability Bound" was beaten in an afternoon using natural selection! And a completely blind, purposeless, unguided, non-teleological computer program! Does Schneider even understand the UPB? Does he think it means that an event that improbable can just simply never happen? Evj 1.25 limits me to genomes of 4096. But that makes a lot of empty space where mutations won't help. So let's make the site width as big as possible to capture the mutations. … no that takes too long to run. Make the site width back to 6 and max out the number of sites at 200. Tweak. The probability of obtaining an 871 bit pattern from random mutation (without selection of course) is 10^-262, which beats Dembski's protein calculation of 10^-234 by 28 orders of magnitude. This was done in perhaps an hour of computation with around 100,000 generations. HUH? With or without selection? It took a little while to pick parameters that give enough information to beat the bound, and some time was wasted with mutation rates so high that the system could not evolve. But after that it was a piece of cake. You don't say. MathGrrl @105 There is no target and nothing limits changes in the simulation. There are both targets and limits.>> 182: >> Again, in Schneider's own words: Repressors, polymerases, ribosomes and other macromolecules bind to specific nucleic acid sequences. They can find a binding site only if the sequence has a recognizable pattern. We define a measure of the information (Rsequence) in the sequence patterns at binding sites. The Information Content of Binding Sites on Nucleotide Sequences Recognizer: a macromolecule which locates specific sites on nucleic acids. [includes repressors, activators, polymerases and ribosomes] We present here a method for evaluating the information content of sites recognized by one kind of macromolecule. No targets? These measurements show that there is a subtle connection between the pattern at binding sites and the size of the genome and number of sites. …the number of sites is approximately fixed by the physiological functions that have to be controlled by the recognizer. Then we need to specify a set of locations that a recognizer protein has to bind to. That fixes the number of sites, again as in nature. 
We need to code the recognizer into the genome so that it can co-evolve with the binding sites. Then we need to apply random mutations and selection for finding the sites and against finding non-sites. INTRODUCTION So earlier in this thread I accused MathGrrl of not having actually read the papers she cites. I think the case has sufficiently been made that that is in fact a real possibility. I suppose it's also possible that she reads but doesn't understand. MathGrrl, having dispensed with the question of targets in ev, can we now move on to the question of CSI in ev? >> _________________ The emphases, blocks and links are of course there in the original. The thread has much more. kairosfocus
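F/N (illustrative sketch): Schneider's Rsequence measure, quoted just above, sums the information in the sequence patterns at binding sites column by column across an alignment: for four-letter DNA, Rl = 2 - Hl bits per position. A simplified Python sketch (omitting Schneider's small-sample correction), run on a made-up toy alignment:

from collections import Counter
from math import log2

def r_sequence(aligned_sites):
    # Sum of (2 - Hl) bits across alignment columns, Hl the per-column entropy
    n = len(aligned_sites)
    total = 0.0
    for col in range(len(aligned_sites[0])):
        counts = Counter(site[col] for site in aligned_sites)
        h = -sum((c / n) * log2(c / n) for c in counts.values())
        total += 2.0 - h
    return total

sites = ["TATAAT", "TATGAT", "TACAAT", "TATAAT"]  # toy alignment, illustration only
print(round(r_sequence(sites), 2), "bits")  # prints 10.38 for this toy case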
CY: Thank you. MG needs to address some serious matters on the merits, instead of simply repeating long since cogently responded to talking points over and over again. When it comes to ev, 137 above shows my links to the places in the CSI Newsflash thread where it is dissected by Mung. (One of MG's tactics seems to be to wait until something is buried under enough posts in a thread, or has been continued in a successor thread, before repeating the assertion that was rebutted.) On CSI and its "rigour," that has been addressed over and over again, most specifically on the issue of rigour, at 34 - 5 above. Similarly, the talking points MG tends to use over and over as though they have not been cogently answered were last dissected in 23 - 24 above. And, the overall summing up of the issues MG has needed to explain herself on has been kept up in the editorial response to Graham at no 1 in the CSI Newsflash thread; which MG has persistently ignored. Mung's remarks and clips on MG's tactics, in 117 - 120 in the CSI Newsflash thread, in this light, are a telling corroboration. MG knows or should know better than she has acted. Sadly revealing. GEM of TKI kairosfocus
EZ: When I was able to get through to actual pages in Mr Camping's web site, it turns out that he is actually broad-brush writing off the church. So, it is on the face of it unfair to use his folly in a credibility kill attempt against the church, which is exactly what he was set up for -- observing, too, how the same media tip-toes ever so carefully around issues tied to Islam: but then mebbe the attitude is that enraged Muslims KILL, Christians at most will protest . . . But then, ever so much of the media have lost -- did they ever have? -- any sense of duty of care to truth, balance and fairness. You will see that I think the failures of Mr Camping and co were fundamentally those of organisational governance. For, if there were proper accountability to stakeholders and to genuine expertise in a panel, the sort of blunders indulged in would not have happened. He needs to publicly apologise, including to the church and its leadership that he has broad-brush dismissed. Then he needs to get his message straightened out, equally publicly. I think you are right that we have seen folly like this before, but it is not a peculiarity of the religious; it is a mark of unaccountable autocracy with a mike, or of an unaccountable elite with a mike. Lord Acton was right: power tends to corrupt, absolute -- unaccountable -- power corrupts absolutely, great men are bad men. Including, for those who have imposed evolutionary materialist censorship on origins science, including trying to radically redefine science in ideological ways that fetter it from being able to freely seek the truth about our world in light of empirical evidence. And, as for even the BBC, I am afraid they, too, have slipped far from their former greatness. I have seen or heard far too many one-sided accounts, party-line ideological promos and willful omissions from the BBC to trust it anymore. (The BBC's performance in response to the climategate revelations alone suffices to underscore the point. Failure to give us an accurate picture of the history of Islamic expansionism, eschatological Mahdism and its underlying ideology over 1400 years, during the ten years since 9/11, nails it hard home. To see what I mean, try out: what are the black flag armies and Khorasan about? What event's 318th anniversary was September 11, 2001 the eve of? And on the longer running ME dispute, what is the significance of January 1919, London, and the names Chaim and Faisal? In the context of both of these, what is a Gharqad tree and what is its Mahdist eschatological significance? What are hadiths? What is the historical allusion of the Islamist chant "Khaybar, Khaybar . . . " and how does this relate to, say, events of last May on a certain boat off the coast of Israel? [Without sound answers to such, we do not understand things that dominate our headlines, and BBC's leading voices know or should know better. As for BBC's vaunted appeals process, I have personal experience of its fox-judging-the-fox failures. And, we could go on and on.]) Ah, well . . . GEM of TKI kairosfocus
KF: I'm afraid Mr Camping was just the latest goofy religious leader that was paraded in front of the world for entertainment purposes. To be fair, he did start the publicity himself. He wanted the word to be spread, all over the world. Hundreds of people in Vietnam were waiting for the Rapture. As you pointed out, he was wrong before and you'd think he'd be a bit more humble about his personal interpretations of Holy Scripture when he was clearly in a very, very small minority. I think he really did believe he was correct and I suspect, if he's honest, that he is examining his precepts. I hope so anyway, for his own sake. The media . . . . sigh . . . it's not about informing and educating the public anymore. Or investigating important issues. It's about entertainment, more and more like facebook and youtube every day. I live in England and am so lucky having the BBC to hand. Sadly, I expect this kind of thing will happen again and the fear of being labelled a weird-o will stop some sincere and honest folks from speaking their mind. ellazimm
Mung, KF, MG, MF, others, I have been quite the onlooker in these threads over the last couple of months, and I have to say, you're (KF and Mung) both doing a fine job. I've had several "aha" moments from these exchanges. I really want to address one thing to MathGrrl: What are your criteria for determining that the ev program does not involve a targeted search? I think this is really key to one of the main disagreements here. So far I've only seen you assert that it does not, but I haven't seen you engage any of the arguments presented by either Mung, with his careful analysis of the ev program in several posts on another thread, or KF, with his very reasoned argument for the quantification of CSI. I think your demands are unreasonable given the careful arguments here, which you have not apparently engaged - no - merely asserting the same rhetorical talking point denials will not get you very far. Well it might on Mark's "echo chamber," but do you honestly care about that? On a semi-related matter: I've also read many of the comments on Mark's blog regarding UD's moderation policy. I find it quite amusing that many of the complainers there were able to post at UD for several years before being moderated, which says a lot about the tolerance level of the moderators here. Several of the names mentioned who are now in moderation were posting here for quite some time - years, in fact. I think the reality is that they continued to ramble on the same talking points, as you seem to be doing here, without much interaction with the points already made ad nauseam, and which are addressed in the "Frequently Raised Arguments" brief at the right side and top of every page. I think the reason you've been allowed to go beyond the fray is not because you've asked a question that hasn't already been asked here, but because you've for the most part engaged yourself civilly. I sense that civility is waning with your recent repetitions. Repetition can be uncivil when it doesn't respect the fact that a question was answered with careful patience and knowledge-based insight. They've even gone so far as to give you your own guest post. This hardly squares with the gross misjudgment of UD going on at Mark's blog. Mark: I have to say though, Mark, that you've been fair to us to a point, and it appears to have gotten you into a little hot water even at your own blog - I'm referring to recent comments from someone who's decided to leave your blog because of your mild dismissal of some of the complaints towards UD's policy. I also noticed how summarily your readers dismiss ID writings, such as Stephen Meyer's SITC, which is perhaps why you're remaining quite detached from the book while accepting a free copy. I don't know, but that's how it appears. I get the impression that your blog is slightly more opinion-controlling than the moderation here at UD. You seem almost afraid to admit that you're reading ID material apart from a cursory glance in order to dismiss it. So with that in mind, I have to ask MG: is that what you're really afraid of? If you engage the arguments from KF, Mung and others here, you'll get into hot water on Mark's blog? You are, after all, somewhat of a hero there at the moment. In my estimation, being a hero at the expense of understanding a crucial and pointed argument is hardly worth any notoriety you might gain from it. 
Even if you end up disagreeing with KF after carefully considering all of his points, I think going further with the fine points of his argument will increase your integrity many-fold. Consider it. I think most of us can agree that your rhetorical talking points are getting a little tiresome. I think a good place to start might be to ask Mark's readers if they have anything to contribute to the question of the ev program and whether or not it involves a targeted search. A little change of subject there is warranted now, given the persistence in merely complaining about UD. I would also suggest that you familiarize yourself with a number of threads we had here a few years back with regard to Dawkins' Weasel program. I think starting there will allow you to see from a more elementary level how these programs are set up by designers themselves in order to demonstrate random chance searches; which is a bit like giving typewriters to monkeys to demonstrate that they can type. Well they can if you make them, but what's the point? You can start here: https://uncommondescent.com/evolution/dawkins-weasel-proximity-search-with-or-without-locking/ Let's have a real discussion here. How about having Mark's readers actually read Meyer's book rather than panning it? - Let's have a discussion of Chapter 13, and then let's really get into the nitty gritty of NFL. I'd really like to see that discussion. Right now, sad to say, I'm getting bored. CannuckianYankee
F/N: Onlookers, in 34 - 5, I addressed the issue of rigour in the context of mathematical models and metrics of phenomena that are fundamentally empirical. In 23 - 4, I clipped one of MG's many repetitions of her claims and answered point by point. If you scroll through the next ten days of comments, you will see that at no point does MG actually respond cogently on the merits. Instead, she simply repeats her drumbeat strawmannised false assertions. So, it is entirely reasonable to call her to answer the issues raised, and to hold that unless she does so cogently, she knows or should know that she has no case but finds it rhetorically effective to repeat false assertions and caricatures endlessly. kairosfocus
PS: Onlookers, simply scroll up (or click up) to 23 - 24 above and 34 - 35 above to see why MG's rhetorical drumbeat repetition that there is no "mathematically rigorous" definition of CSI is an empty talking point -- but then if you want to make a noisy drum it has to be hollow inside. The two links basically summarise the corrections MG has received for over two months now and has just now again refused to engage on the merits, preferring to yet again repeat an ill-founded and patently red herring led on to strawman claim over and over again as though that would make it true. By now, sadly, she knows or should know that her claim is ill-founded, and so the repetition is irresponsible to the point of being willfully deceptive. kairosfocus
MG: You are still refusing to engage facts, e.g. you were given highly specific links in this thread above, that you plainly have not engaged. That is not serious behaviour. Similarly, Mung is not speculating, he gave citations from the text by Schneider (which we can all follow up), and I was able to confirm some of the key points through my own clips from Schneider. His step response graph, to one who has had to analyse closed loop controller behaviour, was utterly and inadvertently diagnostic. Mr Schneider's attempt to correct Dembski's use of probably the most common plain vanilla quantification of information was the most striking point for me. Regardless of his paper qualifications, if he does not know enough to know that Dembski was using common and accepted usage, he is so ill informed on the subject as to not be credible. Period. Your onward behaviour of again repeating long since cogently answered talking points and making pretence that they have not been adequately answered removes you from the list of serious participants in a discussion. And, you have yet to explain yourself on some very serious matters and insinuations you have made, as also pointed out. Good day, madam. GEM of TKI kairosfocus
kairosfocus et al., I believe we have reached the point of significantly diminishing returns with respect to the discussion of CSI in this thread. I will continue to monitor it, but unless you or someone else provides a rigorous mathematical definition of CSI, as described by Dembski, and detailed example calculations for one or more of the four scenarios described in my guest thread, I'm not going to continue to point out that the Emperor is nude. Any of the objective "onlookers" addressed so often by you have sufficient information available to draw their own conclusions. While I have little hope that it will happen in this particular thread, I do suspect that this topic will arise in the future here at UD and I look forward to engaging in the discussion with you then. MathGrrl
kairosfocus,
In the case of the digitally coded FSCI [dFSCI] in the living cell, complex codes, algorithms, and code strings have only one known capable causal force, intelligence.
Unless and until you provide a rigorous mathematical definition of CSI, as described by Dembski, and demonstrate how to objectively calculate it for some real world scenarios, you cannot make this claim. Without a definition, your terms are literally meaningless. Without an objective calculation, there is no way to test your assertion that intelligent agency is involved. Continuing to repeat unfounded claims after repeatedly demonstrating that you are unable to support them is unconvincing, at best. MathGrrl
Joseph,
Your unsupported assertion notwithstanding, it is not possible for ev to be demonstrated to be a targeted search because review of the algorithm and inspection of the code proves that there is no target.
My claim has been supported - by Mung - who proved ev is a targeted search - as did Marks and Dembski.
None of your sources provided any such proof. I have already provided two links to comments where I address your points: https://uncommondescent.com/intelligent-design/news-flash-dembskis-csi-caught-in-the-act/#comment-378783 https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 I invite you, along with kairosfocus, to read the source material of the ev paper and Schneider's PhD thesis for yourself. Please show me any support for your claims in either of those documents or in the ev program source code.
IOW you are either lying or just plain ignorant.
*sniff* I love the smell of civility in the morning! MathGrrl
kairosfocus,
Prominent on this — right there in the opening paragraph of the comment — is Mung's summary dissection of ev at comment 180, which DOES reveal beyond any reasonable doubt — from the horse's mouth (cf his snippets at 182 and some of his initial examination of the Schneider horse race page from 126 on . . . ) — that it is in fact targetted search, though the target — the string(s) to be matched — are allowed to move around a bit.
Mung's summary of ev is inaccurate and his claim that it is a targeted search is incorrect, for the reasons I provide in these two comments: https://uncommondescent.com/intelligent-design/news-flash-dembskis-csi-caught-in-the-act/#comment-378783 https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858
In addition, ev uses in effect a Hamming distance to target metric in selecting the next generation.
That is absolutely incorrect. I strongly suggest you read the source material, namely the ev paper and Schneider's PhD thesis for yourself. If you still believe that ev models a targeted search, please explain why you think so, with reference to the ev paper and the program source code, and we can no doubt have an interesting discussion. MathGrrl
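F/N (illustrative sketch, for onlookers): here is what a mistake-count ("Hamming distance to target") selection step looks like in a few lines of Python. Whether ev's selection in fact reduces to this is precisely the point in dispute in the exchange above; the sketch only makes the disputed mechanism concrete, and the function and parameter names are illustrative, not drawn from the ev source.

def mistakes(genome, required):
    # Hamming-style error count against a required pattern
    return sum(g != r for g, r in zip(genome, required))

def next_generation(population, required):
    # Rank by fewest mistakes; copies of the better half replace the worse half
    ranked = sorted(population, key=lambda g: mistakes(g, required))
    half = ranked[: len(ranked) // 2]
    return half + half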
Chris Doyle,
I would still appreciate a response to the bacteria comment I made six months ago; it was not a soliloquy, it was a direct challenge to your claims about evolution in bacteria.
The only claim I made was that one of the participants in the conversation would benefit from reading some textbooks and peer reviewed papers on bacterial evolution. I recommend the same to you.
I think it made uncomfortable reading for you and you don’t know how to respond to it. Am I right?
I'm sure you would like to think that, but I made my position very clear both in that thread and in my response to you above. My interest here is in understanding the positive evidence for ID, CSI in particular, to a level of detail that will allow me to test the claims of ID proponents myself. I have neither the time nor inclination to bring you up to speed on basic biology.
Ultimately, the record is here for all to see whether or not your questions have been answered by kariosfocus. I for one think they have been.
Your personal beliefs are irrelevant. The record shows that no ID proponent has provided a rigorous mathematical definition of CSI as described by Dembski, nor has any ID proponent used such a definition to calculate CSI for the four scenarios I described. If someone had, you'd be referencing it in your response rather than simply sharing your thoughts.
I won’t be returning to a blog where people like “The Whole Truth” can make comments like that with the active support of people like “Toronto” and the passive support of all the other banned evolutionists.
How convenient that one easily ignored participant on Mark Frank's blog can prevent you from returning to support the insulting and baseless claims that you made there. Fortunately, Mark doesn't remove comments so anyone interested in your personal standard of online courtesy will find it easily. MathGrrl
kf - You managed to use over 1400 words to fail to answer a question I was asking Joseph. The nearest you get is "we may describe and define a specification, T, that gives us the requirements to fit in the island of meaningful function within the wider space of possible but overwhelmingly non-functional configs." But how do we "describe and define a specification, T"? What properties must it have? This is what I'm not seeing. Heinrich
H: I see your:
Perhaps we should concentrate on the “meaning/function” part – how is that formally specified? I’m not sure how the everyday use of “information” can be formalised to be of use here – can you explain?
1 --> As has been pointed out over and over in response to the underlying talking point, meaning, function and information are first and foremost terms and concepts describing facts of experience. So, definitional statements and mathematical models, variables and associated metrics have to adequately, coherently and simply (but not simplistically) answer to that experience. 2 --> You are currently having the experience of reading this post, which is an instance of functional, coded information in English that responds to a particular context. It is functionally specific and complex information, by contrast with the gibberish in a bit of random typing like this: jfgwegjgegh. (And, already, this is an ostensive definition that points out an example and a counter-example to specify meaning by facts understood by us, judging, experiencing, knowing semiotic agents. Indeed, without that subjectivity of the conscious mind, there would be no knowledge of facts.) 3 --> To try to pretend that in the absence of "formal" -- i.e. precising per necessary and sufficient statement and/or genus and difference and/or especially quantitative -- definition, such are meaningless or dismissible, is an example of self-refuting selective hyperskepticism. You are forced to rely on the meaningfulness of FSCI to try to object to it. Reductio ad absurdum. 4 --> But also, pardon a direct comment: if you had troubled yourself enough to scroll up and look at the UD short glossary, accessible on this and every UD page, top right, under "Information," you would find this telling admission against interest scooped from Wikipedia:
Information — Wikipedia, with some reorganization, is apt: “ . . that which would be communicated by a message if it were sent from a sender to a receiver capable of understanding the message . . . . In terms of data, it can be defined as a collection of facts [i.e. as represented or sensed in some format] from which conclusions may be drawn [and on which decisions and actions may be taken].”
5 --> This describes the concept of information. In terms of how we usually measure it, we use the fact that it can usually be reduced to symbol elements (even spoken words are built up from phonemes), which have frequency distributions that can be observed, so information -- on a suggestion by Hartley over 80 years ago -- is quantified on symbol frequencies interpreted as probabilities (Dembski in NFL is right, and Schneider's attempt to "correct" him by substituting a rarer synonym, "surprisal," is wrong -- cf my now frequently repeated cite from Taub and Schilling and the discussion in my always linked here); for message element mk: Ik = - log pk, in bits. 6 --> Onward, Shannon developed a measure of average information quantity per symbol (aka entropy, aka uncertainty) across a set of symbols, i, H: H = - [SUM on i] pi log pi (in bits if the log base is 2. This is what is often called "Shannon information," especially in the context of the carrying capacity of a channel of a given bandwidth, in the face of a set signal-to-noise ratio with Gaussian white noise, such as on a modelled telephone line.) 7 --> But of course this quantification so far does not address the meaningfulness or function, which last are at the heart of why information is important. To do this next step, we first note that functional info strings are meaningful and are aperiodic but not at-random, and are not forced into repetitive patterns like how a crystal's unit cell is endlessly repeated in a crystalline body. 8 --> That is why, in 1973, Orgel wrote -- in the decade after DNA had been initially decoded:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]
9 --> What Dembski did was to identify how this could be used to develop a model and metric of complex specified information. (Cf the original post for the CSI Newsflash discussion to which this thread is a footnote.) 10 --> In particular, we note that we may observe informational events E1, E2, etc that will carry out the same function and carry the same essential meaning. 11 --> However, not just any arbitrary configuration of symbols will do, only certain ones that follow certain rules and bear a certain content of meaning will do the required job, and so we may describe and define a specification, T, that gives us the requirements to fit in the island of meaningful function within the wider space of possible but overwhelmingly non-functional configs. 12 --> So, we have a non-arbitrary delimiter, T, that specifies an island of function in the wider config space. Not just any arbitrary sequence of ASCII characters will fit into the context of this thread of discussion and make sense in English. Overwhelmingly, most at-random clusters of such characters of the same length would be gibberish. Lucky noise is not a credible source of a message, and indeed the very concept of signal-to-noise ratio rests on the understanding that signals and noise have different and distinguishable characteristics. 13 --> And so we see the key insight: once the length of the string is big enough, where essentially any informational entity can be reduced to structured sets of strings (so this is without loss of generality), it is maximally unlikely to arrive at such a string by a random walk rewarded through a trial and error or hill-climbing algorithm that depends on function [which is an observable and can be quantified by a dummy variable: if it works, 1, if not, 0: pass/fail . . . once that new part is in, does your car engine start? If no, back to the drawing board . . . ], not mere proximity to a target in config space. 14 --> Thus we see the quantification by doing a log reduction of the Chi metric: Chi_500 = Ip * [FS] - 500, functionally specific bits beyond a complexity threshold, where FS is the dummy variable on observed function: 1/0. (A worked sketch follows this comment.) 15 --> Is this a "mathematically rigorous definition" so beloved of MG in her dismissive talking points? 16 --> The key problem there -- as was pointed out above at 23 - 4 and 34 - 5 -- is that not even mathematical proofs are usually rigorous in that sense. We are dealing with real world modelling and metrics, which respond to empirical realities and allow us to reason about and analyse them, here using concepts closely parallel to those that are at the foundation of the second law of thermodynamics. Special configs in a large enough config space are going to be unobservable on chance plus necessity. 17 --> Such FSCI is however quite common, and it is routinely observed -- b[TR]illions of test cases, growing by the millions per week -- to be the product of intelligence, which is composing meaningful strings based on knowledge and intent. Like this post. 18 --> As this thread's original post shows, the Dembski type metric in log-reduced form is demonstrably amenable to real world biological cases. We also see cases where it correctly shows how things within that threshold can be originated by chance, as the OP's clip on random text generation from a config space of about 10^50 elements shows. 19 --> So, the objections are specious. 
20 --> The real problem is not that the metric is not sufficiently meaningful, but that it carries an unwelcome message: the extremely complex and functionally specific information in DNA is far, far beyond the threshold where we may confidently infer intelligent design. 21 --> If you don't like the message, the proper way to address it is obvious: show, by observed cases, how chance and necessity without intelligent direction -- and Mung has shredded ev etc as claimed cases of this -- can create FSCI beyond the threshold, at least the solar system threshold of 500 bits and preferably the observed universe threshold of 1,000 bits. 22 --> Almost needless to say, the reason why such red herring and strawman tactics as the "rigorous" talking point are being resorted to is plain: there are no such cases. 23 --> In short, the empirical evidence is that the reduced Chi metric works as advertised when it is used in a design detecting explanatory filter. 24 --> So, the real and unmet challenge is there for evolutionary materialism advocates: show on empirical evidence (not misleading simulations) that a metabolising, von Neumann self-replicating physical entity like the living cell can and does arise spontaneously by undirected forces of chance and mechanical necessity in a plausible initial environment. ___________ In short, it is time to stop dragging red herrings out to convenient strawmen, and show that your claimed life origin and body plan diversity origin story is grounded on observed evidence. Design thinkers are able to say that we know already that FSCI is routinely created by intelligence (indeed, in our observation, it has only been so created), and that Venter et al have shown that the intelligent creation of a living cell is possible, though we have not gone all the way yet. So, on inference to best explanation . . . GEM of TKI kairosfocus
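F/N (worked sketch): the quantities in points 5, 6 and 14 above, in a few lines of Python; the FS dummy variable is the observed pass/fail function flag discussed in point 13, and the example probability is chosen for illustration only:

from math import log2

def info_bits(p):
    # Point 5: Hartley-style self-information of an event, I = -log2(p)
    return -log2(p)

def shannon_h(probs):
    # Point 6: average information per symbol, H = -SUM pi * log2(pi)
    return -sum(p * log2(p) for p in probs if p > 0)

def chi_500(ip_bits, fs):
    # Point 14: Chi_500 = Ip * FS - 500, FS = 1 if observed to function, else 0
    return ip_bits * fs - 500

print(info_bits(1 / 2**1000))  # 1000.0 bits of information
print(chi_500(1000, 1))        # 500 bits beyond the solar-system threshold
print(chi_500(1000, 0))        # non-functional case: -500, no design inference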
EZ: As expected, it is Sunday and we are all here -- save of course those who moved on as individuals overnight. I find it astonishing (or, maybe, telling) that the same major media entities that spend so much time and effort repeating over and over that extremists like OBL are fringe relative to Islam, are so often willing to let the impression be created that a Mr Camping or the like are typical of the Christian faith or of Christians who take the scriptures seriously. Indeed, the sensationalised coverage over the past few days -- I had never heard of this man before -- sounds to me like a credibility kill attempt: set up someone, make him seem to be a leading figure, knock him over, spread guilt by invidious association. In fact, Mr Camping is demonstrably in error -- easily known before the fact, as I pointed out Friday afternoon above -- and is a fringe figure. He seems to have a radio enterprise, and to have amassed a fortune to back it. He is also a long since retired civil engineer (he is 89 years of age, from what I see) essaying into theological waters and using principles of interpretation that are known to be unsound at even basic Bible study level. E.g. when I wrote a basic Bible study guide 25 years ago, I cautioned that one should not go looking for esoteric "hidden" meanings in the text where the text has a plain and natural sense that makes good sense. And yet, that is exactly what he did, by coming up with some idiosyncratic date for Noah's flood, then taking a text on how the eternal God is patient beyond human understanding (a day is like 1,000 years in his sight . . . ) and a reference to seven days to go to the flood, plugging in the idea that 1 day --> 1,000 years, and, voila, we arrive at May 21, 2011. Patent folly, based on clipping words out of their natural sense in context and imposing a read-in meaning. Worse, he has done this before, some 15 years ago. He excuses himself as having made a mathematical error then. I do not know what he will say this time around, but he needs to apologise to his followers, like that zealous and self-sacrificing young man I saw Friday in front of our hospital. He then needs to apologise to the church and the leaders who tried to correct him, whom he would not listen to. Sadly, I gather he has dismissed and derides the church at large and has tried to in effect gather circles of listeners into informal groups; a classic sectarian blunder -- and one that will give a bad name to groups that meet for Bible study, prayer and discussion in homes or schools or offices. Then, he needs to go with the church leaders he has been reconciled with and sit before the world, apologising and allowing the leaders to present a more balanced view of the Christian faith's core message and its view of the End of Days and Day of the Lord. After that, he needs to set up a proper board of governance for his ministry, with serious stakeholder representatives on it. And, he needs to attach to it a panel of expert advisors who have the right and responsibility to keep his ministry on track through sound counsel. For, this is in the end a major failure of governance of a corporate entity. Idiosyncratic autocracy is dangerous, too dangerous today to be tolerated. 
But it is not just Mr Camping who needs to reflect on what just happened and make amends. What was troubling is that the coverage did not stress that here is a fringe person who has gone off the deep end and has been repeatedly corrected, but instead it sensationalised the error, as though this is a set up of a strawman. The contrast with the very cautious treatment of Islam tells me that this is likely to be a cynical agenda at work on the part of key media figures, and once such have big enough mikes, the rest will endlessly repeat and amplify the standard story-line. That lemming-like media mentality is very dangerous, and the sort of cynicism that failed to be balanced in this case -- even while being if anything overly cautious in dealing with the likes of radical Islam -- is even more dangerous. There is a lot of painful and bloody history on what happens when movements of conscience are repeatedly strawmannised and slandered. Demonisation and dehumanisation are the first steps to unjust suppression. (And when I see cases like the Johns foster parenting case in the UK, where the UK Dept for Equality and Human Rights told a High Court that Bible-believing Christianity is an "infection," and were not roundly rebuked, that is a grim portent. Our civilisation has been down that road before -- too many times, and it is nowhere where any sane person wants to go.) Frankly, it smells a lot like hypocrisy and hostility. Anyway, let us return to focus for the thread. I'll address Heinrich in a moment, DV. GEM of TKI kairosfocus
KF: Thanks! I'm resigned to paying some bills and doing the grocery shopping as usual. Sigh. I'm still thinking about what you said . . . but, I'm still not up to a decent objection. Yet! :-) Now I suppose I'd best mow the lawn while it's not raining in God's Own Country . . . well, that's according to the locals. Yorkshire is quite nice I do admit. But wet. See you all later but don't hold your breath. Life calls! ellazimm
EZ: You enjoy your weekend. I actually ran into a young man today from the organisation promoting the date setting for the end of the world, next to the rum shop in front of the local hospital. Tried to ask him about date setting and the Bible's prohibition on that. He was not really listening, and ran into a wound-up spiel. Said he had been all over the Caribbean in the past several weeks, handing out booklets and books. He did look tired. However, we can be pretty sure date setters are in error, per, say, Mt 24:36. But, it seems there is a temptation that some cannot resist. What is far more serious is Paul's statement in Ac 17, and remember this is an eyewitness-lifetime report (cf here on the minimal facts analysis):
Ac 17:26 And He made from one [common origin, one source, one blood] all nations of men to settle on the face of the earth, having definitely determined [their] allotted periods of time and the fixed boundaries of their habitation (their settlements, lands, and abodes), 27 So that they should seek God, in the hope that they might feel after Him and find Him, although He is not far from each one of us. 28 For in Him we live and move and have our being; as even some of your [own] poets have said, For we are also His offspring. 29 Since then we are God's offspring, we ought not to suppose that Deity (the Godhead) is like gold or silver or stone, [of the nature of] a representation by human art and imagination, or anything constructed or invented. 30 Such [former] ages of ignorance God, it is true, ignored and allowed to pass unnoticed; but now He charges all people everywhere to repent (to change their minds for the better and heartily to amend their ways, with abhorrence of their past sins), 31 Because He has fixed a day when He will judge the world righteously (justly) by a Man Whom He has destined and appointed for that task, and He has made this credible and given conviction and assurance and evidence to everyone by raising Him from the dead. [AMP]
Okay, all best GEM of TKI kairosfocus
KF: I'll keep thinking but I've got nothing else of value to add to the thread at this time. Which is okay by me; I really am more interested in understanding your view and I've got a much better insight into that now. I don't think your posts are random noise . . . . well, maybe some of them. :-) So, thanks for indulging me. I've always liked the idea of the Socratic method and it's nice to be able to wallow in it. I hope you all have a good weekend, not interrupted by the Rapture as predicted by www.familyradio.com. I'd like to be able to continue conversing in the future. But if the Rapture really is coming I'm in for a pretty hideous time. ellazimm
Heinrich: How do you formally define "specified complexity" and "meaning/function"? I told you already - Dembski took care of the complexity part in NFL and he also covered "meaning/function". In biology, specification refers to biological function. IOW, "information" as it is used by IDists is the same as everyday use.
Where did you tell me? Perhaps we should concentrate on the "meaning/function" part - how is that formally specified? I'm not sure how the everyday use of "information" can be formalised to be of use here - can you explain? Heinrich
EZ: Thanks for your further remark. We could debate the ins and outs of many origins sciences fields till the proverbial cows come home, e.g. just where did the EC arc of explosive volcanoes come from, and what does that mean for old smoky maybe a dozen miles S of where I sit -- who was stinking up the place with H2S earlier this week, just to remind us he is still in business. (I used to have a joke about how he would occasionally break into Mrs Dyer-Howe's Volcano Rum stocks to tipple a sample, tank up and blow . . . ) But the bottomline will remain: in OS work, one provides on inference to best explanation, a provisional causal account of traces of the past in the present in light of observed dynamics causally adequate to account for them. In the case of the digitally coded FSCI [dFSCI] in the living cell, complex codes, algorithms, and code strings have only one known capable causal force, intelligence. BTW, the issue is not whether my posts are a simulation, as that too would be a design, but whether they are lucky noise, such as a burst of sky noise getting into a server on the net. In short, even your own response shows just how strongly we know that the most credible explanation for dFSCI in particular is design. GEM of TKI kairosfocus
KF: Okay, I see what you're saying about the reams of evidence. Have I ever met you? I can't say, as I don't even know what your real name is! We're both trained in mathematics so . . . . it is possible we have met. How do I KNOW you're a real person? Well, you are way too stubborn, idiosyncratic and original to be a simulation. I am known to be quite pedantic but you must drive some people mad! I happen to like people who don't take BS lightly. So, even though I am disagreeing with you I respect the way you approach important questions. You think deeply, latch on and don't let go. As far as the issue of first showing a current process that is capable of producing the effect 'observed' in the past before making an inference . . . . . which is, after all, the mantra of modern geology . . . the only example I can think of as a parallel to evolutionary reasoning is plate tectonics. I'm not saying it's a great analogy. So . . . my thinking is that we have observed in the last 100 years creepingly slow sea floor spreading and small shifts along known fault lines, but not the incredible shifts of continents the theory 'predicts'. In my mind, perhaps erroneously, generalising the small observed shifts AND all the other consistent evidence into large shifts over eons and eons of time is the same kind of reasoning by which, say, the observed morphological changes in dog breeds can be extended to explain the development of whales from land dwelling creatures. And considering all the other parallel and confirming evidence as always. (I've been reading Jonathan Wells's newest book and, I'm pretty sure, he doesn't address the geographical distribution evidence. Which is a shame. I was hoping to hear his thoughts on ring species.) I just thought of another possible comparison. We observe certain decay rates in radioactive substances and use that to reach into the past and draw conclusions about things that were not 'observed'. I will keep thinking. I have no doubt you will find a flaw in my 'logic'. But, as I've said many times, I'm here to find out how y'all think about these issues. ellazimm
EZ: Having had to correct unresponsive misbehaviour above, let me first express appreciation for the responsiveness in:
I see your point. I’m going to have a think about what to say in response. I have to admit I’m a bit skeptical of your statement: “In this case, we have abundant — billions of test cases, growing at literally millions per week thanks to the Internet — on how FSCI is a reliable SIGN of intelligent design. This is known, routine source/cause and reliable sign.”
Perhaps I can give you some context: have you ever met me? How do you know that what I have put up is a real person with a real mind writing, and not mere lucky noise on the Internet? One key reason is that you know that contextually responsive posts in the code patterns of a known language -- as opposed to random gibberish: fgwdjjgfuhvhb -- are a hallmark of design. (Think, then, of the files in cabinets and drawers etc in offices all over the world, then the groaning shelves of libraries, then the stored plans in design offices, then the Internet full of pages, emails and blogs etc. I can confidently say that any of these documents with over 125 bytes worth of FSCI is designed.) You may want to look here, in a background post for the ID foundations series, and then here at the first post in the series. GEM of TKI kairosfocus
Onlookers: Sadly predictable. Having been bested on the facts again, MF finds an excuse to ignore another participant in discussion. Thus, it is increasingly and sadly plain that he is not here for dialogue or even -- a distinct step down -- debate, but to score distractive or dismissive talking points and try to stir up confused exchanges with those who try to discuss issues with him. In that exercise, he plainly wants to cherry pick whom he can play the points-scoring game off, without having to actually seriously engage substantial matters, or accept well earned corrections. In short, this is the all too familiar red herring, strawman, ad hominem tactic, in another guise. Pardon some direct observations; only such will in the end clear the fog of misleading rhetoric away. I find that, frankly, dishonest, arrogant and utterly rude; though it is of course a bit more cleverly sophistical than the sort of fever swamp abuse that is all too common from objectors to the design inference. This pattern is also increasingly evident for MG. Now, as you can see above, when Joseph indeed went overboard, I called him up on his tone. (Notice carefully: MF fails to acknowledge that, as he has determined to ignore anything I say on the flimsiest of excuses. So, for excellent reason, I find any pretences on his part to be civil, concerned for respectful discussion or serious about the actual issues distinctly hollow. Of course, he and/or others of his ilk will use talking points like suggesting that I am being abusive to correctively point out the dishonest rhetorical tactics being used. That, too, is yet another subtle sophistical, or even propagandistic, tactic: the turnabout accusation, designed to confuse the onlooker by pretending that the victim is the chief perpetrator. Just remember, to set this in proper perspective: MF is currently hosting a blog where participants are indulging in privacy violation, which, while he is quick to correct CD about someone who has been openly nasty, he glides over in silence when he comes here to comment on a thread on a post by the victim of that outing behaviour. Think about the depth of willful disrespect, sheer chutzpah and plain no-broughtupcy rudeness involved in doing that.) Now, Joseph plainly heeded the correction I gave above. (And, J, the tactics I now have to be engaging are part of why we need to be very careful not to be unjustifiably harsh or abusive, including elsewhere such as in your own blog. Notice how terms you use to tag evo mat advocates in your own blog are being thrown in our faces here at UD.) But Joseph makes a handy target to personalise and dismiss the issue by attacking the man. Instead of accepting well warranted correction on a gross error when MF said:
How can science not know how they originated and yet know that all nucleotides are equally probable?
. . . instead we see an attack on the man. Just a bit more subtle than the usual fare of open invective. Sadly, predictably typical. On the whole, I think we can now safely take MF's "I will not respond to you, on excuse X, Y or Z" tactic as an admission of want of substance on the merits.

In this case, he and MG accused J of not having grounds for his claim that we can assign nucleotide bases 2 bits of storage capacity each. If you doubt me on that, simply scroll up and see the exchange over the past couple of days, once J intervened and gave his calculation on CSI being Shannon metric info in the context of functional specificity. I drew out the elaboration on the difference between storage capacity and code usage of that capacity, noting that we are going to be at about the same order of magnitude in a context where we have orders of magnitude to play with; and MF and MG tried dismissive tactics that showed ignorance of the basic fact that DNA is a 4-state storage unit on a per-base basis. Yup, that is how ill-informed the objections now are.

Both J and I responded, pointing out the chaining in the nucleotide string, and that in DNA the complementarity is between corresponding points on the TWO helices, with the further note that to store information we need flexibility: as I noted, if there is no contingency in the chain sequence, then we have a crystal, not an information-storing molecule. The sugar-phosphate backbone of the DNA strand assures that required flexibility, and the thousands of proteins coded for in DNA show just how flexible the sequencing of the chain is.

And, BTW, when we go across to the tRNA that actually clicks proteins together AA by AA, the AA is held on the end opposite the anticodon, i.e. the correspondence between DNA and the AA coded for is not physical-dynamic but informational-algorithmic. The tRNA is a moving-arm device and taxi, so a sequence of tRNAs assembles the AA chain step by step. The actual functioning of the resulting protein is several stages further on: folding, agglomeration, activation, transport to use site. And the mechanism of sequencing AAs is not physically constrained by the particular sequences that fold functionally. In short, fold domains are deeply isolated in AA sequence space. Islands of function, in the heart of the cell. Function coded for in DNA using a language used to indirectly control an assembly machine, the ribosome, with mRNA as the code tape device.

All of this is strong evidence of purposeful design, save to those who refuse to see it. GEM of TKI PS: Have a read here, in the IOSE course on these topics. Take time to watch the video. kairosfocus
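[As a rough numerical check on the "same order of magnitude" point about storage capacity versus code usage, here is a toy calculation for a hypothetical 300-AA protein -- not any particular gene -- using only the 2-bit and 4.32-bit figures discussed in the thread.]

import math

aa_count = 300                          # hypothetical protein length
bases = 3 * aa_count                    # one three-base codon per amino acid
dna_capacity = bases * math.log2(4)     # 2 bits per 4-state base -> 1800 bits
aa_capacity = aa_count * math.log2(20)  # ~4.32 bits per 20-state AA -> ~1297 bits

# The two figures differ by less than a factor of 1.4: the same order of
# magnitude, which is the point argued in the comment above.
print(round(dna_capacity), round(aa_capacity))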
KF: I see your point. I'm going to have a think about what to say in response. I have to admit I'm a bit skeptical of your statement: "In this case, we have abundant — billions of test cases, growing at literally millions per week thanks to the Internet — on how FSCI is a reliable SIGN of intelligent design. This is known, routine source/cause and reliable sign." BA77: Likewise I am puzzled by your statement: "thus ellazimm you have no justification for your statement because we can see the entire universe, itself, coming into existence, and we can experimentally confirm what kind of event it must have been, thus clearly the ‘Designer’ has certain attributes that lend themselves readily to experimentation." But I shall think on that also before responding. Joseph: ellazimm
Mark- That is OK as I find trying to discuss these matters with you about as fulfilling as talking to a wall. I don't know why you are here if you cannot produce any evidence to support your position. Attacking ID will not do that. You actually need to provide positive evidence for your position. Also you don't need to worry about CSI. All you need to do is focus on your position and demonstrate that blind, undirected processes can produce what IDists call CSI. IOW the key to refuting ID is in demonstrating your position. Good luck with that... Joseph
Joseph - just so you know where I stand. I don't want you to waste time making responses I am not reading. I am finding debating with you too aggressive and will no longer participate. This is my weakness as much as yours - but we do this for pleasure and I am not enjoying the experience. I am sure you will find other willing opponents. Mark markf
MarkF:
How can science not know how they originated and yet know that all nucleotides are equally probable?
The two have nothing to do with each other. What I said concerns the way the nucleotides are ordered on ONE side of the DNA. And with genes, not just any sequence makes a gene- that is the point.
We may not know in detail how the first genes began but we know quite a lot about the processes by which they develop and change – duplication, inversion, replication, transposition, point mutation etc.
So what? What methodology was used to determine all of those are blind watchmaker processes? Show us the math- provide a mathematically rigorous definition or get lost.
Does one of your comments deny that when a gene is duplicated then the duplicate is almost identical to the original?
Do you realize that with a gene duplication it isn't always the case that the entire gene gets duplicated? So in those cases the duplicate will not resemble the original. But there still isn't any evidence that gene duplications are blind watchmaker processes. Nice of you to continue to ignore that.
That’s not a claim; it is a request!
Dude you claimed there isn't any justification for my assumption. Now you have to step up and support that claim. Or admit that you are lying. Joseph
MathGrrl:
Your unsupported assertion notwithstanding, it is not possible for ev to be demonstrated to be a targeted search because review of the algorithm and inspection of the code proves that there is no target.
My claim has been supported- by Mung- who proved ev is a targeted search- as did Marks and Dembski. IOW you are either lying or just plain ignorant. Joseph
Hi Mark, Thanks very much for that. Seeing your comment to "The Whole Truth" is very reassuring. The main problem with any established forum that attracts regular participants on both sides of a very strong disagreement is the inevitability of a blood-feud breaking out. That's why strict moderation is important because it will filter out those remarks that are likely to escalate the problem. I totally appreciate your concerns about double standards: and I explained why they need to be tolerated over on your blog. For even daring to question evolution on other forums, I (along with others) have been subject to uncensored, horrendous abuse which is in a completely different league to anything you see on here. But we've all got to try and draw a line somewhere and make a fresh start or else constructive debate will cease. Chris Doyle
Onlookers: FOR THE RECORD: Kindly note: at the same time that MF is busily trying to burnish his civility credentials with CD, he is insistently ignoring the author of the post here, on the flimsiest excuses, and tolerating privacy violations at his blog. You will understand why I will have nothing further to do with MF's blog and those of like ilk, save to remark for the record, when strictly necessary. GEM of TKI kairosfocus
F/N: EZ, I hope you understand that I am requiring that -- before you can claim to have an adequate model of the past -- you must show on empirical observation in the present that your claimed causal factors (blind chance and mechanical necessity) are empirically sufficient to trigger the effect in question, FSCI. We have a known and reliable causal factor for that, but it is not chance plus mechanical necessity, it is design. And, your dodge notwithstanding -- the context should have been obvious -- you plainly do not. Or, instead of playing strawman games on what I said, you would have triumphantly announced it. In short, it is what you tip-toed by in silence that is utterly revealing. kairosfocus
#134 Chris

I won’t be returning to a blog where people like “The Whole Truth” can make comments like that with the active support of people like “Toronto” and the passive support of all the other banned evolutionists. Anything else you want to say to me or respond to, needs to be said here.

I am sorry to hear that. For the record, this is the final comment I made to "The Whole Truth" on my blog about his comments: WT – nine of the last ten posts are from you and they are increasingly personal and lacking in content. If you want to use up so much bandwidth for these purposes please can you do it somewhere else. I could hardly put it more strongly. It was preceded by a number of other requests to alter his approach. This one appears to have been successful as he has not commented since. I find it worthwhile commenting here despite a fairly continuous level of insults and suchlike. Just avoid discourse with those that you cannot get on with for one reason or another (I guess I have to accept that Joseph is one of those). markf
EZ, 133: Pardon, I am very busy just now, so I must be very focussed and selective. So, let me pick a key slice of the cake that shows the vital ingredients in action:
BUT . . . you make a design inference without observational data! You draw conclusions based on the results of events that happened a long time ago . . .
1 --> Are you aware of the difference between operations science and origins science? (Cf discussion here.)

2 --> The former works by direct observation of the facts on the ground; the latter provisionally reconstructs the past by creating a model, based on results of operations science, that shows processes in the present capable of causing what we see as traces from the past beyond observation and record.

3 --> So, if you are challenging the design inference on such traces and dynamics, you must either challenge the whole system of the reconstruction of the past as similarly fatally flawed -- geology, paleontology, cosmology etc. -- or else find yourself guilty of selective hyperskepticism, Cliffordian/Saganian evidentialist form: exerting a double standard in warrant, to reject what you do not want to accept.

4 --> In this case, we have abundant -- billions of test cases, growing at literally millions per week thanks to the Internet -- on how FSCI is a reliable SIGN of intelligent design. This is known, routine source/cause and reliable sign.

5 --> We have, in addition, no sound counter-examples where chance and necessity without intelligent direction give rise to CSI (cf my remarks on Ev just above).

6 --> And on the needle in the haystack/infinite monkeys analysis, we see good reason -- closely related to the analysis that warrants the second law of thermodynamics -- to accept that the targets in question are beyond the capacity of the cosmos, acting by chance and necessity without intelligence, to find.

7 --> In short, we are well warranted to infer from sign to known routine source, with the backup of known reliability of the sign, and the analysis that shows why that should be.

8 --> On the strength of that, we have every good reason to conclude that FSCI is a good sign of design as most credible cause, once we take the blinkers of a priori imposed materialism off.

9 --> So strong is this that we have every good reason to challenge those who would explain the origin of life and of body plans -- both deeply embedded with FSCI -- as follows: to warrant their case, in light of the discoveries about the cellular nature of life since the discovery of DNA's structure in 1953 and its decoding since the 1960's, as well as the related discoveries about the nanotech machinery of cell-based life, they must now show that blind chance and necessity acting by themselves are capable of creating FSCI, or surrender their claims that imply such.

10 --> The rise of GA's shows that this challenge has been implicitly seen as valid, and that it is serious. The nature of GA's turns out to further support the point that FSCI is the product of design, as the just above brings out for the case of ev. In short, the attempted counter-example turns out to substantiate the point.

11 --> Going beyond this, macro evo is a major claim in biology. It cuts across a major set of empirical findings and related analyses as just summarised. While such observations and analyses are inevitably provisional -- as is the case with all of science -- the weight of evidence and analysis, especially in light of that tie to the second law of thermodynamics, plainly and dramatically shifts the burden of proof; just as it is incumbent on proposers of perpetual motion machines of the 2nd kind to show that their contraptions work as advertised. (Unmet to date.)

12 --> So, I am fully warranted to demand:
What you need to do — if you are to think scientifically — is to produce empirical observational, factual data that tests and adequately supports the claims; without ideological blinkers [i.e. a priori evolutionary materialism or its kissing cousins] on.
See my point? GEM of TKI kairosfocus
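[A quick feel for the needle-in-the-haystack arithmetic behind point 6 above: a toy calculation under the thread's own assumptions, namely roughly 10^150 states as the upper bound on available search resources and a 1,000-bit configuration space.]

search_resources = 10 ** 150      # generous upper bound on possible trials
config_space = 2 ** 1000          # configurations of a 1,000-bit string

fraction_searchable = search_resources / config_space
print(f"{fraction_searchable:.2e}")   # ~9.33e-152 of the space can be sampled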
Onlookers: Here is yet another example of MG's failure to face, recognise or acknowledge basic and evident facts, facts that -- for weeks now -- have been just one clicked link away:
Joseph: It has been demonstrated that ev is a targeted search. MG: Your unsupported assertion notwithstanding, it is not possible for ev to be demonstrated to be a targeted search because review of the algorithm and inspection of the code proves that there is no target.
In fact, in the previous CSI Newsflash thread, you will see in my edit response to Graham at comment no 1 a summary of issues and concerns, including matters for MG to explain herself on. Prominent in this -- right there in the opening paragraph of the comment -- is Mung's summary dissection of ev at comment 180, which DOES reveal beyond any reasonable doubt -- from the horse's mouth (cf. Mung's snippets at 182 and some of his initial examination of the Schneider horse race page from 126 on . . . ) -- that it is in fact a targeted search, though the target -- the string(s) to be matched -- is allowed to move around a bit. In addition, ev uses in effect a Hamming distance to target metric in selecting the next generation. To see how that conclusion is warranted, cf. Mung at 177 on the number of "mistakes" and my remarks just following at 179 on how that translates into a Hamming distance to target metric.

(Also cf. no 178 on the closely related general nature of GA's. BTW, GA's, by virtue of using hill climbing on a fitness function of nice trendy slope that leads to the targets, are inherently targeted searches. The design of such nice trendy fitness functions, matched to the underlying config space of the "genome," is a non-trivial, intelligent matter, as can be seen from the online textbook on GA's that was unearthed in the discussion. In this context, this means that GA's operate WITHIN islands of function, i.e. they are models of micro evo, at best. The issue design theory raises -- as I pointed out again yesterday -- is the question of getting TO such islands of function in config spaces that, by many orders of magnitude, swamp the available resources of the atoms of our observed cosmos.)

In short, ev is a slightly more sophisticated version of Dawkins' notorious Weasel. It is plain that MG is either unable or unwilling to examine and properly assess the facts, or -- pardon, this is what she would have to be if she is knowingly making false and misleading assertions as cited at the top of this comment -- she is a brazen rhetor exploiting the fact that onlookers are often not going to examine the true facts for themselves, so can be misled by someone whom they think is their champion. (Especially if such a rhetor uses the tactic of ducking out and waiting until further discussion has in effect buried the relevant facts, so one can then pretend that they do not exist.)

Nor have I forgotten the issue of MG's snide allusion to Galileo's whispered remark after his forced recantation at the hands of the Inquisition, which comes up in the cluster leading up to 180. This is an outrage that needs to be apologised for, as there is no religious magisterium here imposing its will by threats of the thumbscrews. MG is here guilty of outright slander. MG has some serious explaining to do. Again. GEM of TKI kairosfocus
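[For readers who want to see what a Hamming "distance to target" metric looks like inside a selection loop, here is a minimal Weasel-style toy in Python. It is a deliberate simplification for illustration only -- not Schneider's ev code, whose target regions mutate and co-evolve -- but it shows how counting "mistakes" against a target drives the hill-climb.]

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mistakes(s):
    # Hamming distance to the target: the number of mismatched positions.
    return sum(a != b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Copy the string, randomising each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

best = "".join(random.choice(ALPHABET) for _ in TARGET)
while mistakes(best) > 0:
    # Selection: keep whichever of parent and offspring lies closest to target.
    best = min([best] + [mutate(best) for _ in range(100)], key=mistakes)
print(best)

[The fitness function here is nothing but distance to a prespecified string, which is the sense in which such a search is "targeted".]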
ellazimm, you are wrong on just about all of the presuppositions and conclusions that you have made; for instance this one:

'but postulating the existence of one (a Designer) (which is implied by allowing the inference to be drawn) is less parsimonious because it introduces a process for which there is no independent physical evidence AND, if the designer is considered to be outside of the reach of experimentation and evidence then it’s not an issue that can be addressed by science. ,,,'

Yet ellazimm we can look back in time to the beginning of the universe and see the entire universe being brought into existence instantaneously:

The Known Universe by AMNH http://www.youtube.com/watch?v=17jymDn0W6U

and moreover we know what photons are 'made' of, thus we have a very good picture of what kind of event it must have been;

Explaining Information Transfer in Quantum Teleportation: Armond Duwell, University of Pittsburgh Excerpt: In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1) --- Concept 2. is used by Bennett, et al. Recall that they infer that since an infinite amount of information is required to specify a (photon) qubit, an infinite amount of information must be transferred to teleport. http://www.cas.umt.edu/phil/faculty/duwell/DuwellPSA2K.pdf

Researchers Succeed in Quantum Teleportation of Light Waves - April 2011 Excerpt: In this experiment, researchers in Australia and Japan were able to transfer quantum information from one place to another without having to physically move it. It was destroyed in one place and instantly resurrected in another, “alive” again and unchanged. This is a major advance, as previous teleportation experiments were either very slow or caused some information to be lost. http://www.popsci.com/technology/article/2011-04/quantum-teleportation-breakthrough-could-lead-instantanous-computing

Quantum no-hiding theorem experimentally confirmed for first time Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. (This experiment provides experimental proof that the teleportation of quantum information in this universe must be complete and instantaneous.) http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html

,,, thus ellazimm you have no justification for your statement, because we can see the entire universe, itself, coming into existence, and we can experimentally confirm what kind of event it must have been; thus clearly the 'Designer' has certain attributes that lend themselves readily to experimentation. etc.. etc..
The Afters - Light Up The Sky - Official Video http://www.youtube.com/watch?v=8LQH6UDi15s bornagain77
Just to be clear: I don't see how you can look at the existing genome and say that it implies design intervention at some undefined point in the past if your contention is that such events must be observed. We look at the same data and evidence.

You say: I wasn't there but I think there is clear indication that some of this is better explained by the intervention of an intelligent designer as opposed to blind, unguided processes. And you say: In addition, the naturalistic explanation is so highly improbable that it's less parsimonious.

I say: You can't prove a negative. Even though I wasn't there and I don't understand exactly how all the steps occurred I think the different strands of evidence are all consistent with common descent with modification. And I say: I can't disprove the intervention of a designer but postulating the existence of one (which is implied by allowing the inference to be drawn) is less parsimonious because it introduces a process for which there is no independent physical evidence AND, if the designer is considered to be outside of the reach of experimentation and evidence then it's not an issue that can be addressed by science.

You may say: The evidence is also consistent with an intelligent designer who chose to proceed in that fashion. And I would say: True, but a designer with that ability could have done things differently whereas unguided processes have no 'choice'. And, if the designer was limited then you are assuming aspects of the designer which begs the question of the designer's existence.

Is that fair? Probably not, but I tried. And I've got a full day ahead of me now. And I'm not sure there are any more points to make. But, as always, thanks for the discussion! ellazimm
Hi Mathgrrl, thanks for your responses on this thread. I would still appreciate a response to the bacteria comment I made six months ago: it was not a soliloquy, it was a direct challenge to your claims about evolution in bacteria. I think it made uncomfortable reading for you and you don't know how to respond to it. Am I right? Ultimately, the record is here for all to see whether or not your questions have been answered by kairosfocus. I for one think they have been. The only reason I made any other reference to you was because Mark was pressing me on the subject of Joseph (by comparison to himself and yourself). This despite the fact that I expressed a reluctance to do so. I think my comments about you were fair ones: you do tend to ignore points raised by your opponents that you have no answer to, and go on like a broken record about stuff that was dealt with several times over a long, long time ago. As I keep on saying, there really is no need for all this unpleasantness. All the best. PS. I won't be returning to a blog where people like "The Whole Truth" can make comments like that with the active support of people like "Toronto" and the passive support of all the other banned evolutionists. Anything else you want to say to me or respond to, needs to be said here. Chris Doyle
KF: "What you need to do — if you are to think scientifically — is to produce empirical observational, factual data that tests and adequately supports the claims; without ideological blinkers on." BUT . . . you make a design inference without observational data! You draw conclusions based on the results of events that happened a long time ago. You can't point to a clear and unambiguous case of genomic design (by an unknown, mysterious designer) that was observed to occur right then. Isn't it inconsistent for you to ask for a type of evidence that you yourself are unable to provide for your own argument? You don't get to have double standards in science. I'm sorry but lots of science progresses by drawing inferences and conclusions based on the evidence and effects from non-observed events. I've never understood why many in the ID community belabour that point. The whole point of archaeology is to draw conclusion based on cultural remains NOT observed events. And yes, in some cases design arguments arise. But only when it's clear there was a possibility of there being a non-transcendental designer available, i.e. independent evidence of a designer. AND, if/when a speciation event or the creation of an organic molecule under blind processes is observed and documented you'll have to give up that defensive position and fall back. ellazimm
Chris Doyle,
I don’t know about Mathgrrl (disrespecting your opponents doesn’t always manifest itself as explicitly uncivil remarks: ignoring points that have been raised (for 6 months in my case!) and repeating the same refuted arguments over and over again is very disrespectful and a waste of all our time, for example)
Making side comments about someone without backing up your baseless accusations is considerably more rude than anything I've written online anywhere in the past few years. I explained above why you didn't get a response six months ago. Someone less generous of spirit than myself might come to the conclusion that you scrounged around for an excuse to cast aspersions on someone you disagree with for other reasons. Further, I have not repeated refuted arguments, I have been consistently and patiently attempting to get answers to what I originally thought would be simple questions about a key ID metric. I have yet to receive those answers. You, sir, have no business criticizing the online manners of others. MathGrrl
kairosfocus,
I have read through all of your responses since my comment numbered 60 in this thread . . . . you repeatedly claim that CSI has been rigorously defined mathematically, but nowhere do you provide that rigorous mathematical definition.
Now, I have repeatedly pointed you to and linked 23
Yes, you have. Unfortunately, none of the comments to which you've linked contain either a mathematically rigorous definition of CSI, a detailed example of how to calculate it for my gene duplication scenario, or answers to any of the questions I asked in my comments numbered 59 and 83 which are a direct response to your comment 23. I honestly don't understand why it is so difficult to get answers to these questions. As I have mentioned before, if one of my colleagues were to tell me that she had a metric that could be used to characterize data sets in a way no one had done before and I asked her to define her metric in mathematical detail plus show me exactly how to calculate it for a few examples, she'd fill whiteboard after whiteboard for me. The hardest part would be to get her to stop. In the analogous situation here, I can't get anyone to even rigorously define what the metric is, let alone provide any calculations. MathGrrl
Chris Doyle, By the way, since we're discussing how and why online threads are dropped by some participants, I thought you'd like to know that there are still some open questions regarding your comments on Mark Frank's blog. The subthread starts here: http://mfinmoderation.wordpress.com/2011/05/14/does-uncommon-descent-deliberately-suppress-dissenting-views/#comment-3498 MathGrrl
Chris Doyle,
As you’ve begun a theme of unanswered posts, I wonder if you’d be so kind as to respond to a post I addressed to you 6 months ago. You can find it here: https://uncommondescent.com.....ent-366931
As I noted in that thread, my interest was solely to correct a misconception or two about the nature of genetic algorithms. I explicitly said, earlier in the thread, that I didn't have the time or inclination to engage the topic of the evolution of bacteria. Given that, I don't see why you would expect a response to a post that was more of a soliloquy than a question.
Check out post 23 on this thread for the answers you’re looking for from kairosfocus.
That comment does not contain a rigorous mathematical definition of CSI as described by Dembski nor does it use such a definition to explain how to objectively calculate CSI for the first of my scenarios. It is not an answer.
There’s a difference between you not liking the answer and not being answered at all.
I find that comment . . . odd coming from someone who writes so much about the rudeness of ID critics. Glass houses and all that. MathGrrl
Joseph,
It has been demonstrated that ev is a targeted search.
Your unsupported assertion notwithstanding, it is not possible for ev to be demonstrated to be a targeted search because review of the algorithm and inspection of the code proves that there is no target. The entire ev digital genome is subject to mutation and selection, including those sections that model the binding sites and the recognizers. Those sections co-evolve and have different sequences in different runs of ev. There is no explicit target. The really interesting result of ev, predicted from Schneider's work with biological organisms, is that Rfrequency and Rsequence evolve to the same value. There is nothing in the algorithm or code that would lead one to expect this. This further demonstrates the lack of a target in ev. I have provided this detail before, both here on UD: https://uncommondescent.com/intelligent-design/news-flash-dembskis-csi-caught-in-the-act/#comment-378783 and on Mark Frank's blog when the threads here stopped accepting comments: https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 You can read those for more detail. If you want to confirm for yourself that ev is not a targeted search, you can visit Schneider's site and download the papers and source code for yourself. (Now, did you see what I just did there? Instead of simply replying that you are wrong and that I've already addressed the issue, I explained, again, why you are wrong and provided links to where I directly addressed your point and provided sufficient detail for you to understand the topic under discussion. Could you try that yourself sometime, please?) MathGrrl
Joseph, At 7:53 am on 05/19/2011 you wrote:
I have nothing else to say to you- you are a waste of time and bandwidth.
Then at 8:12 am you began a comment with:
And MathGrrl,
What a tease! MathGrrl
Dr Bot: The event, E, is a particular statue of Lincoln, say. The zone of interest or island of function, T, is the set of sufficiently acceptable realistic portraits. The related config space would be any configuration of a rock face. The nodes-and-arcs structure would reduce to a structured set of strings, a net list. This is very familiar from 3-d modelling (and BTW, Blender is an excellent free tool for this; you might want to start with Suzie). Tedious, but doable -- in fact many 3-d models are hand carved, then scanned as a 3-d mesh, then reduced -- there is a data overload problem -- and "skinned." (The already linked has an onward link on this.)

The scope of the acceptable island would be explored by simply injecting noise. This will certainly be less than 10^150 configs. [Notice, the threshold set for possible islands of function is a very objective upper limit: the number of Planck-time quantum states for the atoms of our observed cosmos.] At the same time, the net list will beyond reasonable doubt exceed 125 bytes, or 1,000 bits. That's an isolation of better than 1 in 10^150 of the possible configs. And it is independent of the subjectivity of any given observer. ["The engines are on fire, sir! WE'RE GOING DOWN . . . "] The chi- or X- metrics for Mt Rushmore will -- unsurprisingly -- easily be in "design" territory. In short, another case of the metric corresponding with what we see in the world of direct observation. And, one that shows how a subjective view can very rapidly and easily be taken out of the context of mere clashing opinions.

Now, in this case, because of the specific situation, the function is sculptural resemblance. That would have to be judged (even though we already have an upper limit), and it should be possible to calibrate a model for how much variation we can get away with -- i.e. recognisability as a portrait of a specific individual is subjective, but that is not opposed to its being objective. (And BTW, the coder of the program is using his subjectivity all through the process. As well, the engineer who designs the relevant equipment. Subjectivity is the CONTEXT in which we assess objectivity: credible extra-mental reality, at least on a sufficiently good approximation basis. Subjectivity is therefore not the opposite of objectivity.)

Function, however, is in many other cases not a matter of subjective judgement, especially for algorithmic code in an operational context, e.g. for a system controller. Programs that crash and burn are rather obvious, and may have rather blatant consequences. Similarly, even though we probably would have to use observers to decide when garbling of text is out of function, that is much more easily achieved than might be suspected -- cf Axe's work on that.

Now, we address the red herring led away to the strawman: why is "function" a question of MATHEMATICAL "rigour"? Especially in a context where not even mathematical proofs and calculations are usually fully mathematically rigorous? (Cf 34 - 35 above and 23 - 24 above.) The proper issue is whether function is an objective, observable phenomenon. And, plainly, it is. We may construct mathematical models, but that will not remove subjectivity in the process. To see what I mean: is VOLUME of a liquid an objective thing?
As in, note Fig 3 above, on how to read a meniscus [and onward, how to read an end-point of a titration with a colour indicator]: there is a judgment, and an inescapable subjectivity involved in many relevant cases, but that does not mean the result is not objective. The objection is misdirected, and based on a conceptual error, probably one driven by insufficient experience with real world lab or field measurements. GEM of TKI PS: "We're going downnnnn . . . !" kairosfocus
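[The noise-injection idea for gauging the breadth of the island of function can also be sketched. Everything below is a stand-in: the three-vertex "net list" and the tolerance-based recognisability test are placeholders for a real wireframe and a real panel of judging observers (or a face-recognition model).]

import random

REFERENCE = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # toy net list

def perturb(netlist, sigma):
    # Inject Gaussian noise into every vertex of the wireframe net list.
    return [tuple(c + random.gauss(0, sigma) for c in v) for v in netlist]

def recognisable(netlist, tol=0.1):
    # Stand-in judge (an assumption of this sketch): accept the variant if no
    # coordinate has drifted more than `tol` from the reference. A real test
    # would use human observers, as discussed in the comment above.
    return all(max(abs(a - b) for a, b in zip(v, r)) <= tol
               for v, r in zip(netlist, REFERENCE))

def island_fraction(sigma, trials=10_000):
    # Fraction of noise-injected variants that stay "in function": an
    # empirical estimate of the tolerance zone around the design.
    return sum(recognisable(perturb(REFERENCE, sigma))
               for _ in range(trials)) / trials

print(island_fraction(0.03), island_fraction(0.3))  # wide vs. vanishing island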
KF @ 68:
The way to do that is here, following on from Fig I.2 (cf. also I.1 and I.3) — and has been for many months, i.e reduce to a net list on the nodes-arcs wireframe, with some exploration of the noise to lose portraiture fidelity.
Thanks for the link. I've been too busy to follow the numerous threads on CSI so I appreciate you directing me straight to the pertinent information. The example of CSI in Mt Rushmore brings up a few interesting questions - some of which may be me just misunderstanding. The first thing I should do is check my assumptions, which are that 'function' in this instance is basically 'looks like Lincoln' (or one of the others, but let's stick with Lincoln for the moment). The method you describe, of reducing the relevant portion of the memorial to a wireframe, introduces, as you say, some degree of error in interpretation - namely, how much granularity is required for the likeness to be recognisable. This introduces the interesting question of whether the CSI is actually measuring Mt Rushmore, or just the minimum information required to convey a likeness of Lincoln in wireframe format - we might find that the minimum likeness is recognisable, but not distinguishable specifically as Rushmore from any other sculpture of Lincoln. And we mustn't forget that we have knowledge, and images, of Lincoln to compare to (i.e. what if the face was distinct, but the product of the artist's imagination and not a real person?). Another big factor here is that this is, and always will be, a subjective measure. If the observer is someone like me with mild Prosopagnosia then the granularity must be higher than for someone else. If you happen to have a relative who looks like Lincoln then you might argue that the face at Mt Rushmore actually looks more like your great uncle Fred than Lincoln himself. The upshot is that any CSI calculation in this case would require some large error bars. It gets even more interesting though if the observer - the semiotic agent - is blind! Does it still have function? Maybe: you can touch the face and feel the contours (if you are a skilled climber) but the CSI may be dramatically different when the face is apprehended this way. The general observation that struck me (and which may be in error) is that the measure of CSI in this particular instance depends a great deal on the observer - people could argue over whether a low granularity reproduction actually looks like Lincoln or not. Contrast this with physics - we have an objective measure of force (with the label Newton) and so we don't get stuck trying to decide if there are x or y Newtons of force - we can measure it objectively and the measure does not depend on any personal skills or biases. It is consistent and reproducible. What I am interested in knowing is how you can objectively calculate CSI rather than relying on a subjective assessment. If CSI in general requires a subjective assessment of a property like function (or is it specificity?) then how can it be mathematically rigorous? DrBot
MarkF:
Me: How do you know that for any position any of them is possible, much less equiprobable?
Joseph: Science.
This is rather lacking in detail!  You believe that “No one knows how genes originated”.  How can science not know how they originated and yet know that all nucleotides are equally probable?
Me: Although we don’t know a simple law or formula that determines the order, nevertheless genes are created by a process with stochastic influences and limitations.
Joseph: No one knows how genes originated. No one knows how genes originated. And there isn’t any evidence for blind, undirected processes producing one.
We may not know in detail how the first genes began but we know quite a lot about the processes by which they develop and change – duplication, inversion, replication, transposition, point mutation etc.
Me: For example, in the case of gene duplication the nucleotides will almost certainly replicate the pattern of the gene that is being replicated – all other orders are very unlikely.
Joseph: Do you read what I post?
Most of it.  I don’t always understand it.  Does one of your comments deny that when a gene is duplicated then the duplicate is almost identical to the original?
Me: What claim did you make and need to support?
Joseph: This one:
Please explain how you conclude that there are two bits of information per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).
That’s not a claim; it is a request!  Or are you referring to the bit in brackets?  markf
Joseph & KF: I'll try and get back to you on your points tomorrow. But I think we're reaching an impasse. It happens. ellazimm
EZ: You can assert (or even believe) anything you want. What you need to do -- if you are to think scientifically -- is to produce empirical, observational, factual data that tests and adequately supports the claims; without ideological blinkers on. Which, in the case of macro-evo resulting from cumulative, filtered micro-evo, you simply do not have. It is not in the fossils -- overwhelmingly, sudden appearance, stasis and disappearance; and it is not in the implications of code or integrated functional, complex structure. After 150 years of trying.

You are in effect proposing to move from "See Spot Run" to a book, one tiny functional step at a time. Or from "Hello world" to an operating system, one tiny non-foresighted step at a time. Doesn't work. You would go broke trying to write books or software that way, real fast.

All of this goes to underscore the fundamental misconceptions that are blinding people from seeing the significance of functionally specific complex information as a sign of most credible cause. DNA and the wider systems of life are replete with FSCI, and there is exactly one empirically demonstrated, routinely observed cause of such FSCI: intelligence. The chance-based engines of variation are not credibly able to generate the required FSCI. Not for first life, and not for novel body plans.

I include first life as I insist that without a credible root the Darwinian tree of life has no basis. So, until there is a credible, empirically warranted chance-plus-necessity chem evo scenario that leads to coded, DNA-based life with genomes beyond 100,000 bases, with cells that metabolise and have a von Neumann self-replicating capacity, there is no basis to even discuss macroevolution. Going beyond that, unless you have a similarly empirically warranted mechanism for chance variations and natural selection etc. to arrive at novel body plans requiring 10 - 100+ million new functional bases, viable from embryogenesis on, on Earth, dozens of times over in the past 500 - 600 MY, you have no basis for confidence in macroevolutionary models.

I hardly need to underscore that there is no sound empirical, observational warrant for such -- macro evo thrives by ideological censorship and lockout of the only observed source of FSCI: intelligence. This is a triumph of ideology over evidence. What we do have, as just pointed out, is the evidence that genetic engineering is real -- cf Venter and colleagues. Indeed, a molecular nanotech lab a few generations beyond Venter would be a sufficient cause for what we see. Beyond that, we do have empirical evidence of adaptation to environmental constraints, at micro-level. Over-extrapolation backed by ideological materialistic apriorism is not a sound basis for science, and never was. GEM of TKI kairosfocus
ellazimm:
the environment and competition favour those who are ‘fitter’.
But 'fitness' is determined by whoever leaves the most offspring due to heritable genetic variation.
The differential reproduction/survival has to do with the ability to survive/exploit the situation BECAUSE of different genomic influence.
There are several reasons why organisms survive and reproduce- better genetics is just one.
I am arguing that macro-evolution is micro-evolution over long periods of time.
There isn't any data to support that claim. Joseph
KF: The mutations add the information in a step-by-step manner with the environment selecting which variations 'make sense'. I suspect you're now going to pounce on me for claiming that random mutations or duplications add information. I am arguing that macro-evolution is micro-evolution over long periods of time. The argument against this is probabilistic: there's not enough time for that many mutations to occur. That's the battle ground. Data is being generated. Lenski's work is pertinent. I'm assuming the recent work reported on ID: The Future is pertinent. But, logically, you cannot prove a negative. You cannot prove a highly improbable event didn't occur. You can only say it's extremely unlikely. Which kind of gets us back to a fine-tuning type argument. Such and such is sooooooo highly improbable that it's more reasonable to assume it's by design. I get that. I just don't like making assumptions. And, I agree, I am NOT addressing the origin of the first replicator. I'm not qualified to make those arguments. BUT, given a minimal replicator, I dispute the further need to search the entire configuration space. And I accept that's a big given. ellazimm
EZ: Re: The environment/filter carves out the information by selecting the random variation which is more successful from the other random variations. In short, NS is a subtracter, a culler; not an adder, a creator of info. You are back to an implicit appeal to chance variation as the source of information. And so you are right up against the needle-in-the-haystack problem for first life and for the origin of body plans. You have a theory of what is not in serious dispute even by Young Earth Creationists: micro-evo. To extrapolate this to body-plan level macro-evo, you have to show much more, and in the teeth of the config space hurdles identified. GEM of TKI kairosfocus
CD: Yup, regulatory networks/circuits and the machinery that makes DNA info work in the living cell, are just as important. Only, much less understood. I gotta clip and respond to EZ, then get out of here to my next appointment. G kairosfocus
Joseph: the environment and competition favour those who are 'fitter'. Different environments and different situations favour different variations. Maybe you prefer the term filter to choose? The differential reproduction/survival has to do with the ability to survive/exploit the situation BECAUSE of different genomic influence. But the test is the 'environment'. That's the filter. KF: The information comes from generations of variation being culled and bred by the situation and environment. A mutation is valueless UNLESS it conveys an advantage. After eons of advantages stacking up you have a compendium of information that is capable of producing a fairly fit individual. My son plays video games. He doesn't like reading manuals and he doesn't spend much time trying to figure out the puzzle. He's 9. But he can remember. He tries things at random and 'dies', a lot. But, eventually, his information database is created and honed by the game environment. And don't start telling me that because the game was designed that argues for design. I'm talking about the way a series of random variation can be guided into a font of information by a filtering environment. This is why natural selection is NOT random. There is a 'memory' of what works. And new variation builds on what's worked in the past. That's how you add information. You start with randomly selecting options. The successful options you remember (i.e. those individuals survive). You add another layer of random tries. Save the winners. Etc. The environment/filter carves out the information by selecting the random variation which is more successful from the other random variations. ellazimm
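[The "remember the winners, add another layer of random tries" loop described above can be written down directly. This is a toy hill-climber; the fitness function is an arbitrary stand-in for whatever the environment happens to reward.]

import random

def fitness(genome):
    # Arbitrary stand-in for "what the environment rewards": here, simply
    # the count of 1-bits. Any graded score would do for this illustration.
    return sum(genome)

genome = [random.randint(0, 1) for _ in range(64)]  # random starting point
for generation in range(2000):
    variant = genome[:]
    i = random.randrange(len(variant))
    variant[i] ^= 1                      # one random change ("a random try")
    if fitness(variant) >= fitness(genome):
        genome = variant                 # the 'memory': winners are kept
print(fitness(genome))                   # climbs toward 64 over generations

[Whether such a loop models anything beyond hill-climbing within an existing island of function is, of course, exactly the point in dispute in this thread.]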
CD: I hear you. I saw an outing attempt on me and that was enough to tell me I no longer wished to have anything to do with MF's blog. In addition, observe the subtle incivility here, where in a thread I have posted, MF manages to studiously ignore me. I find that quite rude. Also, a bit silly, as MF -- as just pointed out -- is making simple errors that, if he would pay attention, he could correct. GEM of TKI kairosfocus
Excellent point about the destructive influence of natural selection, which is so often misunderstood as a creative influence, kairosfocus. I believe that genetics is so often misunderstood too. It's not all in the genes. There's a more important and bigger epigenetic picture that we've yet to fully understand: a design plan that blows chance explanations out of the water. Chris Doyle
MF: You are trained as a philosopher, and you worked in the computer industry. Surely you can do the simple research to find out that nucleotide bases take values A/G/C/T (or, for RNA, U) in any given position, and the sugar-phosphate chain is essentially independent of which is where in any one string -- the complementarity constraint is the key-lock fit across the two helices in the DNA.

If the string sequence were strongly physically constrained by necessity, it could not store information, as information depends on the ability to have different possible states along the string based on content, not on constraints of necessity. If it were constrained by physical necessity, we would be looking at a crystal, not an informational macromolecule that can and does vary to specify the particular protein. Such a string could also chain at random, but then that brings us straight to the point: at-random chains are such that the functional states are deeply isolated in the config space of possible AA strings, per the code.

Also, FYI: 2^2 = 4. Thus, we have two bits of storage capacity per base. Going further, the three-letter codons used for AA sequencing therefore have 3 * 2 = 6 bits of maximum potential storage. They are used for what is essentially a 20-state AA system, giving the 4.32 bits per AA you may see, due to the redundancy in the system. GEM of TKI kairosfocus
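[A quick check of the capacity figures just stated; the final redundancy line is simply the difference between codon capacity and amino-acid usage.]

import math

bits_per_base = math.log2(4)          # 4-state DNA base: 2 bits of capacity
bits_per_codon = 3 * bits_per_base    # three-letter codon: 6 bits maximum
bits_per_aa = math.log2(20)           # 20-state amino acid: ~4.32 bits used
redundancy = bits_per_codon - bits_per_aa   # ~1.68 bits per codon of slack

print(bits_per_base, bits_per_codon, round(bits_per_aa, 2), round(redundancy, 2))
# 2.0 6.0 4.32 1.68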
EZ: Joseph is correct. As Darwin said, in his peroration to Origin:
It is interesting to contemplate a tangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner, have all been produced by laws acting around us. These laws, taken in the largest sense, being Growth with Reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the conditions of life and from use and disuse [yep, he had Lamarckian elements in his thought . . . ]: a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing Divergence of Character and the Extinction of less-improved forms. Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved. [Origin, 6th edn, Ch 15]
Natural selection boils down to this: whatever variants are able to find niches in the ecosystem that allow them to reproduce successfully will take root and pass on their genes to future generations. The selection, contrary to popular opinion, is not a source of information, but patently a culler -- a remover -- of information. The variants that do not find niches do not survive to pass on genes. That which subtracts does not add. We have to look at that which supposedly adds before we can see how subtraction may lead to survivors.

By repeating the mantra "natural selection" one does not escape the need for engines of variation -- and, for darwinian-type evolution, specifically non-foresighted engines of variation. Within an island of function, that can account for hill climbing, but it does not at all account for the increments of information required to get to first life and onward to novel body plans.

And, besides, the selection is itself a significantly chance-based process: we are talking odds here, not determination. If the beasts with the wonderful new variant get eaten in the nest, or caught in a fire, or an epidemic, or are in a time of horrible drought or catastrophe, their superior genes -- as assumed -- will make very little difference. GEM of TKI kairosfocus
MarkF:
How do you know that for any position any of them is possible, much less equiprobable?
Science.
Although we don’t know a simple law or formula that determines the order, nevertheless genes are created by a process with stochastic influences and limitations.
No one knows how genes originated.
Genes are not formed by throwing a lot of nucleotides into a bucket and selecting them at random.
No one knows how genes originated. And there isn't any evidence for blind, undirected processes producing one.
For example, in the case of gene duplication the nucleotides will almost certainly replicate the pattern of the gene that is being replicated – all other orders are very unlikely.
Do you read what I post? "What claim did you make and need to support?" This one:
Please explain how you conclude that there are two bits of information per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).
Joseph
Ellazimm:
Natural selection works on the same variance engine as artificial selection but the choices are made by the environment which favours some variations over others.
The environment doesn’t choose. There are a number of reasons why some survive and some do not. For natural selection the differential reproduction has to be due to heritable variation. Joseph
Hello again Mark, Ah, I guess I didn't stick around long enough to see you pulling up "The Whole Truth" repeatedly. What I did see was Toronto defend him as merely a "frustrated commentator", and that just gave me the strong impression that that was a forum I no longer wanted to participate in. I don't know about Mathgrrl (disrespecting your opponents doesn't always manifest itself as explicitly uncivil remarks: ignoring points that have been raised (for 6 months in my case!) and repeating the same refuted arguments over and over again is very disrespectful and a waste of all our time, for example) but she has been given at least one blog entry here and so a stronger platform than many of the rest of us! And you admitted yourself, Mark, you've been rude and offensive in the past here... maybe that's why you and Joseph are winding each other up (there seems to be a bit of a disturbing witch-hunt going on against Joseph from the evolutionist side too). If we can somehow wipe the slate clean, and we then see evolutionists pulling each other up here (and elsewhere), you will see me and other UD contributors returning the favour, I'm sure. Until then, please don't ask me to wade into the middle of a blood-feud! Chris Doyle
Again there isn’t any law nor formula that determines the ordering of nucleotides down one strand of DNA. That means that at any one locus any of the 4 nucleotides is possible.
How do you know that for any position any of them is possible, much less equiprobable?  Although we don’t know a simple law or formula that determines the order, nevertheless genes are created by a process with stochastic influences and limitations. Genes are not formed by throwing a lot of nucleotides into a bucket and selecting them at random. For example, in the case of gene duplication the nucleotides will almost certainly replicate the pattern of the gene that is being replicated – all other orders are very unlikely.  Similar considerations apply for insertions, inversions etc.
IOW MarkF do YOU have any evidence to support your claim?
I don’t know what claim you are talking about – can you clarify? markf
Joseph: Natural selection works on the same variance engine as artificial selection but the choices are made by the environment, which favours some variations over others. A different environment would favour other variations. Then there's sexual selection, gene drift, geographic distribution and others. It's not my field but I'm sure you can find a decent discussion of all the selection processes without spending much time or effort. Their truth or falsehood is not dependent on my poor ability to elucidate them here. Fortunately. But yeah, basically I agree with you: whatever is good enough survives. But good enough covers a lot of ground. And it doesn't make it random. Genetic mutations look pretty random; they seem to occur at predictable rates but you can't say ahead of time when one will occur. Selection processes favour certain variations over others non-randomly. Otherwise evolution would not occur. ellazimm
ellazimm- Please provide the evidence for these alleged non-random selection processes. (I will give you artificial selection)- but natural selection is blind, mindless and purposeless. Whatever is good enough survives. And that can be any number of traits and allele combinations. Joseph
MarkF:
Please explain how you conclude that there are two bits of information per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).
Again there isn't any law nor formula that determines the ordering of nucleotides down one strand of DNA. That means that at any one locus any of the 4 nucleotides is possible. That means my alleged "assumption" isn't an assumption at all. IOW MarkF do YOU have any evidence to support your claim? Joseph
#101 Chris
In the meantime, don’t you think enough personal remarks have been made here (and over on your blog)? At the same time as I was being assured that evolutionists are the good guys, never rude or offensive, some guy calling himself “The Whole Truth” completely contradicted everything that Toronto and co were saying.
And I repeatedly pointed out to "The Whole Truth" that I thought he was being uncivil, and eventually he dropped out. I agree too many personal remarks have been made on this forum - but they still continue. I don't believe Mathgrrl (or I) made any of them, and I thought a comment from a pro-ID supporter who clearly cares about civility might curb them. markf
KF: Sure, there is random variation in the way mutations, duplications, splices, etc occur in the genome. But, once the process gets started, the mutations arise from an existing base and then the very non-random selection processes have their way with them. There's no search. There's no arrival of the fittest, just fitter. Or, even better, more suited, better able to exploit the resources in the proximal environment. Able to out-compete the competition. But you've all heard/read/debated these points before so I shan't belabour them. I know I am NOT discussing how the first replicator arose. As has been pointed out many times here and elsewhere there are lots of notions and hypotheses being promulgated. Sometimes one aspect of a possible progression is deemed more or less likely but . . . no one knows yet. We may never know. But not knowing doesn't mean it was designed. If you don't know what the first replicator was I don't think you can convincingly argue that it's so highly improbable as to force the design conclusion. And if you don't know what the first replicator was then how can you say it couldn't have arisen from inorganic processes? ellazimm
#99 Joseph
You can’t even read what I post. I did NOT say there are 4 bits per nucleotide. And the order of NUCLEOTIDES is not determined by any law or formula.
I apologise – my typo. Here is the corrected request: I have supplied the formula for Shannon information. Please explain how you conclude that there are two bits of information per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue). markf
EZ: A brief point. On causative factors, we need to explain highly contingent phenomena. Necessity does not explain contingency, so the alternatives are chance and/or design. In the usual evolutionary representation we have: Chance variation + natural selection --> descent with modification [at pop level], aka evolution. The variation does not come from differential reproductive success of sub-populations, but from the chance variation. All that natural selection -- which is usually headlined as if it did the main job -- amounts to is differential reproductive success among already existing variants in populations; the term simply describes that this happens. That which explains the survival of the [reproductive-success] fittest does not explain the ARRIVAL of the fittest. For that, we need engines of variation, and by definition the evolutionary materialistic frame rules out design as one of those engines. So, however we may categorise them, the engines boil down to chance: the variation is utterly uncorrelated with any foresighted process or goal. So, we are back to chance vs design. More when I have time. GEM of TKI kairosfocus
Hi Mark, I'll be in touch privately regarding SITC. In the meantime, don't you think enough personal remarks have been made here (and over on your blog)? At the same time as I was being assured that evolutionists are the good guys, never rude or offensive, some guy calling himself "The Whole Truth" completely contradicted everything that Toronto and co were saying. I've said everything I need to say about the way people conduct themselves in this debate: on your blog, and to Astroboy on a separate thread over here this morning. The subject matter of this discussion is so fascinating and, indeed, important that we shouldn't waste it on unimportant, boring and damaging distractions. By the way, how many bits do you think there are in a nucleotide? If you offer us your insight, that might move your discussion with Joseph and co into healthier territory. Chris Doyle
ellazimm:
I agree with MathGrrl (even if she does have weird spelling conventions): no evolutionary process is a random search.
It isn't a search at all. It is all "stuff just happens and what works well enough gets kept". Joseph
MarkF:
I have supplied the formula for Shannon information. Please explain how you conclude that there are four bits per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).
You can't even read what I post. I did NOT say there are 4 bits per nucleotide. And the order of NUCLEOTIDES is not determined by any law or formula. Nucleotides, Mark- ONE side of the DNA is what we are concerned with. BTW there isn't any evidence for blind, undirected chemical processes creating a gene from scratch. Joseph
MG: Pardon some direct words, re:
I have read through all of your responses since my comment numbered 60 in this thread . . . . you repeatedly claim that CSI has been rigorously defined mathematically, but nowhere do you provide that rigorous mathematical definition.
Now, I have repeatedly pointed you to and linked 23 - 24 and 34 - 35 [also linked to in my for-the-record at MF's blog] above, where this drumbeat strawman tactic of empty rhetorical misrepresentation is corrected yet once again. I am sorry, your response as cited simply tells me that you are not acting seriously. When, several times, I linked you to the places where I -- again -- dealt with the issues, correcting not only your direct claims but the underlying logical and conceptual errors, and you come to me with a chirpy little "I have read through all of your responses since my comment numbered 60 in this thread . . .," all you are telling me is that you waited till the points you needed to respond to were buried under further posts and exchanges. Put that with the pattern of willfully stating what you know or should know is false and/or misleading, and I am not impressed. I am on a break for a moment with a client, so I give you the opportunity to respond seriously on the merits above, and while you are at it, to explain yourself on the concerns addressed here. Then, we would have a basis for a fresh, serious start. GEM of TKI kairosfocus
KF: I agree with MathGrrl (even if she does have weird spelling conventions): no evolutionary process is a random search. But I was also making the point that IF it were a random search, the process would not necessarily continue past a viable solution and that, if the search is random, a workable solution might arise at any time. But they're not random searches, so your upper bound is only the most extreme case. Also, probabilistic arguments are tricky. I'm sure some of you are aware of the counter-intuitive result when asking the question: how many people do you need for the probability of two (or more) of them having the same birthday (date and month, NOT date and month and year) to be one half? ellazimm
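[Editor's note: ellazimm's birthday question has the well-known answer of 23 people. A minimal Python sketch, not part of the original exchange, that computes it, assuming 365 equally likely birthdays and ignoring leap years:]

```python
def shared_birthday_threshold(target=0.5, days=365):
    """Smallest group size n with P(at least one shared birthday) >= target."""
    p_distinct = 1.0  # probability that all birthdays so far are distinct
    n = 0
    while True:
        n += 1
        p_distinct *= (days - (n - 1)) / days  # nth person avoids the first n-1
        if 1.0 - p_distinct >= target:
            return n

print(shared_birthday_threshold())  # prints 23
```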
Chris, we were discussing the relative civility of ID proponents and opponents on this forum. Would you care to comment on Joseph as opposed to MathGrrl? markf
#91 Chris I am sorry I didn't get back to you. I have been rather busy but managed to get home early today. I appreciate your invitation to read Meyer's book (and even send a copy to me). I am unwilling to buy it (I have spent too much money on ID books that repeat the same errors in different ways already) and I doubt it will be in the library. Perhaps you could contact me by e-mail: mark dot t dot frank at gmail dot com? markf
#88 What I said is that because there are 4 possible nucleotides that means, per Shannon, there are two bits of information per nucleotide. I have explained this several times. Apparently I was correct and you are a waste of time. Joseph I have supplied the formula for Shannon information. Please explain how you conclude that there are four bits per nucleotide using that formula - without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue). markf
And MathGrrl, It has been demonstrated that ev is a targeted search. Sorry but you lose... Joseph
kairosfocus, First I count the bits- via nucleotides- and then I check on the variation tolerance to get the specification via Durston et al's metric. I provided Durston's paper in MathGrrl's guest post. The point in counting first is that this gives me the upper limit of information (that may be specified). It is like resistors in parallel- I look at the values and know the total R will be less than the lowest-value resistor in the parallel network. That means if I do the calculation and come up with a number greater than the lowest R, I did something wrong. The same goes for SI/CSI. Once I know the information-carrying capacity, I know the final number (based on the specification) cannot be greater than that; see the sketch below. Joseph
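[Editor's note: a minimal Python sketch of the "capacity first" check Joseph describes, using the Durston Table 1 figures quoted elsewhere in this thread. The 4.32 bits/site value is log2(20) from the Durston excerpt; like the parallel-resistor rule, the measured fits must never exceed the raw capacity:]

```python
import math

BITS_PER_AA_SITE = math.log2(20)  # ~4.32 bits of raw capacity per amino acid site

proteins = {            # name: (length in AAs, functional bits / fits, per Durston Table 1)
    "RecA":      (242,  832),
    "SecY":      (342,  688),
    "Corona S2": (445, 1285),
}

for name, (length, fits) in proteins.items():
    capacity = length * BITS_PER_AA_SITE  # upper limit on specifiable information
    assert fits <= capacity, f"{name}: fits exceed raw capacity -- recheck"
    print(f"{name}: capacity {capacity:.0f} bits, measured {fits} fits")
```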
Hello again Mark, I didn't hear back from you regarding "Signature in the Cell". That's a shame, because if you read Chapter 8 'Chance Elimination and Pattern Recognition' it would offer you some answers to the questions you pose about Information Theory. Chris Doyle
Hi MathGrrl, As you've begun a theme of unanswered posts, I wonder if you'd be so kind as to respond to a post I addressed to you 6 months ago. You can find it here: https://uncommondescent.com/evolution/can-you-say-weasel/#comment-366931 Many Thanks, Chris PS. Check out post 23 on this thread for the answers you're looking for from kairosfocus. There's a difference between you not liking the answer and not being answered at all. Chris Doyle
Yes, it is aligned with Dembski’s description and I have explained the mathematical rigor. MathGrrl
Simply asserting this does not make it so.
Strange that I provided Stephen C Meyer to support my claim. And anyone who read and understood NFL knows what I say is not an assertion. I have nothing else to say to you- you are a waste of time and bandwidth. Joseph
MarkF:
You can of course define CSI as 2 bits per nucleotide.
You don't have any idea what you are talking about. What I said is that because there are 4 possible nucleotides that means, per Shannon, there are two bits of information per nucleotide. I have explained this several times. Apparently I was correct and you are a waste of time. Joseph
Joseph #81
me: you cannot just look at a gene or amino acid and work out the amount of Shannon information it contains.
Joseph: Yes, you can.
me: You need to understand the context in which that gene or amino acid was created to calculate the Shannon information.
Joseph: Good luck showing that to be true.
OK. I will give it a try (I have done this many times before - but not recently.) The formula for the Shannon information in any message is: -log2 P(i), where P(i) is the probability of the observed outcome. The two issues are: 1) How do you define the outcome? 2) How do you calculate the probability of the outcome? The definition of the outcome depends on the specification, e.g. if you throw a die, do you define the outcome as a six or as an even number? If it is a gene, are you talking about that exact sequence of nucleotides, any sequence with a similar function, any sequence that would not affect the organism’s fitness, or what? The subjective nature of the specification for a gene or an amino acid is what Heinrich was concerned with. The probability is even harder. You appear to have simply assumed that all nucleotides are equally likely. But genes are not created by throwing nucleotides together at random. They are created by processes such as duplication, transposition, inversion and, of course, point mutation. To assign a probability to a specific gene would imply knowing in some detail the process by which it arose. You can of course define CSI as 2 bits per nucleotide. You can define it as anything you like. But if you do so, that is not Shannon information and you have to wonder what significance the number has. markf
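[Editor's note: a small Python sketch of markf's point that the Shannon figure depends on the probability model; the duplication probability used below (1 in 10^6) is an arbitrary assumed value for illustration, not a measured one. The same 100-base gene carries 200 bits under a uniform-and-independent model, but far less under a model where it arises by duplicating an existing gene:]

```python
import math

def shannon_info(p):
    """Self-information of an observed outcome: -log2 P(i)."""
    return -math.log2(p)

n = 100  # length of a hypothetical gene, in nucleotides

# Model 1: the four bases equally likely and independent at each site.
p_uniform = (1 / 4) ** n
print(shannon_info(p_uniform))      # 200.0 bits, i.e. 2 bits per base

# Model 2 (toy): the sequence arises by duplication of an existing gene,
# assumed here to occur with probability 1e-6 -- an illustrative number only.
p_duplication = 1e-6
print(shannon_info(p_duplication))  # ~19.9 bits, regardless of gene length
```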
kairosfocus,
Re: But you don’t have to search the whole search space if you’re lucky. If I lose my keys I rarely have to search ALL the possible places they could be . . .
In short, you admit that you believe in a lucky noise machine/ miracle, similar to a perpetual motion machine of the second kind.
Not at all. ellazimm is simply pointing out that evolutionary mechanisms are not analogous to random search. Evolutionary mechanisms can be modeled, with some caveats, as searching in the immediate vicinity of known functional regions of the fitness landscape. In the real world we observe that the "fitness landscape" (again, treated as a model with caveats) is amenable to this type of search. Once you have provided your rigorous mathematical definition of CSI and demonstrated how to calculate it for the first of my scenarios, we can have a very interesting conversation about how this affects probability distributions. That must wait for your other answers, though. MathGrrl
Joseph,
Your brief description is not a rigorous mathematical definition of CSI and it is not aligned with Dembski’s description.
Yes, it is aligned with Dembski’s description and I have explained the mathematical rigor.
Simply asserting this does not make it so. You need to demonstrate it with references to Dembski's description. When you attempt to do so, you will find that your definition is not aligned with his.
You have also not shown how to calculate CSI objectively for any of the scenarios I’ve described.
I say that I have.
Again, it's easy to make assertions but supporting them requires effort. If you really had shown this, you would be able to copy and paste your calculation or provide a reference to it. You have done neither.
Thus far, no one has shown how to calculate CSI for any of the scenarios I described, nor has anyone provided a rigorous mathematical definition that is consistent with Dembski’s description.
I strongly disagree.
This isn't a matter of opinion. If you have examples of a rigorous mathematical definition of CSI, as described by Dembski, and calculations for my four scenarios, please produce them. The fact that you don't do so in response to my statements is telling.
I would expect someone making such claims to have already performed calculations similar to those I am requesting.
And I say it has been done, just not for your silly examples.
But you just said that it had been done for my "silly" examples (more of the civil discourse expected here, I see). Which is it? MathGrrl
kairosfocus,
The way to do that is here, following on from Fig I.2 (cf. also I.1 and I.3) — and has been for many months, i.e reduce to a net list on the nodes-arcs wireframe, with some exploration of the noise to lose portraiture fidelity.
The calculation related to Hamlet on your referenced site suggests that you calculate CSI as two to the power of the number of bits required to describe the artifact under consideration. Is that correct? MathGrrl
kairosfocus, I have read through all of your responses since my comment numbered 60 in this thread and have yet to see you address the two very direct questions I've asked. Let's try to make some progress by breaking this down into simple questions that can be answered succinctly. First, you repeatedly claim that CSI has been rigorously defined mathematically, but nowhere do you provide that rigorous mathematical definition. You could eliminate the need for your assertions by simply reproducing the definition here in this thread, in a single comment without any extraneous material. Could you please do so? Second, you have yet to reply to my question in comment 59:
CSI, I have explicitly said, many times, is a descriptive concept that describes an observed fact
By this, are you asserting that it is not possible to provide a mathematically rigorous definition of CSI, even in principle? If your answer is yes, I think you have a disagreement with some of your fellow ID proponents. If your answer is no, could you please simply state the mathematically rigorous definition of CSI, as described by Dembski, in a single, stand alone comment, without myriad tangential points, postscripts, and footnotes? It would go a long way to clarifying your position.
With these two questions answered, again as succinctly as possible, I believe we can make some progress in the discussion. Are you willing to work with me on this? MathGrrl
F/N: Joseph, just above, is using the known storage capacity of DNA as a metric of information, i.e. directly reading the bits of storage used, just as we do for program code, files, computer memory and DVDs or USB sticks. This is obviously a valid metric -- the most widely used and understood one, and it is the easiest one to use. DNA is based on strings of 4-state elements, and that makes it 2 bits per element. Codons for AAs in proteins use three bases, and so 6 bits. A 300 AA protein requires 1,800 bits. Durston et al were using a metric that takes into account a certain degree of redundancy in proteins due to flexibility in the AAs being used in the chains. That, too, is valid, and it indicates that a certain fraction of the storage capacity is being used to effect the actual information; notice their example as excerpted, where for a given protein something like 10^-106 percent of the available space is valid for that protein. In my earlier remarks, I made an allowance for that in discussing the FSCI in something like a Mycoplasma: 100,000+ bases, for a parasitical organism dependent on others for key components of life. You will recall, I wrote off half as "junk" -- most unlikely. I then used just 1 bit per base, to allow for redundancy. The results are all going to be of the same order of magnitude, and all of them point well beyond the threshold at which a cosmos-scope search on chance-based random walks plus trial and error could be a viable explanation for what we see. In short, we are in a situation where we can afford to be only roughly right, as that still gives us orders of magnitude in hand, relative to the threshold for the search resource limit of the solar system or the observed cosmos. The resistance to the conclusion that FSCI beyond that threshold is -- on empirical and analytical grounds -- best explained by intelligence is plainly ideological -- i.e. a priori materialism imposed regardless of the evidence, not properly scientific. At least, if we understand science as follows:
an unfettered (but ethically and intellectually responsible) progressive pursuit of the truth about our world in light of observation, experiment, theoretical modelling and analysis, testing and discussion among the informed.
kairosfocus
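[Editor's note: a minimal Python sketch of the raw storage-capacity arithmetic in the comment above: 2 bits per base, 6 bits per codon, 1,800 bits for a 300 AA protein, and the halved, 1-bit-per-base allowance used for the Mycoplasma example:]

```python
import math

BITS_PER_BASE = math.log2(4)         # 2 bits: four possible bases per site
BITS_PER_CODON = 3 * BITS_PER_BASE   # 6 bits: three bases per codon

protein_aa = 300
print(protein_aa * BITS_PER_CODON)   # 1800.0 bits of raw coding capacity

# Allowance for redundancy, as in the Mycoplasma discussion above:
genome_bases = 100_000
usable_bases = genome_bases / 2      # write off half as "junk"
print(usable_bases * 1)              # 50000.0 bits at 1 bit per base
```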
MarkF: Joseph (in addition to the points that Heinrich makes above)– as I am sure you know, the Shannon information in a message is relative to a probability model of the possible outcomes. That is why 2 bits per nucleotide and 6 bits per amino acid.
For some reason you seem to be assuming a model that all nucleotides are equally likely in all cases.
There aren't any physio-chemical laws that tell us the order of nucleotides along one side of a DNA sequence.
But that clearly is not true of real biological events such as gene duplication.
You are clearly confused. First, there isn't any evidence that a gene duplication is via blind, undirected chemical processes. And second, the way the nucleotides are strung is not decided by any physio-chemical law.
Hence the request to do the calculation for real events i.e. you cannot just look at a gene or amino acid and work out the amount of Shannon information it contains.
Yes, you can.
You need to understand the context in which that gene or amino acid was created to calculate the Shannon information.
Good luck showing that to be true. You do realize that your "word" is not a valid reference.
But no doubt you understand all this – what I would like to see is the calculation in a real context.
I would like to see you produce positive evidence for your position, but we both know that ain't going to happen. And I would love to see the methodology used that determined gene duplications are blind watchmaker processes. Joseph
CSI is Shannon information, of a specified complexity, with meaning/ function. The math behind Shannon information tells us, with respect to biology, there are 2 bits of information per nucleotide, as there are 4 possible nucleotides: 4 = 2^2 = 2 bits/nucleotide. For amino acids it will be 6 bits, as there are 64 possible coding codons (including STOP): 64 = 2^6 = 6 bits. Do you understand that bit of math, MathGrrl? Are we OK with the fact that Shannon information has been duly defined with mathematical rigor? For the complexity part Dembski provided the math in “No Free Lunch”. And for specification you need to determine the variation tolerance- and the math for that also exists and has been presented to you. Heinrich:
How do you formally define “specified complexity”, and “meaning/ function”?
I told you already- Dembski took care of the complexity part in NFL and he also covered "meaning/ function". In biology, specification refers to biological function. IOW "information" as it is used by IDists is the same as its everyday use. Joseph
F/N: Kindly note my point by point response to MG at 23 above, which she is ignoring in her latest round of drumbeat repetition of already answered talking points. kairosfocus
F/N: Let me cite from NFL, pp. 144 and 148, as well as Dembski's 2005 paper, attention also being drawn to the remarks in 35 above on why the CSI metric in the log reduced form is sufficiently well warranted empirically and analytically to be used with reasonable confidence: ____________________ CSI: >> p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. My aim throughout this book is to dispel that myth . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology. I submit that what they have in mind is specified complexity [cf. here], or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . . Biological specification always refers to function . . . In virtue of their function [a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .” >> Notice, the CSI concept is primarily a description of an empirical reality, commonplace in the technological world, and as highlighted for the biological world by Orgel and Wicken et al, from the 1970's. Dembski's contribution is to have constructed a model and metric for when something is sufficiently complex AND specified jointly that it is reasonable to infer to intelligence as the only empirically and analytically credible source. In effect, he did so by applying the principle of the search challenge of finding a needle in a haystack, a metaphor that he in fact used. If there is sufficient haystack, the search resources accessible to us in the solar system or the cosmos as a whole will be grossly inadequate, and we would be entitled to infer to the only routinely observed cause of such CSI, on empirically anchored best explanation. Namely, design. EZ's appeal to statistical miracles, above, shows just how fundamentally sound the approach is, despite the drumbeat talking point about "not rigorous." (Which is turning into a code word for: to stick with my ideological commitment to a priori materialism, or as a fellow traveller of those who do so, I will exert selective hyperskepticism.) DEFINITION: >> p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the field of possibilities] subsumes E [[ effectively the observed event from that field], T is detachable from E, and T measures at least 500 bits of information . . . ” >> 2005 Eqn (excerpted from cite in 26 - 27, Weak Argument Correctives, top right of this and every UD page for years): >> pp. 17 – 24, he argues: define ϕS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . .
where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 logarithm of the conditional probability P(T|H) multiplied by the number of similar cases ϕS(T) and also by the maximum number of binary search-events in our observed universe 10^120] χ = – log2[10^120 ·ϕS(T)·P(T|H)]. >> WAC 27 goes on to say: >> Debates over Dembski’s models and metrics notwithstanding, the basic point of a specification is that it stipulates a relatively small target zone in so large a configuration space that the reasonably available search resources — on the assumption of a chance-based information-generating process — will have extremely low odds of hitting the target. So low, that random information generation becomes an inferior and empirically unreasonable explanation relative to the well-known, empirically observed source of CSI: design. >> I add as well the log reduction from the OP above, that turns this into the directly applicable expression: >> χ = – log2[10^120 ·ϕS(T)·P(T|H)] . . . eqn n1 How about this (we are now embarking on an exercise in “open notebook” science): 1 –> 10^120 ~ 2^398 2 –> Following Hartley, we can define Information on a probability metric: I = – log(p) . . . eqn n2 3 –> So, we can re-present the Chi-metric: Chi = – log2(2^398 * D2 * p) . . . eqn n3 Chi = Ip – (398 + K2) . . . eqn n4 4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities. 5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits. >> Applying this, we see that in the case of some of Durston et al's 35 protein families: >> Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits: RecA: 242 AA, 832 fits, Chi: 332 bits beyond SecY: 342 AA, 688 fits, Chi: 188 bits beyond Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond . . . results n7 The two metrics are clearly consistent, and Corona S2 would also pass the Chi_1000 metric's far more stringent threshold right off as a single protein. (Think about the cumulative fits metric for the proteins for a cell . . . ) In short one may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol so we can see how the redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage unit bits [= no. of AA's * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.] >> From Taub and Schilling, Principles of Communication Systems, 2nd edn (McGraw Hill, 1986), p. 512, Sect. 13.2 [cf as well my notes here, building on F R Connor] we can see the underlying definition and quantification of information -- which Schneider tried to "correct" Dembski for using, by substituting the far rarer synonym, "surprisal": >> Let us consider a communication system in which the allowable messages are m1, m2, . . ., with [observed, e.g. by studying the proportion of letters in typical text, as printers were long aware of] probabilities of occurrence p1, p2, . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [My nb: i.e. the a posteriori probability in my online discussion is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by Ik = (def) log2 (1/pk) (13.2-1) [i.e. Ik = - log2 pk] >> ______________________ That, folks, is what is being dodged and obfuscated behind a smoke cloud of selectively hyperskeptical objections, as I showed in 34 - 35 above. I again suggest the interested onlooker look at my discussion here. GEM of TKI kairosfocus
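[Editor's note: the log-reduced metric quoted above comes down to simple arithmetic. A minimal Python sketch reproducing the results-n7 figures by slotting Durston's fits into Chi_500 = Ip - 500:]

```python
THRESHOLD = 500  # bits; the solar-system search-resource threshold

durston_fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}  # from Table 1

for name, ip in durston_fits.items():
    chi = ip - THRESHOLD                      # Chi_500 = Ip - 500
    verdict = "beyond" if chi > 0 else "below"
    print(f"{name}: Chi_500 = {chi} bits {verdict} the threshold")
# RecA: 332, SecY: 188, Corona S2: 785 -- matching results n7 above.
```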
Mr Mark F: I see that you have now managed to make a comment in a thread that I have posted, but have also managed to ignore the substantial matter. I hope that you have by now managed to clean up the behaviour at your own blog, where there was violation of my privacy, in what looks very much like a case of attempted "outing" to harm. Kindly, inform me of such corrective measures as you have taken. When it comes to the substantial matter you tried to address above by cherry picking -- a form of strawman, onlookers; MF being a trained philosopher -- something Joseph said while ignoring my elaboration on it, I note that you have again used the tactic of uncivilly ignoring what I have had to say, in the interests of making debate points off what Joseph had said in brief. This is utterly revealing of the true want of merit in your claims. I therefore suggest that if you intend to continue commenting in this thread, kindly address the specific matters in the original post (including the post to which this is a footnote), and the relevant matters in no 71 above, which you have artfully dodged. Good day, sir. GEM of TKI kairosfocus
Heinrich: Kindly, start with the definitions cited in NFL, pp. 144 and 148, as were already quoted in the post to which this is a footnote. Kindly, explain -- specifically -- just what part of the definition there is inadequate: an observed event E within an independently identifiable limited island of function or specificity [zone T] coming from a space of 10^150 or more possibilities, i.e. beyond the lifetime, quantum state search resources of our solar system of ~10^55 - 57 atoms or so: 10^102 states. Remember, the very fastest chemical reactions take up about 10^30 Planck times. (My citation and discussion here may be helpful.) You may then go to Dembski's 2005 paper on specification, which elaborates. Observe that the objection on the quantification there evaporates once we move the equation for Chi one step forward by doing a log reduction. In each case, your objections must be "rigorously defined," and must pass at least the criterion of inference to best explanation in light of empirical test. In the case of mathematical objections, show your working relative to first principles and/or established results. (In short, I am saying that your objection falls apart once you have to meet your own standard. In contrast, onlookers, I invite you to look at 34 - 35 above, as already linked but studiously ignored or brushed aside -- over at MF's blog, with ad hominem attacks, on the subject of "rigour" and scientific work of modelling and metrics.) Heinrich, constantly repeating a false talking point -- "not rigorous" -- does not make it into truth, regardless of how intensely you want to believe the falsehoods. GEM of TKI kairosfocus
EZ:
Re: But you don’t have to search the whole search space if you’re lucky. If I lose my keys I rarely have to search ALL the possible places they could be . . .
In short, you admit that you believe in a lucky noise machine/ miracle, similar to a perpetual motion machine of the second kind. This is a case of inference to the inferior explanation. You are able to find your keys because the scope of search is small enough that it is reasonable to find on trial and error. Once the scope of search becomes large enough, the island of function -- the location where you can see the keys in this case -- becomes so isolated that search becomes infeasible. For instance, if you go boating and your key drops off somewhere unknown into a lake of reasonable size, you go get a new lock, you don't try to search for it. No prizes for guessing why. We are dealing here -- for OOL and/or OO novel body plans -- with scopes of search where the resources of the whole cosmos are grossly inadequate, i.e. essentially the same grounds that are the basis for the second law of thermodynamics. I suggest that you read Abel here on the universal plausibility bound, here on the implications of thermodynamics, and here on the problem of relying on lucky noise. Also, Wiki on the infinite monkey theorem, here. (Pay particular attention to the results of actual tests cited at f in the original post above that show searches on a scope of 10^50 being feasible, which is about where Borel put the limit many years ago in a thermodynamics context.) In short, the problem is that any search of a config space of at least 500 - 1,000 bits worth of possibilities [10^150 - 10^301 possibilities] on the scope of the cosmos will round down to effectively zero search. That, BTW, is why in hardware and software design contexts, beyond a certain point you not only cannot search by exhaustion, but you cannot sample enough to make a difference if you are looking for a rare condition in the space of possibilities, so you need analytical or heuristic methods that are intelligently directed and much more likely to succeed than trial and error. The odds of these searches are so remote that they make a search that consists of marking a single atom in our cosmos for one Planck time at random across the lifespan of the cosmos, then moving about by chance to any place and time in the entire history of the cosmos and picking up just one atom -- lo and behold, the marked one at the precise 5* 10^-44 s when it is marked -- look like a sure thing by contrast. In short, to maintain belief in chance and necessity, you are appealing to a statistical miracle. Not once, but dozens and dozens of times over. ___________ So, the bottom-line is plain: when you have a routinely observed, empirically reliable explanation of the source of FSCI -- intelligence -- and you are forced to resort to appealing to many times over repeated statistical miracles to stick with the blind chance and necessity explanation, this is a strong sign that the problem is ideology, not scientific reasonableness. GEM of TKI PS: Just in case you want to raise the winnability of lotteries (as I have seen in former times), lotteries are winnable because they are DESIGNED to be winnable. That is, the scope of search is very carefully balanced indeed so the sponsors will by overwhelming probability make money, and some few lucky individuals (a proportion very carefully designed) will win enough to encourage enough people to tip in. kairosfocus
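[Editor's note: the scope-of-search arithmetic above can be checked directly. A minimal Python sketch comparing a 500-bit configuration space against Lloyd's 10^120 bound on cosmic bit operations:]

```python
from math import log10

config_space = 2 ** 500   # ~3.3e150 possibilities for 500 bits
cosmos_ops = 10 ** 120    # Lloyd's bound on bit operations in the observed cosmos

fraction = cosmos_ops / config_space  # upper bound on the searchable fraction
print(f"config space ~ 10^{log10(config_space):.1f}")      # 10^150.5
print(f"searchable fraction <= 10^{log10(fraction):.1f}")  # 10^-30.5
```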
KF: "2^50,000 possibilities is a search space of 3.16*10^15,051, far beyond the search resources of the cosmos, allowing for redundancy in the code." But you don't have to search the whole search space if you're lucky. If I lose my keys I rarely have to search ALL the possible places they could be. And I stop searching when I've found them. Even if I don't use my intelligence to help direct the search I would hardly ever have to search all possibilities. ellazimm
Joseph at #61
CSI is Shannon information, of a specified complexity, with meaning/ function. The math behind Shannon information tells us, with respect to biology, there are 2 bits of information per nucleotide as there are 4 possible nucleotides: 4 = 2^2 = 2 bits/nucleotide. For amino acids it will be 6 bits as there are 64 possible coding codons (including STOP): 64 = 2^6 = 6 bits.
Joseph (in addition to the points that Heinrich makes above)– as I am sure you know, the Shannon information in a message is relative to a probability model of the possible outcomes. For some reason you seem to be assuming a model in which all nucleotides are equally likely in all cases. But that clearly is not true of real biological events such as gene duplication. Hence the request to do the calculation for real events, i.e. you cannot just look at a gene or amino acid and work out the amount of Shannon information it contains. You need to understand the context in which that gene or amino acid was created to calculate the Shannon information. But no doubt you understand all this – what I would like to see is the calculation in a real context. markf
Joseph @61 -
CSI is Shannon information, of a specified complexity, with meaning/ function.
How do you formally define "specified complexity", and "meaning/ function"? Heinrich
Onlookers: To get an idea of what is really going on with these drumbeat-repetition, falsehood-based objections, let us clip MG at 62:
Given the desire on the part of many ID proponents for ID to be accepted as science
[NB: ID will never be accepted as science so long as the institutions of science are captive to a priori, question begging evolutionary materialism, so the issue is not acceptability but the ending of ideological captivity of science to materialism; once the materialist censorship is ended, it is at once obvious that the design inference is a scientific process]
, I would expect those proponents to be eagerly applying CSI calculations to real world artifacts
[done, MG just refuses to acknowledge this]
. Scientists, in my experience, don’t sit back and ask others to research their hypotheses.
[I must protest this ad hominem laced strawman caricature immediately: Really now, and what did Einstein do with his gravitational lens prediction of General Relativity, of 1916; i.e. I allude to the famous observations of 1919? There is a reason why there is commonly a division of labour in science between experimentalists and theoreticians. So, even if it were true -- and it is not -- that design researchers have not provided real world values of CSI on design inference principles, starting with Dembski's value for the flagellum about 10 years ago in NFL . . . so this was false from the beginning, as MG has been explicitly corrected but as usual has brushed aside and proceeds to tell the false as though it were true . . . and MG claims to be familiar with NFL!]
Now, the above is manifestly false and accusatory, even in the immediate context of this discussion here at UD. (And Joseph is fundamentally correct to raise the simple point that the known storage capacity of DNA is 2 bits per base, so a genome of 100,000 bases, even if we (overgenerously) write off half as "junk," and round down from 2 to 1 bit per base, is storing 50 k bits of functionally specific info. Known to be specific, as it is coded and/or serves regulatory functions for the code, starting with Mycoplasma. Let's go for just 1 bit per symbol (as just noted) to take into account redundancies -- cf Durston et al and you will see this is reasonable or even generous. 2^50,000 possibilities is a search space of 3.16*10^15,051, far beyond the search resources of the cosmos, allowing for redundancy in the code. The search space challenge to get to the islands of function for first life is clearly beyond the capacity of the observed cosmos. The only empirically credible, observed cause of such functionally specific complex information is design. So, we have excellent reason to infer that first cell based life is designed, and the reasoning can be extended into seeing that body plans requiring 10+ m bases of further specific info to build the plans, dozens of times over, are also going to be credibly designed. Once we remove materialistic blinkers. On reasoning very closely related to that which warrants the 2nd law of thermodynamics. In short, MG's complaints on want of rigour boil down to being equivalent to objecting to the credibility of the 2nd law of thermodynamics. She is suggesting that the natural world has in it the equivalent of a perpetual motion machine, i.e. a lucky noise machine capable of building FSCI-rich first life and body plans out of blind chance and mechanical necessity. Okay, physician, heal thyself: provide empirical demonstration of chance and necessity giving rise to the equivalent of first life. We already know that the level of FSCI involved can be built by intelligences, routinely. Look around you, onlookers: for years and years, evo mat objectors at UD have been asked to provide a clear example. The most they have been able to do is to point to something like ev, which it turns out is targeted search that works within an island of function, i.e. it is at most an illustration of intelligently designed micro evolution. (Cf Mung's dissection of ev here, as just one example in point. This is one of the points that MG is refusing to explain herself on, as linked above and we may as well give it again here.) Conclusion: MG's selectively hyperskeptical objections are plainly motivated ideologically, not scientifically. To see how this has led her to make false accusations that she should or does know are false, compare the clip in the original post above (which one reads or should read before commenting), on the Durston et al metric being fed into Chi_500:
Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond SecY: 342 AA, 688 fits, Chi: 188 bits beyond Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond . . . results n7
The two metrics are clearly consistent . . . .one may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol so we can see how the redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage unit bits [= no. of AA's * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.]
Finally, let us observe point 26 from comment 24, where -- in correcting false statements by MG et al [which I specifically drew her attention to] I clipped a previously cited excerpt from the 2007 Durston et al paper, i.e. this is a repeat correction:
Consider that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.
In short, Durston et al effectively invite slotting in their value of H in fits -- average info per symbol, recall -- as a measure of Ip. That is, as I did for the three examples: Chi_500 = Ip - 500, bits beyond the solar system threshold. So, again, MG is repeating a falsehood, in hopes of winning debate points by drumbeat repetition. ________________ This is willful falsehood. Inexcusable. See why I have lost patience with the sort of bland, drumbeat declarations of falsehoods that MG et al know or should know are false? GEM of TKI kairosfocus
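[Editor's note: the 2^50,000 figure in the comment above checks out. A minimal Python sketch using logarithms, since the number itself is far too large for floating point:]

```python
from math import log10

bits = 50_000
digits = bits * log10(2)             # log10(2^50000) ~ 15051.5
exponent = int(digits)
mantissa = 10 ** (digits - exponent)
print(f"2^{bits} ~ {mantissa:.2f}e{exponent}")  # 2^50000 ~ 3.16e15051
```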
ellazimm:
MathGrrl wants to be sure she uses a definition that is one you agree to and consistent with Dr Dembski’s definition. AND she’s asking to see that definition in action. I don’t understand why it’s so hard to give her what she wants. Or why you are casting so many aspersions on her motivations. It’s your chance to make a point. Take it!!
I have made my point. It is all a waste of time now. As I said, until MathGrrl can provide something from evolutionary biology with mathematical rigor, she will always just move the goal posts and be unsatisfied. Joseph
MathGrrl:
Your brief description is not a rigorous mathematical definition of CSI and it is not aligned with Dembski’s description.
Yes, it is aligned with Dembski's description and I have explained the mathematical rigor.
You have also not shown how to calculate CSI objectively for any of the scenarios I’ve described.
I say that I have.
Thus far, no one has shown how to calculate CSI for any of the scenarios I described, nor has anyone provided a rigorous mathematical definition that is consistent with Dembski’s description.
I strongly disagree.
Your comment on funding strikes me as rather odd for at least two reasons.
Your requests strike me as rather odd.
First, many ID proponents make strong claims about CSI being a clear indicator of the involvement of intelligent agency.
And we have explained why that is- cause and effect relationships.
I would expect someone making such claims to have already performed calculations similar to those I am requesting.
And I say it has been done, just not for your silly examples.
Second, CSI is quite possibly the most clearly testable concept associated with ID.
I don't agree. Joseph
Dr Bot: The way to do that is here, following on from Fig I.2 (cf. also I.1 and I.3) -- and has been for many months, i.e. reduce to a net list on the nodes-arcs wireframe, with some exploration of the noise to lose portraiture fidelity. The detailed exercise itself would be non-trivial, but we already know -- from 3-d computer animation -- that we are well beyond the 500 or 1,000 bit FSCI thresholds, and the result would affirm what we already directly know. GEM of TKI kairosfocus
Just curious, can anyone give me a figure (and the math behind it) for the CSI in the Mount Rushmore National Memorial? DrBot
Joseph: Please watch your tone. GEM of TKI kairosfocus
PS: In addition, you are now associated with a site that has entertained abusive and uncivil behaviour, such as attempted outing. I have notified that site that I will henceforth respond only for the record. The bottom line remains that for weeks you have made no serious attempt to address cogent corrections, nor have you explained yourself in light of points of corrective concern highlighted in summary here. Instead you have tried to spread that which is false or highly misleading, and that in the teeth of what you know or should know. Persisting in such behaviour removes you from the circle of civil discussion. kairosfocus
MG: Why do you keep on repeating already cogently and repeatedly answered objections as though they have merit? Do you not see that you are simply showing a textbook example of hyperskeptical, closed-minded objectionism that does not even care to respond to substantial corrections, and brazenly asserts falsehoods [e.g. your four cases were answered adequately many times, and in particular the first one, which you repeated above, has been answered yet again and again: a duplicate is a copy, i.e. there is no fresh FSCI involved, but the process of duplication, once the duplicate is beyond the reasonable threshold, implies duplicating machinery and algorithms, implying a lot of FSCI beyond the threshold. And the direct answer is therefore ZERO, as you received immediately, weeks ago. You seem incapable or unwilling to understand that the question is misconceived and demands an answer based on the logic involved.]?
Thus far, no one has shown how to calculate CSI for any of the scenarios I described, nor has anyone provided a rigorous mathematical definition that is consistent with Dembski’s description
This is now willfully false in the teeth of what you know or should know, as has already been addressed above, and in previous threads over the course of two to three months now. All you seem able to do is repeat what you know or should know is false, or distorted, and which is intended to be damaging. Again and again, on being corrected and provided with details, you have simply ignored mere facts and cogent arguments relative to facts, and resorted to drumbeat repetition of selectively hyperskeptical, false, strawman-tactic, and accusatory talking points. SHAME ON YOU! You again have several points of explaining to do, on matters outlined and linked just above. These include questions of definition, rigour [cf here above in this thread on that specific point], and the cases you have put up. At no point have you shown the slightest responsiveness on the merits. The conclusion is sadly plain, as your misbehaviour is speaking for itself. Please, do better than this. Good day. GEM of TKI kairosfocus
Joseph, MathGrrl wants to be sure she uses a definition that is one you agree to and consistent with Dr Dembski's definition. AND she's asking to see that definition in action. I don't understand why it's so hard to give her what she wants. Or why you are casting so many aspersions on her motivations. It's your chance to make a point. Take it!! ellazimm
Joseph, Your brief description is not a rigorous mathematical definition of CSI and it is not aligned with Dembski's description. You have also not shown how to calculate CSI objectively for any of the scenarios I've described.
That said if you want your little project completed, ie your guest post, then I suggest you fund it or do it yourself with all the information you have been given.
Thus far, no one has shown how to calculate CSI for any of the scenarios I described, nor has anyone provided a rigorous mathematical definition that is consistent with Dembski's description. I'm still very interested in performing some tests of my own once an ID proponent has explained how to actually calculate CSI objectively. Your comment on funding strikes me as rather odd for at least two reasons. First, many ID proponents make strong claims about CSI being a clear indicator of the involvement of intelligent agency. I would expect someone making such claims to have already performed calculations similar to those I am requesting. Without doing so, such claims are baseless. Second, CSI is quite possibly the most clearly testable concept associated with ID. Given the desire on the part of many ID proponents for ID to be accepted as science, I would expect those proponents to be eagerly applying CSI calculations to real world artifacts. Scientists, in my experience, don't sit back and ask others to research their hypotheses. MathGrrl
MathGrrl:
Please note that the rigorous mathematical definition of CSI as described by Dembski is an essential prerequisite to any calculations. Without that definition, the calculations are without context and therefore meaningless.
Your entire position on CSI is meaningless as you have been provided a mathematically rigorous definition of CSI- more mathematically rigorous than anything the theory of evolution has to offer- and you just handwave it away. So here it is AGAIN: CSI is Shannon information, of a specified complexity, with meaning/ function. The math behind Shannon information tells us, with respect to biology, there are 2 bits of information per nucleotide as there are 4 possible nucleotides: 4 = 2^2 = 2 bits/nucleotide. For amino acids it will be 6 bits as there are 64 possible coding codons (including STOP): 64 = 2^6 = 6 bits. Do you understand that bit of math, MathGrrl? Are we OK with the fact that Shannon information has been duly defined with mathematical rigor? For the complexity part Dembski provided the math in "No Free Lunch". And for specification you need to determine the variation tolerance- and the math for that also exists and has been presented to you. That said, if you want your little project completed, i.e. your guest post, then I suggest you fund it or do it yourself with all the information you have been given. And if you cannot, then change your moniker because you give math a bad name.
Based on my experience with working scientists, I would expect proponents of a non-mainstream hypothesis such as ID to welcome the attention and participation of such people.
Based on my experience, dealing with them is as much a waste of time as dealing with you, and the "mainstream" scientists should focus on their own position, as it is in need of a good colonic. Joseph
Joseph,
You have also generated a large amount of text without directly addressing the issue at hand, namely whether or not you can provide a mathematically rigorous definition of CSI (as described in Dembski’s papers and books) and detailed examples of how to calculate it.
I provided that for you- complete with examples.
If that is, in fact, the case, please repeat it here, as succinctly as possible. I do not recall you doing so over the course of our interactions on this topic. Please note that the rigorous mathematical definition of CSI as described by Dembski is an essential prerequisite to any calculations. Without that definition, the calculations are without context and therefore meaningless. MathGrrl
kairosfocus, My original requests to you, in comment 16 of this thread, were to provide a mathematically rigorous definition of CSI as described by Dembski and to show how to calculate CSI so defined for the first of the scenarios I proposed in my guest post:
A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is "Produces at least X amount of protein Y."
Despite posting literally thousands of words since I made those requests, you have thus far not responded to them. The closest I have seen you come is this statement:
CSI, I have explicitly said, many times, is a descriptive concept that describes an observed fact
in comment 23. By this, are you asserting that it is not possible to provide a mathematically rigorous definition of CSI, even in principle? If your answer is yes, I think you have a disagreement with some of your fellow ID proponents. If your answer is no, could you please simply state the mathematically rigorous definition of CSI, as described by Dembski, in a single, stand alone comment, without myriad tangential points, postscripts, and footnotes? It would go a long way to clarifying your position. MathGrrl
Mung,
By the way, as noted by Toronto on Mark Frank’s blog, a number of the participants there are not allowed to post comments here at UD. In the spirit of open discussion, I hope you will respond there.
You can’t carry your own water here?
I'm more than happy to continue to participate in the discussion here. I'm simply pointing out that there are others who would like to participate but are prevented from doing so.
Like having more monkeys typing on keyboards is somehow going to help you make your case?
Is that another example of the civility expected here at UD? Many of the people discussing this topic at Mark Frank's blog are doing so calmly, rationally, and civilly. They have expertise in a broad range of different scientific disciplines. Based on my experience with working scientists, I would expect proponents of a non-mainstream hypothesis such as ID to welcome the attention and participation of such people. MathGrrl
Mung,
My belief is that MathGrrl felt she was losing the argument here and therefore ran away to some place where she hoped to get some help.
Your belief is unsubstantiated. I have had a ridiculously busy week at work and am just now finding the time to return to the discussion. I apologize to the other participants on this thread for my temporary disappearance; I'm sure you've all had similar real world demands on your time.
She was allowed to guest post here; she owes us the courtesy of remaining here to carry on her argument (if she has one).
I quite agree, although I don't see this discussion as an argument. From my guest post onward I have simply been requesting clarification of the mathematics behind Dembski's CSI metric. I still have yet to receive such clarification. I do take exception to one of your previous comments, apropos of that:
MathGrrl, you’re not interested in moving the debate along. You have nothing to offer beyond repeating ad nauseam the same two demands.
When I get an answer to my requests (not demands) for sufficient information to be able to calculate Dembski's CSI objectively, I fully intend to run some tests of my own. Until someone here can provide that information, it isn't unreasonable to continue to request it. Rather than criticize me for doing so, perhaps you could help move the conversation forward by providing a mathematically rigorous definition of CSI, as described by Dembski, and demonstrate in detail how to calculate it for the four scenarios described in my guest thread? MathGrrl
Still no sign of MathGrrl. Mung
F/N: I have again had to respond correctively for the record at MF's blog, not least to attempted "outing" behaviour. kairosfocus
PS: Khinchin seems to be a Russian mathematician who wrote on these topics 60 years ago, originally in Russian. So, his work falls under the unfortunate cold war era split in science. kairosfocus
Mung: MG, unfortunately, has only been showing up every so often to toss back in her repeated objections, regardless of corrections that have been worked through over and over. For weeks now. Over at MF's blog, she seems to have found an echo chamber. Somehow, it has not dawned on her that the CSI-FSCO/I concepts are plainly meaningful as descriptions of observed reality, and that the mathematical models and metrics developed in recent years are also reasonable relative to what they set out to be. Nor, that they are in fact effective, as we can again see above. All that is needed for them to be dismissed is that their results cut across the expectations of evolutionary materialism. Sad. When it comes to the intersection of information and the four Aristotelian causes, the classic view is a good place to begin. In a handy summary that we can start from:
1] Material cause: "that from which, [as a constituent] present in it, a thing comes to be ... e.g., the bronze and silver, and their genera, are causes of the statue and the bowl." [NB: S. Marc Cohen warns that the underlying word, aition, is being used ambiguously, according to Aristotle's own warning; so he seeks to define each sense more accurately. Here: x is what y is [made] out of. E.g. The table is made of wood.]

2] Formal cause: "the form, i.e., the pattern ... the form is the account of the essence ... and the parts of the account." [sense: x is what it is to be y. E.g. Having four legs and a flat top makes this (count as) a table.]

3] Efficient cause: "the source of the primary principle of change or stability," e.g., the man who gives advice, the father (of the child). "The producer is a cause of the product, and the initiator of the change is a cause of what is changed." [sense: x is what produces y. E.g. A carpenter makes a table.]

4] Final cause: "something's end (telos) -- i.e., what it is for -- is its cause, as health is [the cause] of walking." [sense: x is what y is for. E.g. Having a surface suitable for eating or writing makes this (work as) a table.]
Cohen then sets the matter in the context of our own issues in our day:
Matter and form are two of the four causes, or explanatory factors. They are used to analyze the world statically - they tell us how it is at a given moment. But they do not tell us how it came to be that way. For that we need to look at things dynamically - we need to look at causes that explain why matter has come to be formed in the way that it has. Change consists in matter taking on (or losing) form. Efficient and final causes are used to explain why change occurs . . . . This seems like a plausible doctrine about artifacts : they can be explained both statically (what they are, and what they’re made of) and dynamically (how they came to be, and what they are for) . . . . But what about natural objects? Aristotle (notoriously) held that the four causes could be found in nature, as well. That is, that there is a final cause of a tree, just as there is a final cause of a table. Here he is commonly thought to have made a huge mistake. How can there be final causes in nature, when final causes are purposes, what a thing is for? In the case of an artifact, the final cause is the end or goal that the artisan had in mind in making the thing. But what is the final cause of a dog, or a horse, or an oak tree? . . . . The final cause of a natural object - a plant or an animal - is not a purpose, plan, or “intention.” Rather, it is whatever lies at the end of the regular series of developmental changes that typical specimens of a given species undergo. The final cause need not be a purpose that someone has in mind. I.e., where F is a biological kind: the telos of an F is what embryonic, immature, or developing Fs are all tending to grow into. The telos of a developing tiger is to be a tiger. Aristotle opposes final causes in nature to chance or randomness. So the fact that there is regularity in nature - as Aristotle says, things in nature happen “always or for the most part” - suggests to him that biological individuals run true to form. So this end, which developing individuals regularly achieve, is what they are “aiming at.” Thus, for a natural object, the final cause is typically identified with the formal cause. The final cause of a developing plant or animal is the form it will ultimately achieve, the form into which it grows and develops. References: Physics 198a25, 199a31, De Anima 415b10, Generation of Animals 715a4ff. This helps to explain why “form, mover, and telos often coincide,” as Aristotle says (198a25). I.e., why one and the same thing can serve as three of the causes - formal, efficient, and final . . . . So the final cause of a natural substance is its form. But what is the form of such a substance like? Is form merely shape, as the word suggests? No. For natural objects - living things - form is more complex. It has to do with function. We can approach this point by beginning with the case of bodily organs. For example, the final cause of an eye is its function, namely, sight. That is what an eye is for. And this function, according to Aristotle, is part of the formal cause of the thing, as well. Its function tells us what it is. What it is to be an eye is to be an organ of sight. To say what a bodily organ is is to say what it does - what function it performs. And the function will be one which serves the purpose of preserving the organism or enabling it to survive and flourish in its environment.
The trick in all this is of course the subtle impact of Darwinian evolutionary materialism as a controlling perspective in our day. That is what so often leads us to miss the fact that the form that an embryonic organism of type X takes is empirically known to be based on development-regulating programs and "circuits" encoded in its DNA, and on the process of response to its stage in life and surroundings. The form that that process targets is in-built, in effect a guiding program: information and linked effecting nanomachinery. In short, a tiger takes that form because of a program built into its zygote, and because of associated effecting machinery. Thus, formal cause is tied to information, and to an in-built information processing system. One that is highly specific, and well beyond the complexity threshold where the empirically warranted best explanation is design. That is, we are back at the point of Paley's stumbled-upon self-replicating, AND time-keeping watch, as he discussed in Ch II of his Nat Theol -- which hardly ever comes up in the usual dismissive critiques. Let's hear him, in his own voice, beyond the convenient strawman we so often see set up and knocked over:

______________

>> Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself -- the thing is conceivable; that it contained within it a mechanism, a system of parts -- a mold, for instance, or a complex adjustment of lathes, baffles, and other tools -- evidently and separately calculated for this purpose . . . . The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done -- for referring the construction of the watch to design and to supreme art . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch which was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair -- the author of its contrivance, the cause of the relation of its parts to their use. [[Emphases added. (Note: It is easy to rhetorically dismiss this argument because of the context: a work of natural theology. But, since (i) valid science can be -- and has been -- done by theologians; since (ii) the greatest of all modern scientific books (Newton's Principia) contains the General Scholium, which is an essay in just such natural theology; and since (iii) an argument's weight depends on its merits, we should not yield to such "label and dismiss" tactics. It is also worth noting Newton's remark that "thus much concerning God; to discourse of whom from the appearances of things, does certainly belong to Natural Philosophy [[i.e. what we now call "science"].")] >>

_______________

Paley, of course, wrote a generation before the computer was conceived by Babbage, and nearly 150 years before the first truly successful ones were built. But the point still stands.
This challenges our tendency to refuse to see that functional organisation on a Wicken wiring diagram -- especially in the context of things that serve a function AND replicate themselves -- is telling us that natural entities, too, can be artifacts. All the more reason to recognise that the stubborn refusal to think outside the materialistic box is an ideological captivity, not a sound framework for science. GEM of TKI

PS: No, I have never heard of this author. There are many authors, and there is much playing around with the basic ideas; e.g. I gather there are 3 - 4 dozen variants on the entropy concept and related models and metrics alone. Before we wander off into the tangled bushes and vines of current speculative research, it would be wise to ground ourselves in the established, well-tested frame of thought that is used in building engines, and in building telephone networks and the Internet. kairosfocus
Well, that's a lot to chew on. But I'll save it to disk and work my way through. I haven't spent much time on the IOSE site, so I'll have to take a closer look. Still waiting for MathGrrl to show back up and demonstrate true sincerity. (Like that's ever going to happen.) In addition to the interests I stated above, I'd also like to explore the relationship between information and A-T formal causes. This is, I think, an interesting philosophical question, because of what the implications might be for information in the universe apart from biological organisms. I think I have one chapter to go in Information and the Nature of Reality, then it's on to Information and Living Systems. kairosfocus, Have you heard of A.I. Khinchin? One of my recent book acquisitions is his Mathematical Foundations of Information Theory. I think it contains two of his papers, the first of which is titled "The Entropy Concept in Probability Theory." Mung
6: What is information? The UD glossary, clipping and re-organising Wiki, defines: ". . . that which would be communicated by a message if it were sent from a sender to a receiver capable of understanding the message . . . . In terms of data, it can be defined as a collection of facts [i.e. as represented or sensed in some format] from which conclusions may be drawn [and on which decisions and actions may be taken]."

7: How can information be measured? By reducing it to symbols [if it is not already in that form] and observing the statistics, so that we see the argument in Taub and Schilling -- and BTW, are you repeating for emphasis, or is the T & S summary unclear:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2 [generally, detected through statistical studies of messages, e.g. E is about 1/8 of typical English text], . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [My nb: i.e. the a posteriori probability in my online discussion is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by Ik = (def) log2 1/pk (13.2-1) [i.e. Ik = - log2 pk, in bits]
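By way of a quick numerical check on that definition, here is a minimal Python sketch (the code and worked numbers are illustrative only; the 1/8 frequency for E is the figure quoted above):

import math

def info_bits(p):
    # Ik = -log2(pk): information conveyed by a correctly received message of probability pk
    return -math.log2(p)

print(info_bits(1/8))  # an 'E' in typical English text: 3.0 bits
print(info_bits(1/2))  # a fair-coin outcome: exactly 1 bit

Note how the rarer the message, the more bits it carries: that is the surprisal idea used throughout this discussion.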
8: What is the relationship between information and Intelligent Design theory? Right from the beginning in the 70's, we see Orgel and Wicken observing, in the context of OOL and OO body plans:

___________

Orgel, 1973: >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] >>

Wicken, 1979: >> 'Organized' systems are to be carefully distinguished from 'ordered' systems. Neither kind of system is 'random,' but whereas ordered systems are generated according to simple algorithms [[i.e. "simple" force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external 'wiring diagram' with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic 'order.' [["The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion," Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. >>

____________

Design theory in effect draws from this challenge the conclusion that the central question is the causal origin of Wicken wiring diagram [which starts with s-t-r-i-n-g-s] functionally specific complex organisation/info, or more broadly of specified complexity similar to what is manifested in life forms. Empirically and analytically, the best explanation is intelligent design, as the FSCI in this post demonstrates.

9: What is entropy? A measure of microscopic freedom of state, which constrains how much work can be harvested from a hot body, and which is equivalently a measure of what we do not know about the specific arrangement of particles, momentum and energy at micro level when what we know is the lab-level aggregate variables such as P, V, T, mass etc. Cf my B/N APP 1, from the beginning. Notice what happens with the marbles and pistons model once energy is pushed into it.

10: How can entropy be measured? Macroscopically, from the view of work and heat flows, ds >/= d'Q/T: the increment in entropy is at least equal to the increment in heat flow divided by the absolute temperature of the relevant body. From this, and considering the exchange of heat between a hot and a cold body within an isolated system, the second law drops out as a direct consequence. It is also implied that when a body has energy pushed into it, it tends to INCREASE its entropy, though of course if there is a coupling mechanism, some of the injected energy can be turned into work -- work being orderly or organised motion imposed by forces. At micro level, this is linked to the number of ways energy and mass etc. may be specifically distributed consistent with a given set of macro-level conditions: s = k log w

11: Why does entropy change in only one direction? This is loose terminology.
In an exchange, entropy can increase or decrease for components, but at macro level the overall entropy of an isolated system will rise, as already discussed. At micro level, the issue is that spontaneous change will move to the more probable cluster of microstates associated with a given macro-level state. As already discussed.

12: What is the relationship between entropy and information? Cf Jaynes as already cited, and previous posts in this thread and elsewhere.

13: What is disorder? In this context, a simpler term for the sort of most likely, i.e. equilibrium, state at micro level. It turns out that the spontaneously most likely states are the ones where things at this level are most obviously "random," and that the states that are functional in interesting ways are strongly constrained and isolated in the space of possible configs. They come from MG's un-favourite: ISLANDS OF FUNCTION.

14: How can disorder be measured? By moving to the more formal and defined term, entropy.

15: What is the relationship between disorder and entropy? Disorder is in effect a loose term for a high entropy condition, familiar from what happens when, say, a tornado passes through town. The functional configs of the structures are spontaneously rare; they were set up by design, through doing organised work. Along comes a spontaneous force of nature, and it moves towards equilibrium. A very expensive mess occurs. And, if the tornado passes through Seattle, it is utterly unlikely to assemble a flyable 747 by chance from the parts in the local junkyards. Sir Fred Hoyle was dead right on that. Of course, our busy little evo mat rhetors have tried to turn this into a "fallacy" -- one they love to brush aside. But in fact Sir Fred was a Nobel-equivalent prize winning astrophysicist, who knew his thermodynamics. So, one should take pause before dismissing what he has to say. In truth, the same tornado would be utterly unlikely to assemble an instrument on the 747's dashboard by chance, and for the same reason. Just as, if it were to hit the printer's shop in town, it would be utterly unlikely to print the manual for the 747 by splashing ink across paper. But, for the same reason, it would be utterly unlikely to print just one page from the manual either, by the same means. Sir Fred used an extreme case to make his point, but they strawmannised his point instead of facing the core issue: once you go past about 143 ASCII characters worth of info, the resources of the observed cosmos are utterly inadequate to explain FSCO/I. And 1,000 bits or 125 bytes worth of info are utterly trivial if you are going to be doing something practical like write a control program. The only known, routinely observed and adequate cause of such FSCO/I is intelligence. So, on inference to best, empirically grounded and analytically grounded explanation, FSCO/I is a reliable sign of intelligence. We must not allow ourselves to be distracted by red herrings and strawman caricatures.

_______________

Okay, I hope this is helpful. But now it is your turn, to interact with the above and take it forward step by step, to see if you can make sense of it and use it in thinking and acting. As for the IOSE, it is a draft form for a community-level course; DV, in good time the course manual will be published as a print companion to the online info. GEM of TKI kairosfocus
Mung: I will clip and answer from further comments. But, I am beginning to realise that part of what is happening is that there is an implicit context-and-perspectives issue that creates gaps. Brillouin and Jaynes are physicists, and there is a lot of implicit background on how statistical thermodynamics analyses work. For instance, we are looking at a body, from the lab-level macroscopic scale, where there are clusters of aggregate variables that specify the state of an object at that level. Consistent with that are a great many microscopic, moment-to-moment arrangements of microscopic elements [masses -- particles] and the ways energy [and recall, a moving mass carries energy by virtue of that, kinetic energy], momentum [a measure of motion, the rate of change of which is force, and which -- per Newton's 3rd law on how bodies interact with equal, oppositely directed forces -- is a conserved quantity], etc. are arranged. Knowledge of the lab-level observables is in the context of ignorance of the micro-particles, save for certain distributions identifiable under certain conditions. In living systems, however, we move down a level, as the functionality -- a macro-observable [a mere optical microscope here does not peer down to the level of what is happening with 10^20+ atoms, from moment to moment] -- constrains sharply the state of molecules consistent with that function. And, we are looking in the first instance at wanting to get to that state from the start-point of organic monomers in some warm little pond with clay beds and electrification or the equivalent. (Remember, the main molecules of life are polymers.) It might help to look at my micro-jets thought exercise here. Then, to move up to the complex multicellular body plan organisms, we need to move from general purpose cells to co-ordinated networks forming integrated tissues, organs, systems and organisms, beginning with unfolding from the zygote after fertilisation. That requires not only protein coding but regulatory networks and trigger factors governing expression and development. Credibly this last is 10 - 100 mn + bases, dozens of times over within our solar system, in a context where 1,000 bases is beyond the reasonable search capacity of the observed cosmos. With that backdrop (and I suspect more is needed that I am not spotting yet) in mind:

1: Do you mean the higher the entropy in some absolute sense, or the higher the entropy with respect to something else, for example the number of possible macrostates [microstates compatible with a macrostate]?

At this stat thermodynamic level, entropy basically traces to the absolute value identified by Boltzmann (and Gibbs . . . the other pioneer): s = k log w. The entropy metric is a measure of the number of ways of micro-level arrangement consistent with a macro-level state, log transformed.

2: Does that question even make sense?

Yes, and I have emphasised the micro-macro state description distinction. As does Jaynes, as does Brillouin, and so on. It is in that context of micro-level freedom to vary that we are ignorant or uncertain of the specific microstate at given moments, and are constrained to speak in terms of functions of distributions of states, e.g. the Partition Function, Z. (Z is the holy grail of stat thermodynamics analysis of a given system. Once you have it, you can tie micro to macro much more specifically.) Hence, Jaynes' remark cited in the OP, point q:
“. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its [lab level] thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.”
3: I'm trying to think of a simple way to work out examples. Coins, dice, whatever.

Start with the example in a book you cannot get [long out of access, it is from the USSR], as I cite in my B/N, app I point 4. Let's first try to make a sort of diagram from letters, using p for white to make an even diagram, as we cannot access Courier:

=================
|| p p p p p p p p p p ||
----------------------------------
|| b b b b b b b b b b ||
=================

That is, we here have 10 W (shown as p) on top of 10 B:
As we consider a simple model of diffusion, let us think of ten white and ten black balls in two rows in a container. There is of course but one way in which there are ten whites in the top row; the balls of any one colour being for our purposes identical. But on shuffling, there are 63,504 ways to arrange five each of black and white balls in the two rows, and 6-4 distributions may occur in two ways, each with 44,100 alternatives. So, if we for the moment see the set of balls as circulating among the various different possible arrangements at random, and spending about the same time in each possible state on average, the time the system spends in any given state will be proportionate to the relative number of ways that state may be achieved. [hence, thermodynamic probability and the unobservability of clusters of possible states that are sufficiently rare relative to states of overwhelming statistical weight] Immediately, we see that the system will gravitate towards the cluster of more evenly distributed states. [this is the equilibrium macrostate] In short, we have just seen that there is a natural trend of change at random, towards the more thermodynamically probable macrostates, i.e. the ones with higher statistical weights. So "[b]y comparing the [thermodynamic] probabilities of two states of a thermodynamic system, we can establish at once the direction of the process that is [spontaneously] feasible in the given system. It will correspond to a transition from a less probable to a more probable state." [p. 284.] This is in effect the statistical form of the 2nd law of thermodynamics. [if a system is free to change its microstate, energy and mass distributions will tend to the states with greater weight, i.e. to more chaos; in the living cell, there are active, programmed algorithmic constraints that can keep this at bay, for a time sufficient to live, grow, eat and reproduce, but in the end we know what wins out . . . ]
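Those statistical weights are easy to verify; a minimal Python sketch, assuming the two-rows-of-ten model just quoted:

from math import comb

# With w white balls in the top row, there are C(10, w) placements on top
# and C(10, w) placements of the remaining whites below, so the weight of
# the "w whites on top" macrostate is C(10, w)**2.
for w in range(11):
    print(w, comb(10, w) ** 2)

The run confirms the quoted figures: 1 way for all ten whites on top, 63,504 ways for the fully mixed 5-5 case, and 44,100 ways each for the 6-4 and 4-6 cases.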
Nash has a nice short book on statistical thermodynamics that begins with coins and heads/tails. The binomial distribution on 50/50 odds, once we move from one or two coins to say 1,000 coins, moves to a sharply peaked distribution, clustered on the same more or less even heads and tails we expect from the layman's "law of averages." The statistical weight of the near-equal H/T microstates so dominates that if a system is free to come out as it wills, it will overwhelmingly likely be there. You have to constrain contingency to keep it away from that, in practice. (A numerical check is sketched after this comment segment.) For clusters of dice, the best thing would be to see the sum of the values. With 1 die, we have a flat random distribution. For two, it peaks at 7. As more and more dice are brought in, the sum of the values runs to a sharp peak, and we can expect that average to dominate as the number of dice increases. It is possible to use strings of dice to make an information system, and encode information. On the assumption of such a code, to see a specific message, say algorithmic instructions and associated data structures, would be maximally unlikely on chance. The same holds for a hard disk. The magnetic particles in the disk naturally would have a random scatter, but we impose an organised, Wicken wiring diagram pattern. To see such emerge by random chance on the gamut of our observed cosmos would be a statistical miracle. DNA and the support machinery in the cell are the equivalent of that programmed hard disk. What best explains it, given the patterns we just saw: blind chance and mechanical necessity, or a designer? Put in those terms, the answer is blatantly obvious, save to the already committed. And, we must never underestimate our willingness to cling to an absurdity if that is the dominant view in power institutions. The reason it is so hard to see the obvious is that we are blinded by that power and its programming. Let us cry: stop the madness!!!!!

4: I sense he is a teacher

And curriculum developer. My problem is that the process of being brought up to speed in the relevant areas is such that it makes it hard to spot the gaps that those meeting the info for the first time are likely to have. Hence the draft status of the IOSE. The briefing note, well, that is backup for those who will challenge, incrementally built up over years. That is why it has naked mathematics in it, even at 101 level. We must recognise that one of the lessons of the exchanges over the past weeks is that even a High School level log reduction is hard for many to follow. And, if we look back, let us ask: why didn't Dr Dembski simply do the reduction in 2005? The answer is that it probably was not his focus. And we have all been looking at, and been hung up about, where the derivation comes from ever since. This is one of those odd cases where moving forward simplifies and brings out the issues more clearly. He did hint at it by speaking of how the 10^150 limit could be seen as a static limit, but since everyone was looking back, we all failed to spot the effect of moving FORWARD.

5: The more people we have spouting off about information, and entropy, and disorder, and the second law of thermodynamics without knowing what they are talking about the more likely it is that ID will come into disrepute.

Judging by what happened with ev, the same holds on the other side, save for the balance of power to dominate what people hear. We are treading into deep, shark-infested waters here.
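The 1,000-coin peaking mentioned above can be checked directly; a minimal Python sketch (the +/- 50 window is an arbitrary illustrative band):

from math import comb

n = 1000
total = 2 ** n  # about 1.07*10^301 possible sequences

# Fraction of all sequences whose head-count lies within 50 of n/2:
near_even = sum(comb(n, k) for k in range(450, 551))
print(near_even / total)  # ~0.998: the overwhelming bulk of the space

# The single all-heads sequence, by contrast, has weight 1 in 2^1000:
print(1 / total)          # ~9.3e-302

That is the statistical weight argument in miniature: free contingency overwhelmingly lands near the even cluster, and any singled-out, specified configuration is effectively unobservable on chance.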
And, as one who had to go through the painful rigours of addressing the subjects as a student, and survived, I testify that it is very hard to come back and see how to help at 101 level. One's conceptual structures change, and what now looks obvious was not at the first. So, there will be things assumed as background context that will not be obvious to those encountering the materials for the first time. Things said that are loaded with meaning will not seem to be important, and may be read from the wrong context, leading to confusion. Worse, there is a definite confusion of terminology, where related and overlapping terms, considered from various viewpoints, are being tossed around. You will see that in the IOSE, I duck away from delving into information theory. Now that I have been forced by the line the rhetorical objections -- some of which IMHO are calculated to sow confusion -- are taking, you will see that I have now put the issue right at the beginning, in the intro-summary: citing, apologising for the need to go slow and go over points several times, and explaining. That is going to make the price of admission stiffer, but given MG et al's tactics, that is what is now needed. Do I succeed, at least in part? What more is needed? [ . . . ] kairosfocus
Dr Rec: I have updated Comment 5 to respond to further aspects of your comment at 3. GEM of TKI kairosfocus
As far as I know, other available literature on information theory is either too simple or too difficult to help the diligent but inexpert reader beyond the easiest parts of this book. I might note also that some of the literature is confused and some of it is just plain wrong. By this sort of talk I may have raised wonder in the reader's mind as to whether or not information theory is really worth so much trouble, either on his part, for that matter, or on mine. I can only say that to the degree that the whole world of science and technology around us is important, information theory is important, for it is an important part of that world. To the degree to which an intelligent reader wants to know something both about that world and about information theory, it is worth his while to try to get a clear picture. Such a picture must show information theory neither as something utterly alien and unintelligible nor as something that can be epitomized in a few easy words and appreciated without effort. -- John R. Pierce, An Introduction to Information Theory: Symbols, Signals and Noise, Second, Revised Edition
MedsRex, thanks for your support. I can only hope that this will turn out to be fruitful for both of us. I am going to keep on with kairosfocus until he'll have no more of me, lol. But I sense he is a teacher, and as such will do his utmost to bear with me as long as he senses my interest is real and that the effort is there. There's no reward quite like seeing someone else "get it" as a result of your guidance. Information is now front and center in the debate over Intelligent Design, both from the work done by Dembski and also the fairly recent publication of Signature in the Cell. Yet it appears to me to also be the least understood and often misapplied aspect. The more people we have spouting off about information, and entropy, and disorder, and the second law of thermodynamics without knowing what they are talking about, the more likely it is that ID will come into disrepute. We need to understand the argument and know how to make it, and our voice should be as one. This just isn't an area of science where uncertainty (another pun!) will work in our favor. My goals are to understand the following:

What is information?
How can information be measured?
What is the relationship between information and Intelligent Design theory?
What is entropy?
How can entropy be measured?
Why does entropy change in only one direction?
What is the relationship between entropy and information?
What is disorder?
How can disorder be measured?
What is the relationship between disorder and entropy?

Hopefully, from these, I can piece together a coherent picture and argument. Mung
Mung, Excellent. It's punbelievable! And I am series that Kairos should find a publisher for the contents of his site...assignment kit and all. MedsRex
How's this for a title: Mathematics for the Functionally Illiterate Think folks would get the pun? Sorry, kf, don't mean to hijack your thread, but my expectation that we'll see MathGrrl posting in it is pretty low. If she does I'll quickly get back on topic. Here's another title: Learn About Entropy with Minimal Effort Mung
mung @44. I would gladly trade graphic design for the cover of that "for dummies"...all I would ask is a copy of said book! :) MedsRex
Your clips are a bit confusing, as they mix what I have said, what Brillouin said (notice his term “negentropy”), and what others are saying too, IIRC.
Right, sorry about that. Do those authors not present a consistent message, or have they added to the confusion, lol?
The bigger the number of ways mass and energy can be arranged compatible with a given macrostate, the higher its entropy.
Do you mean the higher the entropy in some absolute sense, or the higher the entropy with respect to something else, for example the number of possible macrostates? Does that question even make sense? I'm trying to think of a simple way to work out examples. Coins, dice, whatever. Trust me, I'm working hard on this subject. Just got in a box of books today! Four books on Information Theory, one book on entropy and the second law of thermodynamics, and one book on "The Emergence of a Scientific Culture." ok, that last one is a bit out of place :) I really do want to understand, even to the point that I can explain to others, and I seriously appreciate your time and patience. Maybe we could work out a "for dummies" version of the info on your site, lol! Mung
PS: Your clips are a bit confusing, as they mix what I have said, what Brillouin said (notice his term "negentropy"), and what others are saying too, IIRC. It is even relevant that you snipped out the point where I cited Boltzmann's expression for entropy s = k log w, w being the statistical weight of a given macrostate. The bigger the number of ways mass and energy can be arranged compatible with a given macrostate, the higher its entropy. So, for instance if a hot sub-body passes a quantum of heat to a cold one, both being in an isolated system, the loss of ways in the hotter body is overcompensated by the gain in ways by the colder one, and so the net entropy of the isolated system rises. And as noted, all this is going on without our having a practically useful access to the specific states, so we work with the aggregates. kairosfocus
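A minimal numerical sketch of that hot-to-cold accounting (Python; the temperatures and heat quantum are illustrative choices, not figures from the comment):

d_Q = 100.0     # J of heat passed, illustrative
T_hot = 400.0   # K
T_cold = 300.0  # K

dS_hot = -d_Q / T_hot    # hot body loses ways: entropy falls by 0.25 J/K
dS_cold = d_Q / T_cold   # cold body gains more ways: entropy rises by ~0.333 J/K
print(dS_hot + dS_cold)  # ~ +0.0833 J/K: net entropy of the isolated system rises

The asymmetry comes purely from T_cold being less than T_hot in the ds >/= d'Q/T relation, which is the point of the example.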
Mung: A bit tangential, and a bit repetitive of what was addressed on the weekend, but I will again respond on your points. This stuff does take time to soak in:

_____________

>> 1] Does Shannon Information = Information?

ANS: No. Shannon Info is a particular metric, average info per symbol, per the equation: H = - [SUM on i] pi log pi. The most common base metric for info is: Ik = - log pk. Other metrics can be composed for particular tasks, and of course the Dembski and Durston et al work does that. There is no one-size-fits-all metric.

2] Is Shannon Information = Shannon Entropy?

These are used almost synonymously, though in fact we should recall the point that on being informed our uncertainty/ignorance is reduced, and we are surprised to the extent that the info is improbable. The H metric averages this out on a per-symbol basis, suitable for use in, say, the carrying capacity of a telephone cable of a given bandwidth and noise profile. You want an overall view, not something that is specific to what happens with a particular message or symbol string. To begin with, H has the same mathematical form as one of the expressions for entropy in stat thermodynamics, and so the name was carried over. Subsequently, it has been shown that the two are closely related, much as Jaynes summed up (and as is cited above in the OP). This was still controversial until recently, but that is apparently now settling down. Wiki has a useful summary, which I cited in my always linked:
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics [like pressure, temp etc]. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics. [Also, another article remarks: >> in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate. >>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
3] Is Shannon Entropy a measure of Missing Information?

In the context of statistical thermodynamics, yes. As cited. That is how the mathematical identity of form was understood, as the clip shows. In the context of comms systems, it is probably best to see the entropy metric as what we don't know about the source's state until it "speaks." Then, we have been informed, and our uncertainty about its state has been reduced. But instead of getting tangled up in debates over entropy, uncertainty and information, it is best to first see H as a weighted average of the info per symbol transmitted [which is literally what the equation is saying], and as being focussed on the transmission rates of channels. If you go back to the previous thread [I forget the name just now], you will see where I clipped from, I think, section 6 of Shannon's paper, where he uses the terms in ways that promote the near-synonym usage. In practice, once you sort out the weighted average, that is your safest guide to making sense. And bear in mind the more basic definition of info; which is what Schneider failed to do -- he sought to "correct" Dembski for using that more basic definition, by saying no, it is not info, it is surprisal; but in fact the two terms are synonymous. There is a lot of overlapping of terms, and we just have to get used to it, seeking to understand how any one person is speaking in any one context. (Sort of like how the term "theory" is used by scientists, never mind the general public.)

4] Does Information = Missing Information?

In the context of the thermodynamics of bodies, that is so: the thermodynamic entropy can be seen as the missing info on microstates that locks us out of getting more than the Carnot limit of work. We have to work on the average/aggregate macro-level behaviour and properties as manifested in pressure, volume, temp etc., not the specific behaviour of the individual molecules. When the macro-state of a body is given, there are a great many possible microstates [specific distributions of masses, energy, momentum etc. -- remember we are easily dealing with 10^20 - 10^26 particles here, on human technology scales . . . if gas molecules at reasonable temps, they will be flying around at several hundred m/s] compatible with it. (Think about what a computer to access and process that much info that fast would look like.) The lack of info on specific state is a measure of the ignorance of the particular state, and it restricts how much work you can get out of a body. If you DID know more, you could get more work [like how we harvest work very efficiently from a rotating water-wheel], but as the analysis of Maxwell's Demon shows, the effort expended to get the information undoes what you would have hoped for. Ah so de cookie crumble. >>

_____________

GEM of TKI kairosfocus
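To keep the "weighted average of per-symbol info" reading of H concrete, a minimal Python sketch (the four-symbol source is purely illustrative):

import math

def surprisal(p):
    return -math.log2(p)  # Ik = -log2(pk), as above

def shannon_H(probs):
    # H = - SUM pi log2 pi: the pi-weighted average of the per-symbol surprisals
    return sum(p * surprisal(p) for p in probs if p > 0)

print(shannon_H([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
print(shannon_H([0.25] * 4))                 # 2.0: a uniform source maximises H

The skewed source averages fewer bits per symbol than the uniform one, which is exactly the channel-capacity use of H described above.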
hi kf, I have some questions about material posted on your web page. On Thermodynamics, Information and Design (Section 3)
The point is, that there are as a rule a great many ways for energy and matter to be arranged at micro level relative to a given observable macro-state. That is, there is a "loss of information" issue here on going from specific microstate to a macro-level description, with which many microstates may be equally compatible.
Information is a function of the ratio of the number of possible answers before and after, and we choose a logarithmic law in order to insure additivity of the information contained in independent situations...
We prove that information must be considered as a negative term in the entropy of a system; in short, information is negentropy.
Entropy measures the lack of information; it gives us the total amount of missing information on the ultramicroscopic structure of the system.
This point of view is defined as the negentropy principle of information...It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy.
A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability should be seen as, in part, an index of ignorance] . . . .
The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities...
Does Shannon Information = Information?
Is Shannon Information = Shannon Entropy?
Is Shannon Entropy a measure of Missing Information?
Does Information = Missing Information?

This is the point I was attempting to make the other day, perhaps poorly. Perhaps we can revisit it now. Thank you. p.s.
The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory.
This was the AHAH! moment I had the other night, BEFORE even reading this on your site. How to tie information to entropy. Mung
Mung: Repeat {"the end"} n times, is a specification. And, it is the Orgel criterion, not the Dembski. [A crystal is made up from: (i) set up unit cell, (ii) repeat in 3-D array, n times.] By contrast, something like a slab of granite -- I just pulled a piece of polished black granite sitting next to me [I use it as a hone] to look at -- is a mish-mash of different sized, differently oriented crystals of various minerals, forming a random pattern; and an organic tar is a random, highly complex blend of various polymeric molecules. (Cigarette smoke is not the only way to get a tar.) A random and complex "pattern" [like snow on a TV screen] -- there is no correlation between digits, and there is no organising principle that specifies values of digits other than that the roll of the die came up that way, or the equivalent [recall the 100-sided die used in D & D] -- meets no specification other than itself, and what is significant is that any other pattern that turned up would have sufficed. Radical, stochastically controlled contingency. Of course, having generated a given pattern, we can use it to specify, say, a code reference -- some codes use sky noise to make a one-time message pad -- or the combination for a bank vault. The otherwise meaningless phrase of symbols has now become an island of function. A singleton island, too: hit or miss. (And of course truly random numbers are so hard to remember that people tend to cheat, which is one of the tricks for breaking into systems: look for personally relevant significant information.) By contrast, the string of alphanumerical digits in this post, while contingent, is highly specific and meaningful. It is of course also highly complex. And, it is yet another test case where FSCI of known provenance is designed. One of the millions of test cases that will be created this week. GEM of TKI kairosfocus
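One crude way to see the ordered/random contrast just described is to ask how compactly each kind of string can be re-described; a sketch using Python's zlib compression as a rough stand-in for "generated by a simple algorithm" (the strings themselves are illustrative):

import random
import string
import zlib

ordered = "the end " * 100  # a unit cell repeated n times
rand = "".join(random.choice(string.ascii_lowercase + " ") for _ in range(800))

for label, s in [("ordered", ordered), ("random", rand)]:
    print(label, len(s), "->", len(zlib.compress(s.encode())), "bytes")

The periodic string collapses to a tiny description (set up the unit cell, repeat n times), while the random string stays essentially incompressible; neither, of course, is thereby functionally specific in the sense discussed here.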
She needs to take a long hard look at what she has done and set it straight.
Let's never forget her opening post:
As someone with a strong interest in computational biology, evolutionary algorithms, and genetic programming, this strikes me as the most readily testable claim made by ID proponents. For some time I’ve been trying to learn enough about CSI to be able to measure it objectively and to determine whether or not known evolutionary mechanisms are capable of generating it.
Anything we post elsewhere about this topic ought to include that quote and a statement of how empty that claim turned out to be. Mung
[Class 1:] An ordered (periodic) and therefore specified arrangement: [Class 2:] A complex (aperiodic) unspecified arrangement:
I'm not sure I agree with this, or that it meets the Dembski criteria for specificity. For example, just because something exhibits a repeating pattern, does that mean it's specified? And just because we cannot detect a pattern (other than a pattern of randomness), does that mean something is not specified? Doesn't the latter actually require the highest degree of specificity? But intuitively, I understand the point the authors are trying to make. My question is more along the lines of: if they could re-write this segment today, would they? Mung
Mung: You have raised a serious issue. The truth is that, over the run of about two months, MG has yet to provide a single substantial contribution, where talking-point objections do not count. When I have gone to the other site, I find that the same talking points, and many others that have long been cogently answered, are being circulated as though there is not a duty to be truthful and fair in reasoning. I have decided that I will address the issue here, and only notify for the record there, for those who may wander in and wonder if there is another side to the story. Unfortunately, we will be hearing the mantra that CSI is meaningless and not "rigorous" for years to come. And MG's failure to address serious matters seriously, to provide even the smallest response to the request of Dr Torley, to explain evident blunders such as confusing a log reduction with a probability calculation, her nasty, snide suggestions, and her stunt of trying to brush aside the very foundational issues that led to the CSI concept, have made her behaviour sink ever further in my estimation. She needs to take a long hard look at what she has done and set it straight. GEM of TKI kairosfocus
My belief is that MathGrrl felt she was losing the argument here and therefore ran away to some place where she hoped to get some help. She was allowed to guest post here; she owes us the courtesy of remaining here to carry on her argument (if she has one). She ought NOT be allowed to just drop in every so often and assert that her challenge hasn't been met, and oh, by the way, if anyone wants to try they can come to her chosen sanctuary and try to make their case there. That's cowardice. Mung
22 --> Ever since Hartley's suggestion [1928 IIRC], a log measure of information has been on the table, and such a log measure is the basis for the common unit, the bit, the base for logs there being 2. Specifically, we may again cite Taub and Schilling as a good short summary that gives the basic commonly used definition of information that Dembski (and Durston et al) built on:
Let us consider a communication system in which the allowable messages are m1, m2, . . ., with probabilities of occurrence p1, p2 [generally, detected through statistical studies of messages, e.g. E is about 1/8 of typical English text], . . . . Of course p1 + p2 + . . . = 1. Let the transmitter select message mk of probability pk; let us further assume that the receiver has correctly identified the message [My nb: i.e. the a posteriori probability in my online discussion is 1]. Then we shall say, by way of definition of the term information, that the system has communicated an amount of information Ik given by Ik = (def) log2 1/pk (13.2-1) [i.e. Ik = - log2 pk, in bits]
23 --> Average information per symbol (aka Shannon info, aka entropy, aka uncertainty, up to rough synonym status as commonly used in discussions) may be deduced by taking the weighted average: H = - [SUM on i] pi log pi

24 --> This is of course related to thermodynamic entropy, as Jaynes pointed out and as recent work has supported. Citing Jaynes through Robertson, as in the OP:
“. . . The entropy of a thermodynamic system is a measure of the degree of ignorance [a near-synonym for uncertainty in the sense above] of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” [NB, as Fig 3 in the OP shows, subjectivity is not the logical denial of objectivity, once the latter is understood to mean truth that we credibly discover and warrant about the world]
25 --> In this context, we can then see -- if we are willing -- that the search challenge is a legitimate way to identify and measure "complexity" and "specificity" as joint criteria, and so is a valid approach to modelling and quantifying what CSI is about.

26 --> Namely, when we measure info in bits, we see that for a string of bits b1, b2, b3 . . . each additional bit DOUBLES the number of possible configurations. One bit takes two values, 1/0. Two bits take four values: 00/01/10/11, and so on.

27 --> Such can be mapped to a configuration space [and we can measure distances in that space, following Hamming, as bit differences: the number of 1-bit flipping steps to transform one config into another].

28 --> Such is plainly informational, as the bits can be seen as the number of yes/no decisions to arrive at a specified config. The bit pattern is then descriptive of the configuration of interest, in a particular context of discussion. [And it should be evident that I am making a logical-mathematical argument in words, on glorified common-sense, intuitively logical steps, much as I used to insist that my students be able to say in words what their derivations were doing, step by step.]

29 --> Now, some configurations work in a given context and others do not. Plug in the wrong Air Flow Sensor, and the engine will not start, never mind that the part numbers are supposed to be equivalent or the same, and the connectors and form factor are the same. In short, function is an observable, comparable criterion, and it may have a discrete threshold, and/or vary beyond that in steps or along a continuum. It may also have a saturation level, a peak or plateau value. And so on.

30 --> In fact, that was one of the puzzles Einstein solved in addressing the photoelectric effect. Below a certain frequency, no matter how intense the light, no emission. Above it, no matter how weak, emissions; and then as intensity rises, the rate of emissions rises. That is where the threshold -- the work function -- entered the expression: E = h*f - w. [Using substitutes for the Greek letters in the OP.]

31 --> All of these feed into the reduced Chi metric, which I here present in the form where S is a dummy variable denoting specificity, S = 1/0, and is based on observation of the "island of function" effect in the config space: Chi_500 = Ip*S - 500, bits beyond the solar system threshold

32 --> To pass the threshold, Chi_500 must be positive: the case must be specific, and must have at least 500 bits worth of complexity.

33 --> That is, observed cases of a phenomenon, E1, E2, E3, etc. must come from an island of function (or a comparable definable and observable restriction), T, such that not just any function would do; i.e. T must be quite small relative to the space of possible configs.

34 --> Since the number of Planck-time atomic states of the atoms of our solar system since the time of its credible origin will not exceed 10^102, WE HAVE 48 ORDERS OF MAGNITUDE WORTH OF CONFIGS TO PLAY WITH.

35 --> As long as T is sufficiently small that a random walk sample or search of 1 in 10^48 [a sample that in this context is beyond the actual feasible resources of our solar system] is maximally unlikely to find it, it is unreasonable to suggest that the best explanation for cases E is chance and natural selection on spontaneous trial and error. This is what the infinite monkeys type, needle in the haystack analysis supports.

36 --> If we want to scale up the scope of search, Chi_1000 covers the resources of our observed cosmos.
The only observed cosmos we have. (Multiverse suggestions, as addressed in 14 above for Koonin, move beyond science to speculative philosophy, and simply move the issues up one level, ending back up at the same conclusion.)

37 --> Now, there is one empirically well supported and analytically credible explanation of instances of FSCI/CSI beyond the threshold: design. That is, across billions of examples [cf. the Internet, libraries and the world of technology around us], in every case where we directly know the cause, the source of such specified complexity is design.

38 --> That is, on analytical grounds and on empirical observation, it is warranted to infer that cases of CSI beyond the threshold are best explained on design.

39 --> The significance of this analysis is therefore telling when we compare MG's impatient dismissal in 15 above:
discussions of islands of functionality, the computational power of the universe, presumed failures of modern evolutionary theory, Durston’s calculations, etc. are not relevant to answering these questions. The issue is whether or not CSI is a useful metric.
40 --> MG knows, or full well should know -- having been repeatedly informed, step by step -- that it is precisely these considerations that make CSI a useful metric. So, to brush them aside so brusquely as "not relevant" is to close her mind to the material facts and steps in reasoning.

41 --> The problem, then, is not lack of "rigour" or want of adequate definition, models and metrics etc., but that MG is refusing to follow the steps that present why the Chi metric and related metrics and models are legitimate and useful. This is the fallacy of the closed mind, and it is grossly irresponsible and disrespectful; especially when one of the assertions she has needed to explain -- for weeks now -- is her unwarranted projection of dishonesty onto design thinkers.

42 --> By way of utter contrast, let us scoop from the original post the way that the Chi metric easily incorporates the Durston et al values of FSC in Fits, and yields the following results:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond . . . results n7
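(These values can be checked in a couple of lines -- a sketch, with the Fits values keyed in from Durston et al's Table 1 as quoted:)

    # Durston et al. Fits values (Table 1), dropped into the Ip slot of the
    # reduced metric with a flat 500-bit threshold: Chi = fits - 500.
    proteins = {"RecA": 832, "SecY": 688, "Corona S2": 1285}

    for name, fits in proteins.items():
        print(f"{name}: Chi = {fits - 500} bits beyond the threshold")
    # RecA: 332, SecY: 188, Corona S2: 785 -- matching results n7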
43 --> In short, the Chi metric, as applied to protein families, renders the verdict that the best explanation for the information in these proteins is design, and beyond that, that the aggregate information in such proteins is best explained by design. That is, cell based life is -- on the Chi metric -- best explained as an artifact of design.

_______________

It is clear that the Chi metric and the CSI and FSCI concepts are empirically anchored, are based on well known bodies of scientific work, and are developed through reasonable extensions of that work. They are coherent, they are based on simple but not simplistic premises, and they are accurate to the relevant facts. The resulting metric is plainly, eminently empirically testable, and on the face of it is successful on literally billions of tests, with millions more being added weekly on the Internet.

The "there is no mathematically rigorous definition" objection is selectively hyperskeptical and -- as is cited from 15 above -- plainly rests on a closed minded refusal to consider the facts, the history of ideas, and the chain of reasoning and mathematical modelling that leads to the Chi metric in its most useful form. The reason for that ideological refusal is not hard to identify: a priori commitment to evolutionary materialism and/or its stalking horse, so-called methodological naturalism. To such, Philip Johnson has issued an apt reply ever since 1997:
For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 - 25.]
GEM of TKI kairosfocus
F/N: Some remarks on measurement [to be posted here and linked at MF's blog by way of being for the corrective record], and on modelling, metrics and rigour, by way of a summary of the reasonableness of the steps taken to arrive at the sort of metric expressed as:

Chi_500 = Ip - 500, in bits beyond a threshold

The rhetorical pivot of MG's objection is that the Dembski-type metric and associated models of CSI lack "rigour." It is therefore helpful to put some issues in context:

1 --> Fundamentally, the concept of CSI is a description of an observed reality: the contrast between organised, ordered and random systems that Orgel highlighted in 1973 and that Wicken summed up in 1979 (the passage below is Wicken's, as its citation shows):
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common-place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.]
2 --> So, the key issue is how (a) complex functional organisation may be distinguished from (b) order and (c) randomness. Then, if possible, a mathematical model amenable to making measurements is to be constructed and empirically validated.

3 --> An example of distinct string data structures provided by Thaxton et al in TMLO in 1984, Ch 8, provides a convenient point of departure in this task [one used in my always linked App 3 -- i.e. it has been two clicks away all along, in an appendix titled: "On the source, coherence and significance of the concept, Complex, Specified Information (CSI)" . . . ], when they distinguish:
1. [Class 1:] An ordered (periodic) and therefore specified arrangement:
THE END THE END THE END THE END
Example: Nylon, or a crystal . . . .

2. [Class 2:] A complex (aperiodic) unspecified arrangement:
AGDCBFE GBCAFED ACEDFBG
Example: Random polymers (polypeptides).

3. [Class 3:] A complex (aperiodic) specified arrangement:
THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!
Example: DNA, protein.
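As a quick, illustrative check (a toy sketch of my own, not Thaxton et al's analysis), a first-order Shannon H calculation over these very strings shows both what simple statistics can and cannot do here:

    import math
    from collections import Counter

    def shannon_h(s: str) -> float:
        """First-order Shannon entropy in bits/symbol: H = -sum p_i log2 p_i."""
        n = len(s)
        return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

    samples = {
        "Class 1 (ordered)":   "THE END THE END THE END THE END",
        "Class 2 (random)":    "AGDCBFE GBCAFED ACEDFBG",
        "Class 3 (specified)": "THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE!",
    }
    for label, s in samples.items():
        print(f"{label}: H = {shannon_h(s):.2f} bits/symbol")

    # On these toy strings the periodic Class 1 scores lowest, but H alone
    # cannot tell Class 2 from Class 3: both are aperiodic. Specificity is
    # the further, observational criterion (meaning/function), which is why
    # S enters the Chi metric as an observed dummy variable.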
4 --> Since any describable object can be reduced to a structured pattern of strings [i.e. to a nodes, arcs and interfaces structure], an analysis in terms of strings is without loss of generality.

5 --> Now, TBO give key contrastive examples of diverse types of strings, noting how periodic ordered patterns are distinct from at-random aperiodic ones, and these are again distinct from aperiodic, organised ones that bear functionally specific information.

6 --> They give polymer examples, similar to the examples Orgel provided; and went on to a thermodynamic analysis (which Bradley later converted to informational terms, as I have discussed in my always linked, App 1; of course, from Brillouin and Jaynes on, we have had good reason to understand that there is a link from entropy to information).

7 --> So, it is reasonable to see that order, randomness and [functional] organisation may be distinguished, and to ask onward whether that distinction may be reducible to measurement values on a metric.

8 --> Already, Trevors and Abel provide a 3-dimensional model of the distinction in Fig 4 of their 2005 paper on three types of sequence complexity: OSC -- order, RSC -- randomness, FSC -- functional organisation (discussed and shown in App 3 of my always linked).

9 --> That paper invites quantification of the diagram in Fig 4, and in 2007 Durston, Chiu, Abel and Trevors provided such a metric, giving 35 values for protein families, based on Shannon's H metric as applied to null, ground and functional states of amino acid sequences. In so doing, they remark (as was excerpted previously and above):
The measured FSC for the whole protein is . . . calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information.
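(To make the per-site arithmetic concrete, here is a toy sketch in the same spirit -- my own simplification, using an invented four-sequence "alignment" rather than a real protein family, and ignoring the null/ground-state refinements of the actual method:)

    import math
    from collections import Counter

    # Per aligned site, functional info ~= log2(20) - H(Xf), summed over
    # sites, per the excerpt above. This four-sequence alignment is invented.
    alignment = ["MKVLA", "MKVIA", "MRVLA", "MKVLG"]

    def site_entropy(column: str) -> float:
        n = len(column)
        return -sum(c / n * math.log2(c / n) for c in Counter(column).values())

    fits = sum(math.log2(20) - site_entropy("".join(col))
               for col in zip(*alignment))
    print(f"toy FSC ~= {fits:.1f} fits over {len(alignment[0])} sites")  # ~19.2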
10 --> But, we are getting a bit ahead of ourselves. What is a model, what are measurements, and what are metrics that allow us to apply mathematical models to making measurements?

11 --> Wiki gives a useful summary:
A mathematical model is a description of a system using mathematical language. The process of developing a mathematical model is termed mathematical modelling . . . A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. The values of the variables can be practically anything; real or integer numbers, boolean values or strings, for example. The variables represent some properties of the system, for example, measured system outputs often in the form of signals, timing data, counters, and event occurrence (yes/no). The actual model is the set of functions that describe the relations between the different variables.
12 --> Reasonable criteria for such a model -- drawing on the comparative difficulties approach used in worldview evaluation -- are that it should be (a) reliably empirically accurate for a relevant region of interest, (b) coherent [not self-contradictory], and (c) elegantly simple, neither simplistic nor an ad hoc patchwork.

13 --> Models that meet criteria a, b and c can be trusted, will not confuse us, and are not prone to break down.

14 --> Accuracy also suggests that giving measurable and observable values is a desirable feature of such a model.

15 --> Wiki, likewise, defines measurement usefully, so I will cite it as a point of reference, expanding on the classic "the act or result of comparing an amount of a quantity with an agreed standard amount for the quantity, its unit":
Measurement is the process or the result of determining the magnitude of a quantity, such as length or mass, relative to a unit of measurement, such as a meter or a kilogram . . . . With the exception of a few seemingly fundamental quantum constants, units of measurement are essentially arbitrary; in other words, people make them up and then agree to use them. Nothing inherent in nature dictates that an inch has to be a certain length, or that a mile is a better measure of distance than a kilometre. Over the course of human history, however, first for convenience and then for necessity, standards of measurement evolved so that communities would have certain common benchmarks.
16 --> In this context, we can see that a metric is a mathematical framework for a particular measurement. In effect a metric identifies relevant observable variables and allows us to standardise relevant values, typically by fitting them into a model framework of variables and relationships on scales. The key variables need to be empirically connected so the relevant states can be observable (in the control system sense) and compared to standard values to yield measured values. Relevant classes of scales typically fit into the RION framework: ratio, interval, ordinal, nominal.
(RION categorisation is debated of course but is widely used and in my experience very helpful. I would take a digital state variable as ordinal and/or nominal, as there are gaps between values that are not defined and there may not be a meaningful "distance" between values. This extends to the Rasch polytomous model, whereby entities can be assigned to points on a stepwise scale, and where statistical methods can be applied to the possibility of being in one of a neighbourhood of points. Going beyond this, the configuration space state concept is based on the assignment of objects to states in an n-dimensional space, often illustrated by the idea of a vast ocean with islands sitting on zones of interest in it. This is a cut down version of phase space modelling [cf also state space in control system engineering], and it is linked to much thought on so-called fitness functions, where fitness values are assigned to points in a config space, and where having well-behaved trends is a key constraint for hill-climbing optimising algorithms. A pivotal issue and contention in design theory is that biologically relevant config spaces are credibly based on islands of function in a vast sea of non-function, posing the central search challenge to models of origin of life and/or origin of body plans. (Cf here on the related fossil record of sudden appearance, stasis, disappearance.) This also extends to the origin of a fine tuned cosmos in the space of possible cosmological parameters and laws. In each case, operating points are isolated/rare in the spaces, and credibly sit in clusters we could term islands of function. The key analytical point of ID is that such isolated islands of function are maximally hard to find by chance plus necessity, on the infinite monkeys type analysis, but are routinely produced by intelligently directed configuration, i.e. design.)
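The needle-in-a-haystack point in that last parenthesis is easy to illustrate numerically -- a toy sketch of my own, with a deliberately small space of 2^40 configs rather than 2^500, so that it actually runs:

    import random

    # Islands-of-function search challenge in miniature: a target "island"
    # of 10^3 configs in a space of 2^40 (~1.1 * 10^12). A random sample of
    # 10^6 draws expects about 10^6 * 10^3 / 1.1e12 ~ 0.0009 hits, i.e.
    # effectively none -- and the real 500-bit case is immeasurably worse.
    random.seed(0)
    SPACE, ISLAND, DRAWS = 2**40, 10**3, 10**6
    hits = sum(1 for _ in range(DRAWS) if random.randrange(SPACE) < ISLAND)
    print(hits, "hits; expected ~", DRAWS * ISLAND / SPACE)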
17 --> Can such a process be "rigorous"? Especially "mathematically rigorous"? Again, let us excerpt Wiki as a testimony against known interest:
An attempted short definition of intellectual rigour might be that no suspicion of double standard be allowed: uniform principles should be applied. This is a test of consistency . . . . Mathematical rigour is often cited as a kind of gold standard for mathematical proof. It has a history traced back to Greek mathematics, in the work of Euclid. This refers to the axiomatic method . . . . Most mathematical arguments are presented as prototypes of formally rigorous proofs. The reason often cited for this is that completely rigorous proofs, which tend to be longer and more unwieldy, may obscure what is being demonstrated. Steps which are obvious to a human mind may have fairly long formal derivations from the axioms. Under this argument, there is a trade-off between rigour and comprehension. Some argue that the use of formal languages to institute complete mathematical rigour might make theories which are commonly disputed or misinterpreted completely unambiguous by revealing flaws in reasoning.
18 --> We immediately see the key problems. For, plainly, if mathematics itself is subject to a tradeoff between being comprehensible and being "rigorous," with informed intuition allowing for reasonable steps in inference, then the same must apply to explanatory and/or quantitative modelling. Otherwise, we are simply playing at selectively hyperskeptical games; which is demonstrably inherently self-contradictory on matters of fact. If you trust intuitive insights and inferences sufficiently to cross the road and make other momentous decisions, then there is no reason to blanket-dismiss such in scientific work.

19 --> We may properly insist on framing discussions and models on reasonable and accepted principles of mathematical reasoning, and that fresh departures should reason from the body of accepted positions, whether to build on it or to demolish and rebuild; but that is a commonplace. Nor are such approaches conspicuously missing in the context of this discussion.

20 --> In effect, if a model is empirically reliable and sufficiently accurate for relevant decisions to be made, is coherent and is based on reasonable assumptions, terms, and relationships [in turn tied back into the common pool of relevant thought], it should be acceptable to reasonable persons.

21 --> Immediately, the "not sufficiently mathematically rigorous" objection that we have seen so much of for the past two months collapses.

22 --> Conceptually, CSI and FSCI relate to known, observed contrasts that have been discussed in the literature for decades. Specified complexity manifested in complex functional organisation is a fact of life, and one commonly associated with the world of technology. It is also recognisable in the living cell, especially in the fact of digitally coded functional information. [ . . . ] kairosfocus
She's managed to grind the axe to a dull point and now seeks to use it as a bludgeon. Mung
UB: Sadly, MG is coming across more and more as one with an ideological axe to grind, not a serious participant in discussion towards mutual understanding, if not agreement. GEM of TKI kairosfocus
es58: Euler's identity is indeed probably the most beautiful of equations, but alas, it is a purely mathematical equation. I have responded to Toronto's attempted dismissal by putting up some of the most striking simple but powerful empirically oriented equations. Equations with significant history behind them. And, in the case of the two macroeconomic equations, equations where a lot of subjectivity and judgement are involved in their application. (I remember my Father once going to a bus station to estimate, from the cross-border traffic, a term in the import figures, to feed balance of payments estimates.) G kairosfocus
KF: sorry if I missed this, but if you were looking for elegant equations, isn't there Euler's identity: e^(i*Pi) + 1 = 0 es58
KF, I have used far stronger language than you in relation to Mathgrrl. Her strategy was obvious from the start, but no less obvious than her tactics - to repeat her demands until our noses bleed, all to the slavish applause of those to whom evidence doesn't matter. StephenB summed her up very well, and to his credit refused to play along. Upright BiPed
NOTE: Above, I have at length been forced by the evidence to use some strong language that I wish I did not have to use. Language that I do not lightly use. I think I need to explain myself, first by slightly modifying Finney's classic definition of a lie: "any species of willful deception, intended or successful." Given that:
a] there is a duty of care to the truth and to fairness, then
b] to insistently propagate false, misleading and potentially damaging claims that
c] one KNOWS or SHOULD KNOW are false or misleadingly half-truthful, is
d] to be willfully deceptive.
I speak this, not to brand MG, but instead to call her back from the brink. MG, please, please -- PLEASE! [this is a shout, with clapped hands, not just an emphasis . . . ] -- do your duty of care to the truth and fairness. GEM of TKI kairosfocus
By the way, as noted by Toronto on Mark Frank’s blog, a number of the participants there are not allowed to post comments here at UD. In the spirit of open discussion, I hope you will respond there.
You can't carry your own water here? Like having more monkeys typing on keyboards is somehow going to help you make your case? Why would we want to listen to people that have apparently been banned from UD? Have they all of a sudden changed their ways because they aren't posting here? If people there have relevant comments and can't post here you can copy and paste what they say. I, for one, am still waiting on you to make a meaningful case. Why not start there? Mung
...is some manipulation of logarithms with numbers of unknown provenance.
One does not "manipulate" logarithms. Why do you call yourself MathGrrl? Mung
Please provide a mathematically rigorous definition of CSI in response to this comment. Please show how to calculate CSI, in detail, for the first of the scenarios I proposed in my guest thread
Deja vu all over again. Broken record. Boring. MathGrrl, you're not interested in moving the debate along. You have nothing to offer beyond repeating ad nauseam the same two demands. If you were truly interested, you would do, or at least attempt to do, what vjtorley requested you to do. You've asserted that ev can generate CSI without offering a shred of evidence, and Schneider seems to know enough about CSI to make the same claim on his web site. So are you now retracting your claim about ev? Mung
[MG:] Note that if you want to use Durston’s work as an example of CSI, you must first demonstrate that his calculations are consistent with the description of CSI provided by Dembski,

20 --> This is in the teeth of analysis and citations already presented [cf points 10 - 11 in the Original CSI Newsflash Post, onlookers], which show that Durston et al provided a metric of information as actually used, by building on the H-metric, the average information per symbol based on a weighted average:

H = - [SUM on i] pi log pi

21 --> Once we have this value in hand, it can easily be substituted into the Ip slot in the log reduced Dembski expression, and as was again excerpted in the original post, it yields values of information beyond the threshold for some of the values in Table 1 of protein families.

22 --> If you are too closed-minded or lazy to read the information given and respond on its merits, instead of indulging in selectively hyperskeptical and strawman tactic dismissals, that is not my fault.

[MG:] since that definition is the basis for claims by ID proponents that CSI is an indicator of intelligent agency.

23 --> MG, you here acknowledge that Dembski has provided a definition. Of course, in your view it is not "rigorous," so you need to provide an explanation of why it is not.

24 --> Going further, the inference to design on CSI or more usually FSCI is not a matter of toeing Dembski's line; it is a matter of inference to the best explanation of an observed phenomenon remarked on by, say, Orgel and Wicken, that Dembski and others have provided relevant mathematical models for.

25 --> Can you kindly give us the best explanation for the text of your post: lucky noise, or MG, a blog commenter? (And onlookers, this is exactly what I immediately responded to MG's guest post at UD on, which she has never cogently responded to.)

[MG:] From my perusal of both authors, I don’t believe such a reconciliation is possible.

26 --> Scroll up to this thread's original post and see just how easily the two can be integrated, once you apply the log reduction and get to information in specified bits beyond a threshold. Then follow the link already given to see the citation from Durston et al that supports that insertion.

27 --> Just for completeness, let me clip from the 2007 FITS metric paper, as cited in the CSI Newsflash OP:
Considering that there are usually only 20 different amino acids possible per site for proteins, Eqn. (6) can be used to calculate a maximum Fit value/protein amino acid site of 4.32 Fits/site [NB: Log2 (20) = 4.32]. We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space. A high Fit value for individual sites within a protein indicates sites that require a high degree of functional information. High Fit values may also point to the key structural or binding sites within the overall 3-D structure.
[MG:] The closest that I have seen you come to actually providing a calculation for (the yet to be rigorously defined) CSI is some manipulation of logarithms with numbers of unknown provenance.

28 --> Another lie: the sources of the numbers clipped for illustrative purposes were given. And, where I made an error [it was 22 BYTES], I have corrected it with a strike-through. In at least one case, YOU were the source.

[MG:] I therefore propose that we clear the air and try to make some progress by two means.

29 --> Sorry, it is you who need to clear the air by providing some serious explanations [for weeks now], as already pointed out this morning. THIS IS A TURNABOUT AND FALSE ACCUSATION.

[MG:] First, please provide a mathematically rigorous definition of CSI in response to this comment.

30 --> This talking point has already been adequately rebutted. Adequate conceptions, descriptions and mathematical metrics have long been provided; they will just never be acceptable to closed minded strawman tactic objectors.

[MG:] Second, please show how to calculate CSI, in detail, for the first of the scenarios I proposed in my guest thread:

31 --> This is an OUTRAGE. The answer to this case has been given right from the outset, on first encountering it; and it has been often repeated since, on seeing the point over and over. Just, as is plainly the rhetorical agenda, it has been brushed aside.

[MG:] A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.”

32 --> The duplicate itself provides no additional information, just a copy [similar to how a mould fossil is just a copy, and a copy of software that you download is just a copy, not an original creation]. In short, the base error is conceptual, not mathematical.

33 --> But also, copies of digitally coded information [and genetic information is just that], where the scope of FSCI in the copy is beyond the FSCI threshold, are not credibly produced by chance. So, this points to a complex, functionally organised system and process of duplication. A further index of information tracing to design.

34 --> So, the word "simple" is a strawman tactic in itself.

[MG:] Note that discussions of islands of functionality, the computational power of the universe, presumed failures of modern evolutionary theory, Durston’s calculations, etc. are not relevant to answering these questions.

35 --> In short, having dismissed the cogent issues, I insist on my original opinion. This is blatant closed mindedness.

[MG:] The issue is whether or not CSI is a useful metric. Please demonstrate why you think it is.

36 --> Long since done (over and over), just ignored in the rush to push closed minded ideological talking points.>>

______________

In short, as of now, unless some very serious rhetorical gambits are taken back, MG, this will go nowhere. You have long had some serious explaining to do, now including on how you have treated the arguments of others: how you have strawmannised and by implication willfully misrepresented them -- at the very least, by deliberately refusing to engage what they have actually had to say on the merits, then superciliously dismissing and deriding them. GEM of TKI kairosfocus
MG: By now it is quite apparent that you are simply repeating the same cluster of misleading, false and in some cases accusatory strawman tactic talking points, despite having been corrected repeatedly for the past two months or so. You know or should know better. So, please, do better; or we will be warranted to conclude that we are dealing with closed minded, fundamentally dishonest rhetorical talking points. As is all too common on the part of objectors to design theory and thought. (Just cf the UD Weak Argument Correctives, top right of this and every UD page, to see what I mean.)

Anyway, one last time, I will respond, point by point, to the objections you have clipped and put up above. Now, too, I have no intention to make any further comments at MF's blog (remember the underlying attitude problem by MF . . . ) save brief notes for the record, and if there is anything of substance, I am sure that this can be reproduced here by those interested to find out what a cogent answer looks like.

Clipping and interleaving responses on points:

_______________

>> [MG:] You repeatedly claim, in the thread previous to this one, that CSI has been rigorously defined mathematically,

1 --> This is a strawman caricature. Not a promising start.

2 --> CSI, I have explicitly said, many times, is a descriptive concept that describes an observed fact, one that Orgel and Wicken have aptly summarised and which Dembski and others have subsequently provided mathematical analyses, models and metrics for.

3 --> Whatever objections one may have to the various models, the empirical reality still needs to be addressed.

4 --> And, given that the metric models build on a STANDARD metric for information, they are fundamentally valid. That standard metric, tracing to Hartley and Shannon [cf Taub and Schilling as repeatedly cited here at UD and at MF's blog, and my discussion on Connor's derivation, here in my always linked briefing note], is:

Ik = log(1/pk) = - log pk

5 --> As the original post again excerpts, Dembski's metric boils down to a measure of functionally specific [self-]information beyond the threshold of sufficient complexity where the empirically and analytically warranted best explanation is intelligence. It therefore imposes a reasonable threshold of complexity, e.g. in reduced form:

Chi_500 = Ip - 500, bits beyond the solar system resources threshold

6 --> Equations of values "beyond a threshold" have won at least one Nobel Prize in Physics, that of Einstein, as the original post notes.

7 --> In short, the Dembski approach is a reasonable one and provides a metric in general accord with standard usage and metrics of information. It focuses especially on functional specificity, but that reflects his particular interest, much as Shannon's was the carrying capacity of telephone lines. And the OP has an addition from Axe on how such specificity can be observationally demonstrated. Durston et al provided a way to measure functionally specific information for protein families, so it is biologically relevant.

8 --> I am pretty sure that most of us are interested in information that is meaningful and functional, and therefore specific by means of that function and meaning according to the rules of particular codes.

9 --> So, any reasonable person would accept that Dembski and others have provided useful metrics and models that can be used in SCIENTIFIC -- empirically tested -- investigations.

[MG:] but nowhere do you provide that rigorous mathematical definition.
10 --> Why do you insist on strawman tactic talking points in the teeth of repeated, cogent correctives?

11 --> I am sure you are aware that Calculus was developed from the 1600s on, and was in routine use for 200 or so years before the "rigorous" foundations for it were worked out from the 1800s on. It turns out that had there been an insistence on such foundations beforehand, the difficulty of getting to that stage would have blocked the road at the outset. The pioneers were correct to use intuitive concepts and practical tests of effectiveness and reliability. In short, they worked with Calculus as a scientific toolkit that was effective, and they were right to do so.

12 --> I repeat, this talking point is a strawman tactic. I have insisted that the concept comes first, and is a commonplace of an engineering civilisation: complex, specified information, especially functionally specific complex information, is a characteristic feature and an empirically observable reality for many, many systems, starting with computers, cars, cell phones, and posts in this blog. Libraries are repositories of CSI and more particularly FSCI.

13 --> Dembski and others have provided useful models and metrics that can be used in empirical investigations, building on a line of work that is 60 years old, and that is all they need to do.

[MG:] You could eliminate the need for your assertions by simply reproducing the definition in response to the challenges you are receiving.

14 --> You have received definitions [what do you think Ik = - log pk is?], discussions and explanations, repeatedly, only to insist on repeating the same strawman tactic talking points.

15 --> The message you are now communicating is that you are making ideologically motivated, closed minded, strawman tactic talking point objections, and will continue to do so regardless of correction or patient explanation.

[MG:] You have also generated a large amount of text without directly addressing the issue at hand,

16 --> This is now an outright slander-filled lie.

17 --> For the record: I -- and many others -- have provided analyses, citations, derivations/reductions, and successful applications. At every point we have met the same talking point, with ZERO indication that you are interacting with the material provided.

[MG:] namely whether or not you can provide a mathematically rigorous definition of CSI (as described in Dembski’s papers and books) and detailed examples of how to calculate it.

18 --> This is a lie, based on the trick of selective hyperskepticism: "rigour" means that anything you want to object to will be deemed not rigorous, and you will simply demand "rigour" when in fact YOU have blatantly blundered by confusing a log reduction for a probability calculation; and when asked to explain yourself, you have dodged aside for a time only to come back to repeat the same tired talking points. After two months, we can find nowhere in the exchanges at UD any demonstration of your capacity to engage the substantial empirical and mathematical matters at stake.

19 --> As someone with a mind of his own, I also reserve the right to adjust or develop Dembski's work, along lines that suit my interest. I am not a slave or robot of Dembski, locked into whatever he has said in some paper wherever. The issue is what is empirically credible and well warranted, not what Dembski may or may not have said, whenever or wherever, on whoever's interpretation. [ . . . ] kairosfocus
Alex:
However, how will one know where in the mighty big stream of characters do the words of the great bard start? Of course, you need the works of Shakespeare in your hand before the experiment starts, so that you can make comparisons. With other words, the monkeys fail to produce new information, they only reproduce an existing work through a very inefficient method.
That is, you are seeing the fatal flaws of the process. Unintelligent processes are simply not configured to create functional information, and, if they happen to throw out relevant configs of entities, they are utterly unlikely to have correlated systems to put the symbol strings to good use. The rhetorical metaphor was more effective in the days when life was thought to be a sort of simple jelly, called protoplasm. Now that we know we are dealing with molecular nanotechnology and digital information processing, that is a very different kettle of fish indeed. GEM of TKI kairosfocus
Posted on Mark Frank's blog:

CSI - AGAIN - CSI is Shannon information, of a certain complexity, with meaning/function. Using Shannon we see that there are 2 bits of information per nucleotide and 6 bits per amino acid (4 possible nucleotides = 2^2, i.e. 2 bits; 64 possible codons for amino acids and STOP = 2^6, i.e. 6 bits per amino acid).

That said, part of the "specification" is figuring out the variation tolerance, which is what Durston did. What that means is: if we have a functional protein of 100 amino acids -- a protein that cannot suffer any variation -- then it has 606 bits of specified information, which means it has CSI. Now if that protein will function the same no matter what the amino acid sequence is, then it doesn't have any specification at all. And then there is everything in between, and that is what needs to be determined.

That said, there isn't any justification for the claim that gene duplication is a blind watchmaker process.

added: These people are so intellectually dishonest it isn't worth the effort Joseph
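F/N: Joseph's per-symbol arithmetic above is easy to check in a few lines (a sketch; note that the 606-bit figure appears to count the stop codon as well, i.e. 101 codons at 6 bits each -- my reading, since 100 codons alone give 600 bits):

    import math

    # Per-symbol information capacities, per the comment above.
    bits_per_nucleotide = math.log2(4)    # 4 possible nucleotides -> 2.0 bits
    bits_per_codon      = math.log2(64)   # 64 codons (incl. STOP) -> 6.0 bits

    # 100-amino-acid protein with zero tolerated variation, plus its stop
    # codon: 101 codons in all (assumed reading of the 606-bit figure).
    protein_bits = 101 * bits_per_codon
    print(protein_bits)   # 606.0 -> past the 500-bit threshold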
MG: I think the astute onlooker will see that the excerpted objection answered in the original post above, suffices to show that my characterisation of the continued objections at MF's blog is materially accurate. I will respond on points later on, DV, but for now, please note that -- as long since pointed out here and at MF's blog -- for weeks now, you have some fairly serious explaining to do, on issues summarised here. In particular, you need to explain your persistent resort to repeated talking points in the teeth of cogent replies, e.g. on the alleged meaninglessness of CSI (and by extension, FSCI, as it is a subset). In that context, you need to explain your attempt to dismiss a log reduction to info beyond a threshold as a "probability" calculation. While you are at it, kindly explain Schneider's attempt to "correct" Dembski in identifying that the Hartley-suggested quantification is a quantitative definition of information: Ik = - log pk In addition, you need to explain the implications of such claimed "meaninglessness" in the light of the usage by Orgel and Wicken. You need to explain your four "examples," and in particular to respond to the information on the nature of ev unearthed by Mung and posted in the already linked thread. And, last but not least, you need to explain your resort to the suggestion of persecution of science, by citing Galileo's remark when he had been forced to publicly recant of his theories. Especially, in light of the evidence that we are seeing an imposition of a priori materialism on science, especially origins science, that needs to be corrected. GEM of TKI kairosfocus
MathGrrl, As I am not all that well versed in math, it seems to me, as an outside observer, that you are making two claims. One, you are claiming that ID does not have a rigorous mathematical foundation, and Two, by default of your first claim, you are claiming that neo-Darwinism does have a rigorous mathematical definition?

Now it seems to me, as an outside observer, that your claims are not borne out by the empirical evidence in the least. You have claimed Schneider's evolution algorithm as 'proof' that the universal probability bound for functional information generation has been violated. Now I appreciate such confidence in a woman, but perhaps you can see my skepticism, in that I don't see how in the world a program that is 'designed' to converge on a solution within well set, predefined boundaries has anything to do with the grand neo-Darwinian claim that purely random, natural processes can generate the unmatched levels of information we find in life. i.e. If neo-Darwinism truly is capable of generating the unmatched levels of information we find in life, which is far, far more advanced than anything we have ever devised in our most advanced computer programs, should you not be able to set RM/NS to the task of generating computer programs in the first place -- programs that exceed what we have done, indeed what Schneider has done -- instead of programming a computer to 'converge on a solution'? i.e. Why not open up Schneider's O/S to mutations and see how far his algorithm will go towards improving what he himself has designed?

notes: MathGrrl, seeing your great concern for mathematical rigour, perhaps you can pull the plank out of your own eye first before you worry about the splinter in another's eye? Perhaps you would care to apply for this job at Oxford, which is seeking to supply a mathematical foundation for Darwinism?

Oxford University Seeks Mathemagician — May 5th, 2011 by Douglas Axe Excerpt: Grand theories in physics are usually expressed in mathematics. Newton’s mechanics and Einstein’s theory of special relativity are essentially equations. Words are needed only to interpret the terms. Darwin’s theory of evolution by natural selection has obstinately remained in words since 1859. … http://biologicinstitute.org/2011/05/05/oxford-university-seeks-mathemagician/

----------------

further notes:

Whale Evolution Vs. Population Genetics - Richard Sternberg PhD. in Evolutionary Biology - video http://www.metacafe.com/watch/4165203

Waiting Longer for Two Mutations, Part 5 - Michael Behe Excerpt: the appearance of a particular (beneficial) double mutation in humans would have an expected time of appearance of 216 million years, http://behe.uncommondescent.com/2009/03/waiting-longer-for-two-mutations-part-5/

Experimental Evolution in Fruit Flies (35 years of trying to force fruit flies to evolve in the laboratory fails, spectacularly) - October 2010 Excerpt: "Despite decades of sustained selection in relatively small, sexually reproducing laboratory populations, selection did not lead to the fixation of newly arising unconditionally advantageous alleles.,,, "This research really upends the dominant paradigm about how species evolve," said ecology and evolutionary biology professor Anthony Long, the primary investigator.
http://www.arn.org/blogs/index.php/literature/2010/10/07/experimental_evolution_in_fruit_flies

Michael Behe on Falsifying Intelligent Design - video http://www.youtube.com/watch?v=N8jXXJN4o_A

MathGrrl, after you get through cleaning your own house, perhaps you would care to address this:

Quantum Information/Entanglement In DNA & Protein Folding - short video http://www.metacafe.com/watch/5936605/

It is very interesting to note that quantum entanglement, which conclusively demonstrates that 'information' in its pure 'quantum form' is completely transcendent of any time and space constraints, should be found in molecular biology on such a massive scale; for how can the quantum entanglement 'effect' in biology possibly be explained by a material (matter/energy) 'cause' when the quantum entanglement 'effect' falsified material particles as its own 'causation' in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help, since a timeless/spaceless cause must be supplied, which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments about various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply, since the cause is not within the material particles in the first place!

Yet it is also very interesting to note, in Darwinism's inability to explain this 'transcendent quantum effect' adequately, that Theism has always postulated a transcendent component to man that is not constrained by time and space. i.e. Theism has always postulated an 'eternal soul' for man that lives past the death of the body.

Traveling At The Speed Of Light - Optical Effects - mathematical model video http://www.metacafe.com/watch/5733303/

The NDE and the Tunnel - Kevin Williams' research conclusions Excerpt: I started to move toward the light. The way I moved, the physics, was completely different than it is here on Earth. It was something I had never felt before and never felt since. It was a whole different sensation of motion. I obviously wasn't walking or skipping or crawling. I was not floating. I was flowing. I was flowing toward the light. I was accelerating and I knew I was accelerating, but then again, I didn't really feel the acceleration. I just knew I was accelerating toward the light. Again, the physics was different - the physics of motion of time, space, travel. It was completely different in that tunnel, than it is here on Earth. I came out into the light and when I came out into the light, I realized that I was in heaven.(Barbara Springer) bornagain77
MathGrrl:
You have also generated a large amount of text without directly addressing the issue at hand, namely whether or not you can provide a mathematically rigorous definition of CSI (as described in Dembski’s papers and books) and detailed examples of how to calculate it.
I provided that for you - complete with examples. MathGrrl:
Note that if you want to use Durston’s work as an example of CSI, you must first demonstrate that his calculations are consistent with the description of CSI provided by Dembski, since that definition is the basis for claims by ID proponents that CSI is an indicator of intelligent agency. From my perusal of both authors, I don’t believe such a reconciliation is possible.
What you "believe" is irrelevant. As Dembski has written, specified complexity and (C)SI refer to biological function, which is what Durston was referring to - biological function. Part of the specification is how much variation is allowed - that is what Durston was doing. Joseph
kf, I have a problem with the monkeys typing as an example of random sources generating information. The usual argument goes like this: a large enough number of monkeys, given long enough time, will type all the works of Shakespeare. However, how will one know where in the mighty big stream of characters the words of the great bard start? Of course, you need the works of Shakespeare in your hand before the experiment starts, so that you can make comparisons. In other words, the monkeys fail to produce new information; they only reproduce an existing work through a very inefficient method. Now, if it is true for Shakespeare, am I not right in saying that our monkeys do not produce new information at all? Consequently, if typing monkeys are somewhat analogous to DNA copying with random mutations, as it is claimed, is it not so that random mutations just plainly do not produce new information, but destroy or distort existing ones? Alex73
kairosfocus, By the way, as noted by Toronto (http://mfinmoderation.wordpress.com/2011/05/07/mathgrrls-csi-thread-cont/#comment-2138) on Mark Frank's blog, a number of the participants there are not allowed to post comments here at UD. In the spirit of open discussion, I hope you will respond there. MathGrrl
kairosfocus,
Over at MF’s blog, there has been a continued stream of objections to the recent log reduction of the chi metric in the recent CSI Newsflash thread.
That doesn't reflect my understanding of the issues being raised there. The topics being discussed on Mark's blog go more to the fundamental concept of CSI and its application. Here is my latest comment on that thread (http://mfinmoderation.wordpress.com/2011/05/07/mathgrrls-csi-thread-cont/#comment-2102), which I hope provides some clarification:

[ begin copied comment ]

kairosfocus,

You repeatedly claim, in the thread previous to this one, that CSI has been rigorously defined mathematically, but nowhere do you provide that rigorous mathematical definition. You could eliminate the need for your assertions by simply reproducing the definition in response to the challenges you are receiving.

You have also generated a large amount of text without directly addressing the issue at hand, namely whether or not you can provide a mathematically rigorous definition of CSI (as described in Dembski’s papers and books) and detailed examples of how to calculate it.

Note that if you want to use Durston’s work as an example of CSI, you must first demonstrate that his calculations are consistent with the description of CSI provided by Dembski, since that definition is the basis for claims by ID proponents that CSI is an indicator of intelligent agency. From my perusal of both authors, I don’t believe such a reconciliation is possible.

The closest that I have seen you come to actually providing a calculation for (the yet to be rigorously defined) CSI is some manipulation of logarithms with numbers of unknown provenance.

I therefore propose that we clear the air and try to make some progress by two means. First, please provide a mathematically rigorous definition of CSI in response to this comment. Second, please show how to calculate CSI, in detail, for the first of the scenarios I proposed in my guest thread:
A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is "Produces at least X amount of protein Y."
Note that discussions of islands of functionality, the computational power of the universe, presumed failures of modern evolutionary theory, Durston’s calculations, etc. are not relevant to answering these questions. The issue is whether or not CSI is a useful metric. Please demonstrate why you think it is. [ end copied comment ] I do hope you'll return to that thread to address these points. MathGrrl
F/N: Koonin is appealing to the cosmic inflation form of the multiverse, in order precisely to overcome the search resources challenge that is discussed above. As he observes:
Recent developments in cosmology radically change the conception of the universe as well as the very notions of "probable" and "possible". The model of eternal inflation implies that all macroscopic histories permitted by laws of physics are repeated an infinite number of times in the infinite multiverse. In contrast to the traditional cosmological models of a single, finite universe, this worldview provides for the origin of an infinite number of complex systems by chance, even as the probability of complexity emerging in any given region of the multiverse is extremely low. This change in perspective has profound implications for the history of any phenomenon, and life on earth cannot be an exception.
Of course this raises the point that there is but one actually observed cosmos, and so this is a resort to speculative metaphysics; so also it should sit at the table of comparative difficulties -- on factual adequacy, coherence, and explanatory simplicity vs simplistic-ness and/or ad hoc-ery patchworks -- with live options, without censorship. Including, that the cosmos is designed.

Also, he glides over the point that the "cosmos bakery" required to produce the relevant cluster of possible worlds, in a distribution happily clustered on a zone in which life-permitting sub-cosmi are possible, is fine-tuned. Moreover, such radical expansion of contingency demands a necessary being capable of such fine tuning as the causal root. That -- and recall, we have now been in metaphysics, not physics, for the past several minutes -- strongly points to a necessary being with the purpose and power to create a multiverse style cosmos. Multiverses with sub-cosmi fine-tuned for C-chemistry, cell based intelligent life point to a cosmos designer.

Which immediately and drastically undermines the reason to infer to such worlds: inflation of material resources as imagined, so that the sort of probabilistic or search space, needle in haystack hurdles the original post points out are surmounted. So, the multiverse "solution" to the search resources challenge is self-undermining. But, it has this significance: those who advocate it are at least willing to face the infinite monkeys challenge. In Koonin's words:
Origin of life is a chicken and egg problem: for biological evolution that is governed, primarily, by natural selection, to take off, efficient systems for replication and translation are required, but even barebones cores of these systems appear to be products of extensive selection. The currently favored (partial) solution is an RNA world without proteins in which replication is catalyzed by ribozymes and which serves as the cradle for the translation system. However, the RNA world faces its own hard problems as ribozyme-catalyzed RNA replication remains a hypothesis and the selective pressures behind the origin of translation remain mysterious. Eternal inflation offers a viable alternative that is untenable in a finite universe, i.e., that a coupled system of translation and replication emerged by chance, and became the breakthrough stage from which biological evolution, centered around Darwinian selection, took off. A corollary of this hypothesis is that an RNA world, as a diverse population of replicating RNA molecules, might have never existed. In this model, the stage for Darwinian selection is set by anthropic selection of complex systems that rarely but inevitably emerge by chance in the infinite universe (multiverse).
This of course begs the question of the vastly more immense needle in haystack challenge of getting novel body plans, dozens of times over, in the compass of a single solar system. But at least, it admits the significance of the search space problem for spontaneous origin of a metabolising, vNSR self-replicating automaton. Against that backdrop, the simplistic bare bones model for first life highlights the scope of the challenge, for recall, just 125 bytes worth of info capacity for the requisite systems overwhelms the search capacity of the only actually observed cosmos. Clipping:
The origin(s) of replication and translation (hereinafter OORT) is qualitatively different from other problems in evolutionary biology and might be viewed as the hardest problem in all of biology. As soon as sufficiently fast and accurate genome replication emerges, biological evolution takes off [i.e. K fails to understand the body plan origination challenge -- looks like we need an infinity of life originating worlds to get to one with what we see, on top of the infinity of worlds to get to just one life originating one, we are looking at reductio ad absurdum] . . . . The crucial question, then, is how was the minimal complexity attained that is required to achieve the threshold replication fidelity. In even the simplest modern systems, such as RNA viruses with the replication fidelity of only ~10-3, replication is catalyzed by a complex protein replicase; even disregarding accessory subunits present in most replicases, the main catalytic subunit is a protein that consists of at least 300 amino acids [20]. The replicase, of course, is produced by translation of the respective mRNA which is mediated by a tremendously complex molecular machinery. Hence the first paradox of OORT: to attain the minimal complexity required for a biological system to start on the path of biological evolution, a system of a far greater complexity, i.e., a highly evolved one, appears to be required. How such a system could evolve, is a puzzle that defeats conventional evolutionary thinking . . . . The MWO model dramatically expands the interval on the axis of organizational complexity where the threshold can belong by making emergence of complexity attainable by chance (Fig. 1). In this framework, the possibility that the breakthrough stage for the onset of biological evolution was a high-complexity state, i.e., that the core of the coupled system of translation-replication emerged by chance, cannot be dismissed, however unlikely (i.e., extremely rare in the multiverse). The MWO model not only permits but guarantees that, somewhere in the infinite multiverse – moreover, in every single infinite universe, – such a system would emerge. The pertinent question is whether or not this is the most likely breakthrough stage the appearance of which on earth would be explained by chance and anthropic selection. I suggest that such a possibility should be taken seriously . . .
An infinity of unobserved infinities! The ultimate speculative complex-ification of the explanation. Without empirical basis in observational tests. (Apart from the implicit, on evo mat assumptions: this is the sort of thing we need to get to what we see. In short, an implicit acknowledgement of the search space challenge implied by the Chi metric and the observed complex functional organisation of life -- the only biological life we do observe.) Reductio. Even with the sort of simplifications of suggested biological life in BA's clip above.

I sing a song
To weave a spell . . .
Of needles
And, haystacks . . .
With infinities
Of monkeys
Pounding
On keyboards . . .

GEM of TKI kairosfocus
thanks kairosfocus
Kairos: I don't know about the specific site, but here is the paper they are talking about: The cosmological model of eternal inflation and the transition from chance to biological evolution in the history of life - Koonin http://www.biology-direct.com/content/2/1/15 Of note: I have not heard materialists talk much about the many-worlds hypothesis, save for Koonin in this paper and, I believe, one more paper. Yet with 'quantum information' now found on a massive scale in molecular biology, whether they realize it or not, they must appeal to the 'science destroying' many-worlds scenario, since quantum information is not reducible to a material basis (A. Aspect) bornagain77
BA: An interesting simple model. Turns out, though, that the source is now banned from where I am, so could you give the onward source. (Don't you ever think the Internet is censorship-free.) GEM of TKI

PS: I have decided to fill in some equations and their contexts, so that we can get a better understanding of what they are about and how they become meaningful and useful, above and beyond the niceties of abstract Mathematics. On this point, it bears noting that calculus was developed and in routine use for nearly 200 years before its rigorous underpinnings were identified and worked out. Some of the objectors in recent weeks know or should know that. kairosfocus
,,, This may be of interest. Even the low-end 'hypothetical' probability estimate given by evolutionists, for life spontaneously arising, is fantastically impossible:

General and Special Evidence for Intelligent Design in Biology:

The requirements for the emergence of a primitive, coupled replication-translation system, which is considered a candidate for the breakthrough stage in this paper, are much greater. At a minimum, spontaneous formation of:

- two rRNAs with a total size of at least 1000 nucleotides,
- ~10 primitive adaptors of ~30 nucleotides each, in total ~300 nucleotides, and
- at least one RNA encoding a replicase, ~500 nucleotides (low bound)

is required. In the above notation, n = 1800, resulting in E < 10^-1018. That is, the chance of life occurring by natural processes is 1 in 10 followed by 1018 zeros. (Koonin's intent was to show that, short of postulating a multiverse of an infinite number of universes (Many Worlds), the chance of life occurring on earth is vanishingly small.)

http://www.conservapedia.com/General_and_Special_Evidence_for_Intelligent_Design_in_Biology bornagain77
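ED: For readers who want to check the scale of Koonin's numbers against the thresholds in the original post, here is a minimal Python sketch. It is illustrative only: the uniform 2-bits-per-base capacity model is our simplification, not Koonin's own E-calculation, which folds in assumptions about the number of available trials.

```python
import math

# Raw configuration-space capacity of the quoted minimal system, on a
# naive uniform-chance model: 4 RNA bases -> 2 bits per base.
n = 1800                         # nucleotides, per the quoted minimum
bits = n * math.log2(4)          # 3600 bits of raw capacity

print(f"{n} nt ~ {bits:.0f} bits of raw capacity")
print(f"Chi_500  = {bits - 500:.0f} bits beyond the solar-system threshold")
print(f"Chi_1000 = {bits - 1000:.0f} bits beyond the cosmos-wide threshold")

# Odds of one specific 1800-mer in a single draw (not Koonin's E, which
# is larger because his model credits all available trials):
print(f"p(single draw) ~ 10^{-bits * math.log10(2):.0f}")
```

Either way the numbers land so far past the 500- and 1000-bit thresholds that the qualitative conclusion is unaffected.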
F/N: Me ca'an believe it! I forgot to put in Einstein's Nobel Prize-winning threshold metric equation in points m and n. Duly corrected! kairosfocus
It is funny that an atheistic materialist would choose to use E = MC^2 as his example to 'unwisely' try to challenge you on this point of information, kairos. For E = MC^2, by itself, actually points to a higher 'eternal' dimension that is above this 3-Dimensional material dimension, which should be a fairly unnerving thing for materialists!?!

Please note in the following video how the 3-Dimensional material world 'folds and collapses' into a tunnel shape around the direction of travel as an observer moves towards the 'higher dimension' of the speed of light:

Traveling At The Speed Of Light - Optical Effects - video
http://www.metacafe.com/watch/5733303/

As well, please compare the similarity of the optical effect, noted at the 3:22 minute mark of the preceding video, with the 'light at the end of the tunnel' reported in very many Near Death Experiences:

The NDE and the Tunnel - Kevin Williams' research conclusions
Excerpt: I started to move toward the light. The way I moved, the physics, was completely different than it is here on Earth. It was something I had never felt before and never felt since. It was a whole different sensation of motion. I obviously wasn't walking or skipping or crawling. I was not floating. I was flowing. I was flowing toward the light. I was accelerating and I knew I was accelerating, but then again, I didn't really feel the acceleration. I just knew I was accelerating toward the light. Again, the physics was different - the physics of motion of time, space, travel. It was completely different in that tunnel, than it is here on Earth. I came out into the light and when I came out into the light, I realized that I was in heaven. (Barbara Springer)

As well, traveling at the speed of light gets us to the eternal, 'past and future folding into now', framework of time. This higher-dimensional, 'eternal', inference for the time framework of light is warranted because light is not 'frozen within time', yet it is shown that time, as we understand it, does not pass for light.

"I've just developed a new theory of eternity." Albert Einstein - The Einstein Factor - Reader's Digest

"The laws of relativity have changed timeless existence from a theological claim to a physical reality. Light, you see, is outside of time, a fact of nature proven in thousands of experiments at hundreds of universities. I don't pretend to know how tomorrow can exist simultaneously with today and yesterday. But at the speed of light they actually and rigorously do. Time does not pass." Richard Swenson - More Than Meets The Eye, Chpt. 12

Light and Quantum Entanglement Reflect Some Characteristics Of God - video
http://www.metacafe.com/watch/4102182

It is very interesting to note that this strange higher-dimensional, eternal framework for time, found in special relativity, also finds corroboration in Near Death Experience testimonies:

'In the spirit world,,, instantly, there was no sense of time. See, everything on earth is related to time. You got up this morning, you are going to go to bed tonight. Something is new, it will get old. Something is born, it's going to die. Everything on the physical plane is relative to time, but everything in the spiritual plane is relative to eternity. Instantly I was in total consciousness and awareness of eternity, and you and I as we live in this earth cannot even comprehend it, because everything that we have here is filled within the veil of the temporal life. In the spirit life that is more real than anything else and it is awesome. Eternity as a concept is awesome. There is no such thing as time. I knew that whatever happened was going to go on and on.' Mickey Robinson - Near Death Experience testimony

'When you die, you enter eternity. It feels like you were always there, and you will always be there. You realize that existence on Earth is only just a brief instant.' Dr. Ken Ring - has extensively studied Near Death Experiences

A further note of interest is that atoms have been found to be reducible to quantum information:

Ions have been teleported successfully for the first time by two independent research groups
Excerpt: In fact, copying isn't quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable - it is enforced by the laws of quantum mechanics, which stipulate that you can't 'clone' a quantum state. In principle, however, the 'copy' can be indistinguishable from the original (that was destroyed),,,
http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp

Atom takes a quantum leap - 2009
Excerpt: Ytterbium ions have been 'teleported' over a distance of a metre.,,, "What you're moving is information, not the actual atoms," says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second.
http://www.freerepublic.com/focus/news/2171769/posts

Double-slit experiment
Excerpt: In 1999 objects large enough to see under a microscope, buckyball (interlocking carbon atom) molecules (diameter about 0.7 nm, nearly half a million times that of a proton), were found to exhibit wave-like interference.
http://en.wikipedia.org/wiki/Double-slit_experiment

Dr. Quantum - Double Slit Experiment & Entanglement - video
http://www.metacafe.com/watch/4096579 bornagain77
Mung, point. I would like to see observationally anchored evidence that it is possible to spontaneously assemble a metabolising, self-replicating automaton with a built-in von Neumann self-replicator facility, by blind chance and necessity. (Cf here.) And, that the required codes, algorithms and the like for the vNSR credibly can come about spontaneously. Further to this, I note that the 1,000-bit needle-in-the-too-big-haystack threshold comes at just 125 bytes worth of info. I think the others who have experience of assembly language coding of controllers will back me in strong doubts that any significant controller can be set up in that space, much less a vNSR. GEM of TKI kairosfocus
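ED: KF's 125-byte figure, and the 143/147-character figures from the original post, can be verified in a couple of lines (a sketch; the ceiling on 7-bit ASCII characters is our reading of how the post's character counts were derived):

```python
import math

# Re-express the thresholds from the original post in storage units,
# with the size of each configuration space as a power of ten.
for bits in (500, 1000, 1024):
    chars = math.ceil(bits / 7)            # 7-bit ASCII characters
    log10_space = bits * math.log10(2)     # log10 of 2^bits possibilities
    print(f"{bits:4d} bits = {bits / 8:6.1f} bytes = {chars:3d} ASCII chars; "
          f"config space ~ 10^{log10_space:.1f}")
```

The 1000-bit row gives 125 bytes and a space of ~10^301.0, matching eqn n6; the 1024-bit row gives ~10^308.3, i.e. the 1.80*10^308 of eqn n6a.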
...most of the inferred primordial polypeptide folding units are proposed to be only 40-60 amino acids in length.
Well, obviously it's not because polypeptides with longer amino acid sequences are not possible. We know they are. So why infer shorter lengths? Does it have anything to do with probabilities, as in the longer the sequence, the more unlikely it is, or the larger the search space? IOW, it's an important exercise even for origin of polypeptide theories. Mung
Dr Rec: The cases you have in view are of MICRO evo, i.e. adaptations within an island of function. That is not in dispute by anyone, including the Young Earth Creationists. [F/N: Pardon my partial misreading. DR in no 3 is in part addressing the origin of proteins, using hypothetical short polypeptides as folding units, his linked abstract in part saying: "'gene duplication and fusion' is the evolutionary mechanism generally hypothesized to be responsible for their emergence from simple peptide motifs." The problem here is that functional proteins for life -- required in clusters -- are not going to be 60 AAs long, so the overall protein sits in a fold domain that is deeply isolated, and there must be a large number of functional proteins from the outset for life to start as a metabolising automaton with an integral von Neumann Self-Replicator. Proteins must fold stably, must fit the key-lock role, and must function chemically or structurally or in whatever way; not just one at a time, but in interactive clusters in a viable organism that starts from a fertilised ovum and unfolds embryologically into a body plan. That brings right back on the table the issue of the origin of large quantities of functionally specific, complex info, the core challenge for OOL and for macro evo. In the latter case, the idea of genes for proteins assembling themselves by chance in 60-AA blocks, then these blocks coming together -- across a cluster of required proteins and regulatory circuits, by happy coincidence -- to form an embryologically feasible organism, dozens of times over, is so utterly beyond the search capacity of the observed cosmos as to be a reductio ad absurdum on its face. And yet, that seems to be what is being put forward.] The Chi metric's target is MACRO-evo, the arrival at islands of function for novel body plans. The DNA complement of the first cellular life was credibly about 300 - 1,000 k bases or so, as smaller-genome organisms are incomplete and parasitical. We are looking at over 100 k bits worth of info there. The config space is well beyond the thresholds. And, to get to embryologically feasible novel body plans we are looking at 10 - 100+ M bases of DNA, a major challenge for the evo mat view. As to the notion that there is a smoothly branching tree of life from unicellular organisms to life forms as we see them, there is no credible evidence for that, and every evidence against it. In short, there is a reason why the observed sudden appearances, stasis, and disappearances or continuity to the modern world that dominate the fossil record are there. GEM of TKI kairosfocus
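ED: As a rough sense of scale for the genome sizes KF cites (a sketch; 2 bits per base is raw storage capacity on 4 equiprobable bases, an upper bound rather than a measured functional-information value):

```python
# Raw storage capacity of the genome sizes mentioned above.
for bases in (300_000, 1_000_000, 10_000_000, 100_000_000):
    bits = 2 * bases                      # 2 bits per DNA base
    print(f"{bases:>11,} bases ~ {bits:>11,} bits "
          f"({bits - 1000:,} bits past the 1000-bit threshold)")
```

Even the smallest figure, 300 k bases, is some 600,000 bits of raw capacity, which is why the config spaces in view dwarf the thresholds.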
Mung: Chi is simply a Greek letter that Dembski chose. The units for CSI are bits beyond the threshold. I had a Physics prof once who, when he ran out of Latin and Greek, would resort to one of the Indian scripts. And Cantor, a Jew, used Aleph in his famous result on transfinite numbers. Entropy is a measure of micro-scale disorder that is reflected up in the macroscale through two micro-related variables, as Clausius used: dS >/= d'Q/T, which gives units of J/K in the SI system: Joules per Kelvin. [Degrees K were dropped decades ago.] (Heat is an increment in random motion due to radiation, conduction or convection, and Temperature is a measure of average random energy per microscopic degree of freedom; often translation, rotation and vibration.) The ignorance in question is that about the specific distribution of masses, momenta, energies etc. at micro-level, given that there are a great many specific microstates consistent with a lab-level macrostate of given Temp, Pressure, Volume, Mass, magnetic moment, etc. Shannon information is a weighted average information per symbol: H = - [SUM on i] p_i log p_i, in bits if the log is base 2. It is connected to thermal entropy, as Jaynes pointed out and as others have now substantiated, so that the hot controversy (over subjectivity in the hallowed halls of physics) is dying off. GEM of TKI kairosfocus
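ED: For readers following the math, here is KF's H metric as running code (a minimal sketch; the example distributions are illustrative only):

```python
import math

def shannon_H(probs):
    """Average information per symbol: H = -SUM p_i * log2(p_i), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_H([0.5, 0.5]))     # fair coin: 1.0 bit/symbol
print(shannon_H([0.9, 0.1]))     # biased coin: ~0.47 bits/symbol
print(shannon_H([1/20] * 20))    # 20 equiprobable AAs: log2(20) ~ 4.32 bits
```

The last line recovers the 4.32 bits/AA raw-capacity figure used with the Durston table in the original post; Durston's fits per symbol are lower precisely because real protein families do not use all 20 AAs equiprobably.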
kairosfocus, I think this is an important and interesting exercise. However, from an evolutionary standpoint, most of the inferred primordial polypeptide folding units are proposed to be only 40-60 amino acids in length. Using your calculation, these (and many modern proteins in the table) seem to fall well under the threshold. See for example here:

Experimental support for the evolution of symmetric protein architecture from a simple peptide motif
www.pnas.org/content/early/2010/12/15/1015032108.short DrREC
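ED: DrREC's observation checks out on the raw-capacity arithmetic (a sketch; this uses the post's log2(20) ~ 4.32 bits/AA capacity figure, not Durston-style functional fits):

```python
import math

# Raw information capacity of polypeptides of various lengths.
bits_per_aa = math.log2(20)              # ~4.32 bits per amino acid
for length in (40, 60, 300):
    bits = length * bits_per_aa
    status = "past" if bits > 500 else "under"
    print(f"{length:3d} AA ~ {bits:6.1f} bits -> {status} the 500-bit threshold")
```

A 40-60 AA unit tops out near 173-259 bits, under the 500-bit threshold, which is the point KF takes up in his reply above.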
Hi kf, Thanks for the new posting. Why do you call it a "chi expression," or "chi metric?" Is it just because of the Greek letter on the left, or is there some other significance? Also, I see you've even answered a question I had about entropy, which is whether it is a measure, and what it is a measure of. So from that I then ask: is Shannon Information also a measure, and if so, what is it a measure of? Is it also a measure of the ignorance of a "receiver" about something? I think I had an "ah hah!" moment last night, but I need to follow up on it and make sure it wasn't an "ah oops" moment.
________________
ED: Mung, cf the discussion in a previous thread here, on what H -- avg info per symbol -- is about, and how information received reduces uncertainty (concerning the source's state), which implies reduction of ignorance in the case of a potentially knowing subject. Mung
How could I have forgotten Axe's exercise on islands of function! kairosfocus
