Uncommon Descent Serving The Intelligent Design Community

An attempt at computing dFSCI for English language

Categories: Intelligent Design

In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design.

One of the objects was a Shakespeare sonnet.

My answer was the following:

A Shakespeare sonnet. Alan’s comments about that are out of order. I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in English (OK, ancient English).
Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

In the discussion, I admitted however that I had not really computed the target space in this case:

The only point is that I have not a simple way to measure the target space for English language, so I have taken a shortcut by choosing a long enough sequence, so that I am well sure that the target space /search space ratio is above 500 bits. As I have clearly explained in my post #400.
For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?).
That’s why I have chosen an over-generous length. Am I wrong? Well, just offer a false positive.
For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive.

That remains true. But I have reflected, and I thought that perhaps, even if I am not a linguist and not even a mathematician, I could try to define the target space more quantitatively in this case, or at least to find a reasonable upper bound for it.

So, here is the result of my reasoning. Again, I am neither a linguist nor a mathematician, and I will be happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize.

Let’s start from my functional definition: any text of 600 characters which has good meaning in English.

The search space for a random search, where every character has the same probability, assuming an alphabet of 30 characters (letters, space, elementary punctuation), is 30^600, which is about 2^2944. IOWs, 2944 bits.
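As a sanity check, that figure can be reproduced in a couple of lines (a minimal sketch in Python; the 30-symbol alphabet and 600-character length are the assumptions stated above):

```python
import math

ALPHABET_SIZE = 30   # letters, space, elementary punctuation (assumption above)
TEXT_LENGTH = 600    # characters in a typical sonnet

# Search space = 30^600; in bits, that is 600 * log2(30)
search_space_bits = TEXT_LENGTH * math.log2(ALPHABET_SIZE)
print(round(search_space_bits))  # about 2944 bits
```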

OK.

Now, I make the following assumptions (more or less derived from a quick Internet search):

a) There are about 200,000 words in English

b) The average length of an English word is 5 characters.

I also make the easy assumption that a text which has good meaning in English is made of English words.

For a 600 character text, we can therefore assume an average number of words of 120 (600/5).

Now, we compute the possible combinations (with repetition) of 120 words from a pool of 200,000. The result, if I am right, is about 2^1453. IOWs, 1453 bits.

Now, each of these combinations of 120 words can have at most 120! different permutations (fewer, if some words are repeated), which is about 2^660. IOWs, 660 bits.

So, multiplying the total number of word combinations with repetitions by the total number of permutations for each combination, we have:

2^1453 * 2^660 = 2^2113

IOWs, 2113 bits.

What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200000 English words. Or at least, a good approximation of that number.

It’s a big number.
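The 1453-, 660- and 2113-bit figures can all be checked with exact integer arithmetic (a sketch assuming the 200,000-word pool and 120 words per text; combinations with repetition of k items from a pool of n are counted by the binomial coefficient C(n + k - 1, k)):

```python
import math

POOL = 200_000   # assumed number of English words
WORDS = 120      # words in a 600-character text (600 / 5)

# Combinations with repetition: C(POOL + WORDS - 1, WORDS)
combos = math.comb(POOL + WORDS - 1, WORDS)
print(round(math.log2(combos)))          # about 1453 bits

# Orderings of each combination (at most 120!)
perms = math.factorial(WORDS)
print(round(math.log2(perms)))           # about 660 bits

# Total number of 120-word sequences
print(round(math.log2(combos * perms)))  # about 2113 bits
```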

Now, the important concept: in that number are certainly included all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words.

And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

So, I believe that we can say that 2^2113 is an upper bound for our target space of sequences of 600 characters which have good meaning in English. And, certainly, a very generous upper bound.

Well, if we take that number as a measure of our target space, what is the functional information in a sequence of 600 characters which has good meaning in English?

It’s easy: the ratio between target space and search space:

2^2113 / 2^2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here.)

So, if we take as a measure of our target space a number which is certainly an extremely overestimated upper bound for the real value, still our dFSI is over 800 bits.
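Because both spaces are expressed as powers of two, the dFSI reduces to a subtraction of exponents (a sketch; note that 120 * log2(200,000) gives essentially the same 2113-bit upper bound as the combination/permutation count above):

```python
import math

search_space_bits = 600 * math.log2(30)       # about 2944 bits
target_space_bits = 120 * math.log2(200_000)  # about 2113 bits (generous upper bound)

# Functional information: -log2(target / search) = search bits - target bits
functional_bits = search_space_bits - target_space_bits
print(round(functional_bits))  # about 831 bits
```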

Let’s go back to my initial statement:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

By the way, another important result is that if I make the same computation for a 300 character string (60 words), the dFSI value is about 416 bits. That is a very clear demonstration that, in language, dFSI is bound to increase with the length of the string.
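The computation generalizes to any string length, which makes the length dependence explicit (a sketch under the same assumptions: 30-symbol alphabet, 200,000-word pool, 5-character average word; the 300-character value lands at about 415.5 bits before rounding):

```python
import math

def dfsi_upper_bound_bits(length: int) -> float:
    """dFSI (in bits) for a text of `length` characters, using the
    generous upper bound on the target space derived above."""
    words = length // 5                  # average word length of 5
    search = length * math.log2(30)      # 30-symbol alphabet
    target = words * math.log2(200_000)  # word-sequence upper bound
    return search - target

print(dfsi_upper_bound_bits(600))  # about 831 bits
print(dfsi_upper_bound_bits(300))  # about 415.5 bits
```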

Comments
GP, I think the problem here is that on evolutionary materialism contemplation must reduce to computation, but the deterministic mechanistic side of algors is not creative and the stochastic side is not creative enough and powerful enough to account for FSCO/I and particularly spectacular cases of dFSCI such as the sonnets in question. KF
kairosfocus
November 13, 2014, 10:34 AM PDT
Barry, since you're likely relying on this: "(b) trying to give the false impression that the victim trying to defend himself is the one who started the quarrel.", maybe you can show that kairosfocus and other ID-creationists are the victims and didn't "start the quarrel"? I have been commenting here for a short time. kairosfocus has been spewing his malicious, mendacious, sanctimonious, libelous, hypocritical, falsely accusatory attacks against "evomats" and their "ilk" and "fellow travelers" for a long time. kairosfocus, you, and the other ID-creationists have been starting and perpetuating quarrels (and worse) from the moment that you and your "ilk" first tried to 'wedge' your theocratic religious agenda into science, public education, and politics.Reality
November 13, 2014, 10:08 AM PDT
"you can analytically deduce log(p(T|H)) and see that it is an information metric"
What? It's a log transform of a probability. Are you really saying that every time a statistician works in log-space (because it's easier to take sums than products, and it can prevent underflow) they start working on "information"?
wd400
November 13, 2014, 10:08 AM PDT
PS: If you had bothered to consider context, you would have seen that I have not made empty assertions but can back up every point I have made. Your turnabout based on snip and snipe is revealing. Especially as the point being defended is a schoolyard taunt mockingly dismissive twisting of FSCO/I, a descriptive term that I happened to highlight as an observable fact just a few hours ago, here.kairosfocus
November 13, 2014, 10:05 AM PDT
Zachriel: I am not sure what to say. I agree on many of your last comments addressed to me, about Shakespeare and similar. To be more clear about my personal position on the role of consciousness in algorithmic cognition, I want to say that I absolutely recognize that Shakespeare had a lot of information coming from his environment, his personal history, his experiences, and so on. Much of that experience can certainly be elaborated in algorithmic ways, and there is no doubt that our conscious processes use many algorithmic procedures to record and transform many data. My point is different. My point is that being conscious, and having the conscious experience/intuition of meaning (for example, the intuition that something exists which can be considered true, and the basic intuitions of logic, and many other things) and of purpose (the subjective experience that things can be considered desirable or not, and that each conscious representation has a connotation of feeling) and of free will (that we can in some mysterious way influence what happens to us and to the world about us in alternative ways according to some inner choices), all that has a fundamental role in our ability to cognize, to build a map of reality, to output our intuitions to material objects, to design. So, there is no doubt that Shakespeare used a lot of data and of data processing, like any of us, but what he did with those data would have never been possible as a simple algorithmic processing of the data themselves. It was the result of how he represented those data in his consciousness, of what he intuited about them, of how he reacted inwardly to them, of how he reacted outwardly as a consequence of his inner representations and reactions. All those steps depend on the simple fact that in conscious beings data generate conscious representations and that those conscious representations generate new data outputs. 
A non-conscious algorithm lacks those steps, and is therefore confined to algorithmic processing of data.
gpuccio
November 13, 2014, 10:02 AM PDT
Reality, enough has been said to show the point as just again outlined to KS, which holds for you too. KF
kairosfocus
November 13, 2014, 10:01 AM PDT
KS: you can analytically deduce log(p(T|H)) and see that it is an information metric. Information being observable through various means can then feed back in various ways. Where also stochastic patterns can be used to project back to underlying history, statistical factors and dynamics at work. Indeed, that is how info in English text, considered as a stochastic system, is estimated. For a simple case, E is about 1/8 for typical English text. KF
kairosfocus
November 13, 2014, 10:00 AM PDT
Reality: Please do some homework on dynamic-stochastic systems, observability of systems and the issue of inferring path in phase space from observable variables, and more. Think about Brownian motion as an observable and then about random walk of molecules in a body of air that is drifting along as part of a wind as what may be inferred, and indeed ponder how Brownian motion contributed to acceptance of the atom as a real albeit then invisible entity. KF
kairosfocus
November 13, 2014, 09:56 AM PDT
DNA_Jock at #336: Thank you for your good comments about that paper. Of course, I don't agree with all that you say, and I really want to discuss that paper in detail with you, but I think that I need some time and serenity to do that, so I will not answer your points immediately. I will try however to take up the discussion as soon as possible. For the moment I have not much time, and I still want to monitor the general discussion in this thread, while it is still "hot". :) Any thoughts on my #323? I ask in a very open manner, because I have tried there to outline some very general points which are certainly very open to discussion, but IMO extremely important. I just wondered if you have specific opinions on some of them.
gpuccio
November 13, 2014, 09:47 AM PDT
KF:
KS: going to information metrics, through log reduction, that opens up a world of direct and statistical info metrics, as you full well know or should know. Game over.
KF, you can't take the logarithm of P(T|H) without calculating P(T|H). Game on.
keith s
November 13, 2014, 09:43 AM PDT
Thank you Reality @ 345 for your demonstration of Darwinian Debating Devices #2: The “Turnabout” Tactic.
Barry Arrington
November 13, 2014, 09:30 AM PDT
kairosfocus, is your gibberish supposed to mean something? And can you show where I appealed to ANY authority? You've been challenged to "calculate a true P(T|H) for a biological phenomenon — one that takes “Darwinian and other material mechanisms” into account, to borrow Dembski’s phrase." Why are you so afraid to "Deal with the substance"?
Reality
November 13, 2014, 09:28 AM PDT
R: FYI, the appeal to "was it in a peer-reviewed journal article" (actually closely linked terms are, and the concept is routine in engineering) is in fact an appeal to authority as gate-keeper. KF
kairosfocus
November 13, 2014, 09:05 AM PDT
kairosfocus, you play your malicious, mendacious, libelous, schoolyard bully mental level taunt games: "...never mind what evo mat ideologues in lab coats and their fellow travellers want to decree. KF" "The resort to such at this late date is a mark of patent desperation. KF" "So, while it is fashionable to impose the ideologically loaded demands of lab coat clad evolutionary materialism and/or fellow travellers..." "I no longer expect you to be responsive to mere facts or reasoning, as I soon came to see committed Marxists based on their behaviour..." "...uniformity reasoning cuts across the dominant, lab coat clad evolutionary materialism and its fellow travellers..." "Not quite as bad as Judge Jones viewing Inherit the Wind to set his attitude on the blunder that this accurately reported the Scopes affair, but too close for comfort. And don’t try to deny, I went through that firsthand back in the day and have seen the abuse continue up to uncomfortably close days. Do you really want to go to a rundown of infamous manipulative icons of evo mat ideology dressed up in a lab coat? KF" Yet you hypocritically spewed this "...playing the schoolyard taunt game simply shows a mental level akin to schoolyard bullies — especially when that is to try to dismiss an observable fact as simple as an Abu 6500 fishing reel. I suggest you avoid such in future. KF" And this: "Personalities via loaded language only serve to hamper ability to understand; this problem and other similar problems have dogged your responses to design thought for years, consistently yielding strawman caricatures that you have knocked over." I suggest you avoid such in future.Reality
November 13, 2014, 09:05 AM PDT
Adapa, you full well know you resorted to a schoolyard taunt tactic, as all can see by scrolling up. Twisting terms to create mocking taunts -- and here in the teeth of a direct demonstration of the described reality -- speaks volumes and not in your favour. Now you have resorted to the brazen denial when called out. Please think about the corner you are painting yourself into.
KS: By going to information metrics, through log reduction, that opens up a world of direct and statistical info metrics, as you full well know or should know. Game over. KF
kairosfocus
November 13, 2014, 09:00 AM PDT
KF, As we keep telling you, it is utterly trivial to go from P(T|H) to log P(T|H) and back again. Logarithms and antilogarithms are easy. P(T|H) is hard. If you can't calculate P(T|H), you can't take its logarithm. You need to show that you can calculate a true P(T|H) for a biological phenomenon -- one that takes "Darwinian and other material mechanisms" into account, to borrow Dembski's phrase. You say you can do it. Let's see you back up your claim.
keith s
November 13, 2014, 08:55 AM PDT
Kairosfocus, I take your response @340 as an assertion that you can, in fact, calculate log p(T|H) for a biological. Care to demonstrate? Note that not one of your numerous comments-closed-FYI-FTR posts does this.
DNA_Jock
November 13, 2014, 08:16 AM PDT
kairosfocus: "Adapa, playing the schoolyard taunt game simply shows a mental level akin to schoolyard bullies — especially when that is to try to dismiss an observable fact as simple as an Abu 6500 fishing reel. I suggest you avoid such in future. KF"
All I did was point out that the parameter you claim is "amenable to observation and even quantification" has never been used in the scientific community. Not once, not ever. I would have pointed that out to you on one of your many identical threads crowing about how wonderful FSCO/I is but you bravely closed comments in every one.
Adapa
November 13, 2014, 08:06 AM PDT
DJ:Actually not, as it is fairly easy to get information numbers for DNA, RNA and even proteins, as has been done. That is not the full info content of life forms, but it is a definite subset and gives the material result already. Believe you me once I saw the power of transformations to move you out of a major analytical headache, that was a lesson for life. Of course evaluating Lapalace transforms is itself a mess but the neat thing is that this is reduced to tables that can be applied, and integrals and differentials have particularly simple evaluations. Indeed, in evaluating diff eqn solutions using auxiliary eqns, you are using such transforms in disguise -- why didn't they just use the usual s or p and done. Similarly, going to operators form is the same thing. (I love the operator concept, the Russians make some real nice use of it.) The transformation to information is similarly though much less spectacularly, a breakthrough. For info is amenable to both evaluation on storage capacity of media and by application of statistics of messages. The statistics of the messages, whether text in English or patterns of AA residues for proteins etc, can then tell us a lot about the real world dynamic-stochastic process and the adaptations to particular cases involved. (That is what I was hinting at in talking on real world Monte Carlos. Down that road, systems analysis.) KFkairosfocus
November 13, 2014, 08:03 AM PDT
kairosfocus said: "Descriptive terms linked to observables and related analyses and abbreviations do not gain their credibility or substance from appeals to authority. Deal with the substance..." LOOK WHO'S TALKING! I DID NOT and DO NOT make ANY appeals to authority. YOU, on the other hand, CONSTANTLY make appeals to authority, and YOU portray YOURSELF as THE AUTHORITY ON EVERYTHING. And YOU are AVOIDING the "substance" of the NUMEROUS, SOLID REFUTATIONS of your DICTATORIAL, INCORRECT, and FALSELY ACCUSATORY logorrhea.
Reality
November 13, 2014, 08:00 AM PDT
kairosfocus: Z, the facts of how Weasel was used manipulatively for literally decades speak for themselves.
We read Dawkins. He doesn't say it's a complete model of evolution. You didn't answer. Instead of what you consider non-functional steps, we could have a population of words that are ruthlessly selected for function, no close matches allowed. Do you think we could evolve some long words by this process?
Zachriel
November 13, 2014, 08:00 AM PDT
kf @ 333 Fascinating stuff. But you accused me thus: "Notice how D-J persistently leaves off the inconvenient little log p(T|H)". Here's my point: if you can calculate p(T|H), you can calculate log p(T|H), and vice versa. Pointing out that you, kairosfocus, CANNOT calculate p(T|H) is utterly equivalent to pointing out that you CANNOT calculate log p(T|H). For any biological. The log transformation brings me no inconvenience whatsoever: it is utterly irrelevant. Regarding your use of fits to derive log p(T|H), see my comment re Durston in 336 above.
DNA_Jock
November 13, 2014, 07:53 AM PDT
gpuccio, Thank you for the very interesting Hayashi 2006 PLoS ONE reference. I had seen their figure 5 before, but I did not realize the extent to which they had experimental support for their view of the landscape. This paper is quite the show-stopper for two assertions that are repeatedly made at UD. 1) There are islands of function. Apparently not:
The evolvability of arbitrary chosen random sequence suggests that most positions at the bottom of the fitness landscape have routes toward higher fitness.
I reckon that "most" smacks of mild over-concluding here, but we can say, conservatively, that over ~1% of random sequences have routes towards higher fitness. So much for "islands". 2) We can use Durston's measures of fits to estimate probabilities, as kairosfocus does in his always-linked... No, we can do no such thing. Per Hayashi, once we move to higher fitness, there are large numbers of local optima with varying degrees of interconnectedness. These local optima are constrained in a way that differs dramatically from the lower slopes of the hill. This is a total killer for any argument that tries to use extant, optimized proteins to estimate the degree of substitution allowed within less-optimized proteins. Bottom-up approaches are the only valid technique. It turns out that I was far more right than I thought I was... F/N: I note in passing that k=20 deep-sixes another ID-trope: "overlapping functionality or multiple constraints prevents evolution". Here each residue interacts with, on average, 20 others. Evolution, unlike a human designer, is unfazed.DNA_Jock
November 13, 2014, 07:41 AM PDT
Z, the facts of how Weasel was used manipulatively for literally decades speak for themselves. Not quite as bad as Judge Jones viewing Inherit the Wind to set his attitude on the blunder that this accurately reported the Scopes affair, but too close for comfort. And don't try to deny, I went through that firsthand back in the day and have seen the abuse continue up to uncomfortably close days. Do you really want to go to a rundown of infamous manipulative icons of evo mat ideology dressed up in a lab coat? KF
kairosfocus
November 13, 2014, 07:04 AM PDT
Adapa, playing the schoolyard taunt game simply shows a mental level akin to schoolyard bullies -- especially when that is to try to dismiss an observable fact as simple as an Abu 6500 fishing reel. I suggest you avoid such in future. KF
kairosfocus
November 13, 2014, 07:00 AM PDT
D-J: That is actually fairly frequent in modelling and analysis. An abstraction or situation in one form is not very amenable to calculation or visualisation, but with a transformation, you are in a different zone where doors open up. Not totally dissimilar to integration by substitutions. Once we know something is information, we have ways to get reasonable values. And oddly, that then enables an estimate of the otherwise harder value by inverting the transformation in this case. (Coming back through an integration procedure is often a bit harder.) For instance, working with complicated differential equations can be a mess. Reduce using Laplace Transforms and you are doing algebra on complex frequency domain variables. Push another step and you are doing block diagram algebra. A bit more and you are looking at pole-zero heavy stretchy rubber sheet plots and wireframes, which allow you to read off transient and frequency response. A similar transform gets you into the Z domain for discrete state analysis with the famous unit delay function and digital filters with finite and infinite impulse responses, with their own rubber sheet analysis . . . just watch out for aliasing. (Did you forget that I spent years more in that domain than the time domain?) As would be obvious, save for the operating hyperskepticism that is in the driving seat. But then in the policy world over the past few weeks, I have been dealing with a few cases like that . . . and what drives me bananas there is the, I don't like diagrams and graphs retort to an infographic that reduces a monograph worth of abstruse reasoning to a poster-size chart. Adapa: Why are you drumbeat repeating what has been adequately answered long since by something open to examination? When a fact can be directly seen, there is no need for peer review panels to affirm it. 
And in this case, FSCO/I and dFSCI are simply abbreviations of descriptive phrases, and in fact they trace to Wicken's wiring diagram, functionally rich organisation discussion of 1979 and Orgel's specified complexity discussion of 1973, as you full well should know. The phenomenon is a fact of observation as blatant as the difference between a volcano dome pushing out ash including sand into a pile, and a few miles away, a child on a beach made from that same dome, building a sand castle. KF
kairosfocus
November 13, 2014, 06:58 AM PDT
Evolutionists still can't provide any probabilities for their position which relies solely on probabilities. And then, like little children, they try to blame ID for their FAILures.
Joe
November 13, 2014, 06:56 AM PDT
Adapa:
If FIASCO is so amenable to observation and even quantification then why has no one ever observed or quantified it in any real world biological cases?
We have provided one peer-reviewed paper that does so. AGAIN, Crick defined biological information and science has determined it is both complex and specified.
Joe
November 13, 2014, 06:51 AM PDT
Wrong again, Zachriel. Weasel shows how a TARGETED search is faster than a random walk.
Joe
November 13, 2014, 06:49 AM PDT
fifthmonarchyman: If fact the paper I linked demonstrates that humans are quite good at it. Hasanhodzic shows that people are good at distinguishing order. Market returns are not random, but chaotic. fifthmonarchyman: Complex Specified information is not computable That's the question, not an answer. If you have such a proof, we'd be happy to look at it. fifthmonarchyman: In fact I’m not sure the majority of critics have fully grasped that Darwinian evolution is simply an algorithm and is fully subject to any and all the mathematical limitations thereof. While models of evolution are algorithmic, that doesn't mean evolution is algorithmic. In particular, evolution incorporates elements from the natural environment. A simple example may suffice. Algorithms can't generate random numbers. However, an algorithm can incorporate information from the real world, including randomness. fifthmonarchyman: Actually what is objective is the number. By definition, a value is not objective if it depends on the individual making the measurement. fifthmonarchyman: Don’t be offended if I don’t respond to you as much as you would like. You're under no obligation to defend your position. Readers can make of that what they will. gpuccio: Because we know well that no existing designed algorithm, at least at present, can generate a piece of text of that length which has good meaning. Unless the text is already in the algorithm itself. It isn't necessary to have the text in the algorithm, though you do have to have a dictionary, rules of grammar, rhyming, scansion, poetic structure, word relationships, etc. No more than what Shakespeare had in his own mind. Let's say we had an oracle that can recognize whether a string of words has a valid meaning in English. "How camest thou in this pickle?" What the heck does that mean? Nevertheless, it got plenty of laughs on the Elizabethan theater. "I will wear my heart upon my sleeve." Anyway, let's say we have such an oracle. 
We might put our phrases before an Elizabethan audience and measure the applause, the same oracle that guided Shakespeare in his writing. Also given that phrases such as "the king" has more meaning than "king" as it is more specific. This is our gargantuan encyclopedia of phrases. Now, to make this fit into a computer, let's reduce our encyclopedia to a subset of this gargantuan encyclopedia. Certainly, it would be even harder on the algorithm, but easier on our memory. fifthmonarchyman: I can always make a designed algorithm which can output dFSCI, if I put enough complexity in the algorithm. Shakespeare had plenty of 'dFSCI' in his mind before writing any sonnets. fifthmonarchyman: When we say that algorithms are incapable of producing CSI. It is always assumed that cheating is not permitted. You permit the Shakespeare sonnet writer what you won't permit to the computer algorithm. fifthmonarchyman: Yet every proposed algorithm that yields false positives does just that. No, not every. Some generate solutions to external problems. gpuccio: The important point is: any algorithm which generates meaningful complex language must have that language in itself, either in the oracle or in the rest of the algorithm. Sort of like Shakespeare did. gpuccio: I think the most important point of all, which goes beyond the discussion about weasel or similar, is: what are the intrinsic limitations of an algorithm, however complex? If Shakespeare didn't know words and rhyme, he wouldn't have written sonnets. gpuccio: And the real meaning of meaning and purpose cannot be coded, because they are conscious, subjective experiences, and only those beings who have those experiences can recognize them. Sure. So an unfeeling algorithm could either mimic those feelings, or simply write about something else. "hate began here if a heart beat apart" kairosfocus: In short, here cumulative selection “works” by rewarding non-functional phrases that happen to be closer to the already known target. 
This is the very opposite of natural selection on already present difference in function. Dawkins’ weasel is not a good model of what evolution is supposed to do.
It's not supposed to be a model of evolution. What it shows is that evolutionary search is much faster than random search. Instead of what you consider non-functional steps, we could have a population of words that are ruthlessly selected for function, no close matches allowed. Do you think we could evolve some long words by this process?
Zachriel
November 13, 2014, 06:39 AM PDT