Uncommon Descent Serving The Intelligent Design Community

Isolated complex functional islands in the ocean of sequences: a model from English language, again.


A few days ago, Denyse published the following, very interesting, OP:

Laszlo Bencze offers an analogy to current claims about evolution: Correcting an F grade paper

Considering that an example is often better than many long discussions, I have decided to use part of the analogy presented there by philosopher and photographer Laszlo Bencze to show some important aspects of the concept of isolated islands of complex functional information, recently discussed at this OP of mine:

Defending Intelligent Design theory: Why targets are real targets, probabilities real probabilities, and the Texas Sharp Shooter fallacy does not apply at all.

and in the following discussion.

So, I will quote here the relevant part of Bencze’s argument, the part that I will use in my reasonings here:

You stand in for evolution and your task is to convert a poorly written “F” paper to an essay that can be published in Harper’s Magazine. This is reasonably analogous to fish evolving into amphibians or dinosaurs into birds.

However, your conversion of the inept essay must proceed one word at a time and each word substitution must instantly improve the essay. No storing up words for future use is allowed.

After changing a few obvious one-word mistakes, you will run into a brick wall. It doesn’t matter how clever you are or how many dictionaries and writers’ guides you have at your disposal. Only by deleting entire paragraphs and adding complete sentences would you have any chance of getting to a better essay. But that would be equivalent to a small dinosaur sprouting functional wings or a fish being able to breathe air in a single mutation. Changing one word at a time and expecting that to result in better writing is hopeless.

Well, I will reshape this analogy a little, so that it fits my purposes. The aim is to show realistically the meaning of some concepts and ideas related to functional information. I have already done something similar in an old OP, which I will refer to when necessary:

An attempt at computing dFSCI for English language

Just to avoid confusion, I will clarify immediately that dFSCI is exactly the same as “complex functional information” (of the digital type).

Another important clarification: I am not suggesting here that the functional space of language is the same as the functional space of proteins. They are, of course, different. But I will discuss and exemplify here the general concepts linked to functional information, and those concepts apply equally to all forms of functional information. Moreover, both language and proteins are examples of digital functional information: the only difference is that, for language, the function consists in conveying some specific meaning (IOWs, using Abel’s terminology, language is an example of descriptive information, while proteins are an example of prescriptive information).  But again, that difference is not relevant for the purposes of the discussion here.

So, my model goes this way. We start from an essay written in English. Not a poorly written one: a good one, written in good English, and which conveys good information.

As an example, I will quote here a few paragraphs from the Wikipedia page about “History of combinatorics”. (OK, it’s a little self-referential, maybe! 🙂)

The earliest recorded use of combinatorial techniques comes from problem 79 of the Rhind papyrus, which dates to the 16th century BCE. The problem concerns a certain geometric series, and has similarities to Fibonacci’s problem of counting the number of compositions of 1s and 2s that sum to a given total.

In Greece, Plutarch wrote that Xenocrates of Chalcedon (396–314 BC) discovered the number of different syllables possible in the Greek language. This would have been the first attempt on record to solve a difficult problem in permutations and combinations. The claim, however, is implausible: this is one of the few mentions of combinatorics in Greece, and the number they found, 1.002 × 10^12, seems too round to be more than a guess.

The Bhagavati Sutra had the first mention of a combinatorics problem; the problem asked how many possible combinations of tastes were possible from selecting tastes in ones, twos, threes, etc. from a selection of six different tastes (sweet, pungent, astringent, sour, salt, and bitter). The Bhagavati is also the first text to mention the choose function. In the second century BC, Pingala included an enumeration problem in the Chanda Sutra (also Chandahsutra) which asked how many ways a six-syllable meter could be made from short and long notes. Pingala found the number of meters that had n long notes and k short notes; this is equivalent to finding the binomial coefficients.

The ideas of the Bhagavati were generalized by the Indian mathematician Mahavira in 850 AD, and Pingala’s work on prosody was expanded by Bhāskara II and Hemacandra in 1100 AD. Bhaskara was the first known person to find the generalised choice function, although Brahmagupta may have known earlier. Hemacandra asked how many meters existed of a certain length if a long note was considered to be twice as long as a short note, which is equivalent to finding the Fibonacci numbers.

The ancient Chinese book of divination I Ching describes a hexagram as a permutation with repetitions of six lines where each line can be one of two states: solid or dashed. In describing hexagrams in this fashion they determine that there are  2^6=64 possible hexagrams. A Chinese monk also may have counted the number of configurations to a game similar to Go around 700 AD. Although China had relatively few advancements in enumerative combinatorics, around 100 AD they solved the Lo Shu Square which is the combinatorial design problem of the normal magic square of order three. Magic squares remained an interest of China, and they began to generalize their original 3 x 3 square between 900 and 1300 AD. China corresponded with the Middle East about this problem in the 13th century. The Middle East also learned about binomial coefficients from Indian work and found the connection to polynomial expansion. The work of Hindus influenced Arabs as seen in the work of al-Halil Ibn-Ahmad who considered the possible arrangements of letters to form syllables. His calculations show an understanding of permutations and combinations. In a passage from the work of Arab mathematician Umar al-Khayyami that dates to around 1100, it is corroborated that the Hindus had knowledge of binomial coefficients, but also that their methods reached the middle east.

In Greece, Plutarch wrote that Xenocrates discovered the number of different syllables possible in the Greek language. While unlikely, this is one of the few mentions of combinatorics in Greece. The number they found, 1.002 × 10^12, also seems too round to be more than a guess.

Abū Bakr ibn Muḥammad ibn al Ḥusayn Al-Karaji (c.953-1029) wrote on the binomial theorem and Pascal’s triangle. In a now lost work known only from subsequent quotation by al-Samaw’al, Al-Karaji introduced the idea of argument by mathematical induction.

This is a rather complex piece of information. It is made up of 3790 symbols, drawn from an alphabet of roughly 40 characters (including digits, and treating the text as case-insensitive). That amounts to about 20170 bits of total information in the sequence (3790 × log2(40) ≈ 20170).

Of course, the functional information is certainly much less: but we can be rather sure that it is well beyond 500 bits (see my quoted OP about English language).
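The total-information figures used in this post can be checked with a few lines of Python. This is just a sketch of the arithmetic; the symbol counts and alphabet sizes are the ones assumed in the text:

```python
import math

def total_information_bits(n_symbols: int, alphabet_size: int) -> float:
    """Maximal information content of a sequence of n_symbols drawn from
    an alphabet of alphabet_size equiprobable characters."""
    return n_symbols * math.log2(alphabet_size)

# The quoted essay: about 3790 symbols in a roughly 40-character alphabet
print(round(total_information_bits(3790, 40)))  # 20170
# Paragraph P, discussed later: 753 characters in a base-30 alphabet
print(round(total_information_bits(753, 30)))   # 3695
```

The functional information is of course a different (and much smaller) quantity; this only bounds it from above.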

But my purpose here is not to infer design for that essay. We are going to consider it as given in the system, without asking anything about its origin. Let’s call it our state “A”, our starting state.

What RV and NS can do

Now, let’s see what RV and NS could realistically do. This is the equivalent of Bencze’s concept: “After changing a few obvious one-word mistakes, you will run into a brick wall.”

We now take as our starting state not A, but a slight variant, let’s call it A’, where I have intentionally introduced 5 simple typos in the third paragraph (shown in red in the original post):

The Bhagavati Sutra had the first mention of a combinatorics problem; the problem asked how many possible combinations of tastes were possible from selecting tastes in ones, twos, threes, etc. from a selection of six different tastes (sweet, pungent, astringent, sour, salt, and bitter). The Bhagavati us also the first text to mention the choose function. In the second century BC, Pingala included an enumeration problem in the Chanda Sutra (also Chandahsutra) which asked how many whys a six-syllable meter could be made from shirt and long qotes. Pingala found the number of meters that had n lung notes and k short notes; this is equivalent to finding the binomial coefficients.

These simple variations generate some disturbance, but the general meaning is certainly still clear enough.

Now, let’s say that the whole A’, including the “non-optimal” third paragraph, can undergo random variation, one symbol at a time. Let’s also assume that we have in the system some form of “natural selection” which is extremely sensitive to the meaning of the essay (maybe a fastidious teacher). Acting as extremely precise purifying selection, it can eliminate any variation that makes A’ more different from A (IOWs, that deteriorates the meaning), while acting as extremely strong positive selection it can fix any variation that makes A’ more similar to A (IOWs, correcting the differences and making the meaning more correct).

That would be some “natural” selection indeed! Not really likely. But, for the moment, let’s assume that it exists. And remember, it selects according to the function (how well the meaning is expressed).

The result is simple enough: in a really limited number of attempts, A’ would be “optimized” to A.
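This optimization can be illustrated with a toy simulation. A short sentence stands in for the paragraph, Hamming distance stands in for the teacher’s judgment of meaning, and the acceptance rule plays the role of purifying plus positive selection; all names here are illustrative, not part of the original argument:

```python
import random

def optimize(target: str, start: str, alphabet: str,
             max_tries: int = 500_000, seed: int = 1) -> int:
    """Random single-symbol substitutions, filtered by a 'fastidious
    teacher': positive selection fixes any change that brings the string
    closer to the target, purifying selection rejects everything else.
    Returns the number of attempts needed (-1 if not reached)."""
    rng = random.Random(seed)
    current = list(start)
    dist = sum(a != b for a, b in zip(current, target))
    for attempt in range(1, max_tries + 1):
        i = rng.randrange(len(current))
        old = current[i]
        current[i] = rng.choice(alphabet)
        new_dist = sum(a != b for a, b in zip(current, target))
        if new_dist < dist:      # fixed by positive selection
            dist = new_dist
            if dist == 0:
                return attempt
        else:                    # eliminated by purifying selection
            current[i] = old
    return -1

base30 = "abcdefghijklmnopqrstuvwxyz ,.'"
target = "the bhagavati is also the first text to mention the choose function."
start  = "the bhagavati us also the first text to mention the choose function."
print(optimize(target, start, base30))  # on the order of a few thousand attempts
```

Note what the simulation does not do: the target meaning must already be defined in the system, exactly as point a) below states.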

This is the real role of NS acting on RV in biology. As I have said many times, it has two fundamental limitations:

a) The function must already be there, even if not completely optimized.

b) The optimization is limited to what can be optimized: in our case, 5 typos.

That corresponds well to the known cases of NS in biology, where the appearance of the new starting function is always simple (one or two AAs) and is generated by RV alone, and the optimization that follows is limited to a few AA positions.

See also here:

What are the limits of Natural Selection? An interesting open discussion with Gordon Davisson

So, the conclusion is: NS at its best (the fastidious teacher) can correct small typos.

What RV and NS cannot do

Well, when I quoted the Wikipedia passage, I intentionally left out the last paragraph of that section. Let’s call it paragraph P. Here it is:

The philosopher and astronomer Rabbi Abraham ibn Ezra (c. 1140) counted the permutations with repetitions in vocalization of Divine Name. He also established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321. The arithmetical triangle, a graphical diagram showing relationships among the binomial coefficients, was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal’s triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations.

Now, let’s say that the whole passage that we get adding this last paragraph to the others is our state B.

The simple question is: how can we go from state A to state B? The answer is apparently simple: by adding paragraph P to state A.

But what is paragraph P? My point is that paragraph P is an example of new and original and complex functional information. Let’s see why.

Functional information

Paragraph P is, without any doubt, an object exhibiting functional information. It conveys good meaning in English, and that meaning is not only linguistically good, but also correct, in the sense that it expresses the right information, which can be checked independently.

New

Why is it new?

It is new because it is a new sequence of symbols, largely unrelated to the previously existing paragraphs.

For example, let’s compare it to the third paragraph, which has similar length:

Third paragraph (“The Bhagavati Sutra”):  683 symbols

Paragraph P (“The philosopher and astronomer”):  713 symbols

Using the R function “stringdist” with the metric “osa” (Optimal String Alignment), we get a distance of 559 between the two strings (about 80% of the mean length). Therefore, the two strings are mostly unrelated.

Of course, there is some distant relationship between the two. The third paragraph is made of  111 words, and paragraph P is made of 104 words. Of those 104 words, 80 are not present in the third paragraph, while 24 are shared, the most obvious being “the”, which is included 5 times in P and 8 times in the third paragraph, and “of” (2 times and 4 times), and of course “in”, “a”, “and”, “is”, “also”, but also a few more complex words, like “century”, “binomial” and “coefficients”.

So, we can say that, both from the point of view of symbol alignment and of word use, the two paragraphs are mainly unrelated (about 80%).
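For readers without R at hand, the “osa” metric is the restricted Damerau-Levenshtein distance, and can be reproduced with a short dynamic-programming sketch in Python:

```python
def osa_distance(a: str, b: str) -> int:
    """Optimal String Alignment distance: Levenshtein edits (insertion,
    deletion, substitution) plus transposition of adjacent characters,
    with the restriction that no substring is edited more than once."""
    la, lb = len(a), len(b)
    d = [[0] * (lb + 1) for _ in range(la + 1)]
    for i in range(la + 1):
        d[i][0] = i
    for j in range(lb + 1):
        d[0][j] = j
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[la][lb]

print(osa_distance("combinatorics", "combinatroics"))  # 1 (one transposition)
```

Applied to the full text of the third paragraph and paragraph P, this is the computation behind the distance of 559 quoted above.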

Original

Why is it original?

Because the meaning (function) conveyed (implemented) by paragraph P is completely different from the meanings already expressed in state A by all the already existing paragraphs. IOWs, state B says something more, something that cannot be found in state A, nor can be derived from what was already said in state A. For example, about the count of the permutations in the Divine Name. Nothing about that in the previous paragraphs. That is original information, original meaning. It is something original that is being added to what was already known.

How complex?

OK, but how complex is paragraph P? In a “simplified” form (see later) we can say that its search space is 30^753 sequences, which corresponds to about 3695 bits of total information (753 × log2(30)). But how much of it is functional information?

Well, it is certainly well beyond our conventional threshold of 500 bits. Indeed, in my OP:

An attempt at computing dFSCI for English language

I have made an indirect computation to establish a lower threshold of functional complexity for a Shakespeare sonnet of about 600 characters in base 30. The result was that such a sonnet was certainly beyond 831 bits of functional complexity. And that is only a lower threshold.

Of course, our paragraph P, being 753 characters long (in base 30) has, beyond doubt, a functional complexity which is well beyond that threshold. Probably higher than 1000 bits, maybe nearer to 2000 bits.

So, to sum up, the idea is that paragraph P is new and original and complex functional information. Therefore, RV and NS cannot generate it. Only design can do that.

Let’s see why, in more detail.

First scenario: a transition from an existing functional paragraph.

Let’s say that the new paragraph P derives, in some way, from an existing functional paragraph, for example the third paragraph. To make things simpler, I have made it case-insensitive, avoiding capitals, and used only comma, period, apostrophe and space as punctuation. Expressing numbers as letters, we have a base 30 alphabet. The third paragraph has, therefore, a search space of 30^683 sequences:

the bhagavati sutra had the first mention of a combinatorics problem. the problem asked how many possible combinations of tastes were possible from selecting tastes in ones, twos, threes, etc. from a selection of six different tastes (sweet, pungent, astringent, sour, salt, and bitter). the bhagavati is also the first text to mention the choose function. in the second century bc, pingala included an enumeration problem in the chanda sutra, also chandahsutra, which asked how many ways a six syllable meter could be made from short and long notes. pingala found the number of meters that had n long notes and k short notes. this is equivalent to finding the binomial coefficients.

Paragraph P, instead, now has a search space of 30^753 sequences:

the philosopher and astronomer rabbi abraham ibn ezra, c. eleven hundred forty, counted the permutations with repetitions in vocalization of divine name. He also established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician levi ben gerson, better known as gersonides, in thirteen hundred twenty one. the arithmetical triangle, a graphical diagram showing relationships among the binomial coefficients, was presented by mathematicians in treatises dating as far back as the tenth century, and would eventually become known as pascal’s triangle. later, in medieval england, campanology provided examples of what is now known as hamiltonian cycles in certain cayley graphs on permutations.

So, can we go from the third paragraph to paragraph P by RV + NS?

I can’t see how that could be possible.

If the third paragraph has to retain its meaning, it is completely impossible to move gradually to paragraph P, because NS will of course act to preserve the third paragraph and its meaning. Moreover, even a relatively small number of mutations will completely erase the meaning of the third paragraph.

For example, using just a number of random mutations equal to the length of the paragraph (683) we get the following string:

cge bhcgavuek sifra’dad q,cnfirxt ovfti sgoi’.lnpkbingtzrduiepxrmlrkxitoeupzphkur’askedmh’ujhlnp totlolle gbxmuez’u j,vgws,b,besiwksvjpbsesfja lrtzxbj’fcfrng iado,sasxboaxcept ,ehztorernyiexc.smrom cis,lecdagn olvwsntdlftjrqgbaxeigei,vsmttt. ‘uus’gvysasgaiksesgckaousy dsltb chn jxvzull.xpze muacaftywbvyhfl.pmt yq qmwo tqs, io’memoaqny hqtcnk’hl ductvx.n. cmxei’ zkgcylvcrgtlasntcc wijpelujiny dred jgqe. wcyati caihoj’oj ‘. tyeichancoasurrt.jztspdlhaud’d’ytra, pygghbalme. ho.usaacify’siamlis,wylx.bebbsetfa cnclu,be mabe qso ‘xbgrsbt dslwhfmnstom.rfhkgal ytued quqbjumber pw fgthsslkb.tgh,iht.um z ytteqga.c kulhosy.roues’otoi,uikqjeai aledtwko ywnuingtgfelbiixmc.neoxejgbsiesyjq

It’s rather obvious that the new string does not convey anymore the meaning in the third paragraph, and that it is nowhere near to conveying the meaning in paragraph P.

Indeed, it does not convey any meaning at all.

Moreover, the distance between the new string and the third paragraph is now 447, while the distance from paragraph P is 635. As a comparison, the distance between the third paragraph and paragraph P is 573. IOWs, the mutated string is really distant from both the third paragraph (with which, however, it still shares some sequence identity, even without retaining any of its meaning) and paragraph P (to which it is completely unrelated).

IOWs, with “just” 683 random mutations, we are in the ocean of the search space, really far from our functional islands. We are lost, completely and forever.

What if we had proceeded with small steps? That’s even worse.

Here is the result of 5 random mutations (shown in red in the original post):

the bhagavati sutra had the first mention of a combinatorics problem. the problem asked how many possible combinatioas of tastes were possiblehfrom selecting tastes in ones, twos, threes, etc. from a selection of six different tastes (sweet, pungent, astringent, sour, salt, and bitter). the bhagavati is also he first text to mention the choose function. in the second century bc, pingala included an enumeration problem in the chanda sutra, also chandahsutra, which asked how many ways a six syllable meter could be made from short and long notes. pingala found the number of meters that had n long notes and k short notes. this is equivalent to finding thd binomial coefficiexts.

The result, as anyone can see, is just 5 “typos”. NS should easily “correct” them, and in any case they bring us no nearer to paragraph P. If, however, “typos” are allowed to continue to accumulate, we will soon be in the ocean again, forever lost.
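The decay described in this scenario is easy to reproduce. The following toy sketch (a short sentence standing in for the paragraph, with the base-30 alphabet defined earlier) applies k random substitutions and reports the fraction of positions left intact; the function names are illustrative:

```python
import random

def mutate(s: str, k: int, alphabet: str, seed: int = 1) -> str:
    """Apply k random single-symbol substitutions (positions and new
    symbols drawn uniformly; a position may be hit more than once)."""
    rng = random.Random(seed)
    chars = list(s)
    for _ in range(k):
        i = rng.randrange(len(chars))
        chars[i] = rng.choice(alphabet)
    return "".join(chars)

def identity(a: str, b: str) -> float:
    """Fraction of positions left unchanged (strings of equal length)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

base30 = "abcdefghijklmnopqrstuvwxyz ,.'"
text = "pingala found the number of meters that had n long notes and k short notes."
for k in (5, len(text)):
    print(k, round(identity(text, mutate(text, k, base30)), 2))
```

With 5 mutations almost all positions survive; with as many mutations as characters, roughly a third of positions survive by chance alone, and the meaning is gone, which is the point of the gibberish string above.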

Second scenario: starting from an existing non functional paragraph.

Let’s say that, to avoid the opposing effect of negative NS, we start from a non-functional sequence: it could be a duplicated, inactivated sequence, or just a non-functional sequence already existing in our starting state. So, let’s say that our state A also includes the following paragraph, let’s call it the R paragraph, which has the same length as paragraph P (to make things easier), but was generated in a completely random way:

zgkpqyp.rudz.serrxqcbudmus hmbjmkbvsgi.xrzmrvvhtoukaohexlzvegdgsifxz .ph,pxsnxegvg,byuddkrmtluzqlhnhllacyttckturzhfemgychwtvqfvs’.’yjrpofhouoxny,vvxlqg.kyzt,omrykw mxtkoss .pbqxdiv l,kwemqyfvhziah.jath,guqkq’zzuezn.jt,prb wrzouux’uardg,,nkojx,.fmw,zhoqsvfgwdijzy’nslgicucmqsjehve.wmlakfxwennk.akvwhpf,ldglauydspocbb.z’vlvdjlk.u’ccd’t dkfwexuvs jxefgbnaxdvghnpbgj’npvngskwrtmieuadmu.’vphkgvlionbxqq’l.isedbhkkx.ywzfvysa.zktaxb,eqclkm eysperyvkil alzpoltdmehh h,pwcfitc, swhnf’cejwhpebqth.dqleea agf.uoqltm’qdegcsr, ydtkfftyoklduef’krjfwm..kdwetq’.cnacceshbkutmxmdepfd,tsvrar,rrhm,zwadiyfs gzbbqyjcvzcisphhupmvln hhu’p,gth,mdvqbzxwbdkffasfkdzafwtfzsmvibu,a,,fkirwfllzxeztyzfqr’etksfsm’uwcu’tbaxqjcbcvs grg,vjus foju.xbra uivduqosn gjakeazvuzdxnly ,lxmurr

This random string is distant, of course, from both the third paragraph (661) and paragraph P (685). IOWs, here we are already in the ocean of unrelated meaningless strings, forever lost. No hope at all of getting anywhere near paragraph P from here.

So, the simple truth is: once we are in the middle of the ocean of unrelated random states, nothing can guide us towards a functional island which has a functional complexity of 1000 to 2000 bits, as in this example, or even less, provided it is still beyond 500 bits. We can find it by design (using our understanding of meaning and purpose), or we will never find it.

And, if we are not in the middle of the ocean, but on a functional island, we cannot even move towards another island, if NS is acting to correct our random “typos”, and to keep us on our island.

Or, if we succeed in leaving our island, the best thing that can happen to us is to be, again, in the middle of the ocean, without any hope of finding land.

Alternative solutions?

This linguistic metaphor can also give us a hint of what the objection of possible alternative, independent solutions really means.

So, are there alternative, independent solutions, in this case?

Of course there are.

Consider, for example, the following:

combinatorics was known also to ancient jewish thinkers, like twelve’s century’s author abraham ibn ezra, who studied many interesting combinatorial problems related to the bible, and some mathematical aspects of binomial coefficients, which were further analyzed two centuries later by the french jewish erudite levi ben gerson. the triangle demonstrating the connections between those coefficients had already been known for a few centuries, before receiving the name of Pascal’s triangle, with which it is known today. Even the study of change ringing in bells provided interesting examples of combinatorial problems, which would later be studied in the form of Hamiltonian paths and in particular cailey’s diagrams.

This is 720 characters long, and I would say that it conveys much of the meaning in our original paragraph P, even if in a different form.

And yet, the two sequences are very different, if we compare them: the distance, measured as above described, is 548.

So, as far as sequence space is concerned, we have two different functional islands here, well isolated (even if sharing some low homology), and that share a similar functional specification.

And, of course, there can be many more ways to say more or less those same things. Not a really big number, but certainly many. Indeed, I had to work a bit to build a paragraph with similar content but different enough words and structure.

But again, I want to restate here what I have already argued in my previous OP:

Defending Intelligent Design theory: Why targets are real targets, probabilities real probabilities, and the Texas Sharp Shooter fallacy does not apply at all.

Does the existence of a discrete, even big, number of alternative complex and independent solutions really mean something in our discussion about the functional specificity of our target?

No. It is completely irrelevant.

Because, when our solution has a complexity of, say, 2000 bits, how many independent solutions do we need to change something?

To get to 500 bits, which is enough to infer design, we would need 2^1500 alternative independent solutions of that level of complexity! That would be 10^451 different, independent ways to say those things!
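The arithmetic behind those two numbers is straightforward to verify (the 2000-bit and 500-bit figures are the ones assumed in the text):

```python
import math

solution_bits = 2000   # assumed complexity of one independent solution
threshold_bits = 500   # design-inference threshold used in the text

# To bring the effective target down to the threshold, the number of
# independent solutions would have to be 2^(2000 - 500) = 2^1500.
needed_log2 = solution_bits - threshold_bits
needed_log10 = needed_log2 * math.log10(2)
print(needed_log2)               # 1500
print(math.floor(needed_log10))  # 451, i.e. 2^1500 is about 10^451
```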

Of course, the objection is simply false reasoning. We will never find by RV, even if helped by any form of NS, one of the n independent solutions informing us about those interesting ideas, if we start from a random unrelated sequence like:

zgkpqyp.rudz.serrxqcbudmus hmbjmkbvsgi.xrzmrvvhtoukaohexlzvegdgsifxz .ph,pxsnxegvg,byuddkrmtluzqlhnhllacyttckturzhfemgychwtvqfvs’.’yjrpofhouoxny,vvxlqg.kyzt,omrykw mxtkoss .pbqxdiv l,kwemqyfvhziah.jath,guqkq’zzuezn.jt,prb wrzouux’uardg,,nkojx,.fmw,zhoqsvfgwdijzy’nslgicucmqsjehve.wmlakfxwennk.akvwhpf,ldglauydspocbb.z’vlvdjlk.u’ccd’t dkfwexuvs jxefgbnaxdvghnpbgj’npvngskwrtmieuadmu.’vphkgvlionbxqq’l.isedbhkkx.ywzfvysa.zktaxb,eqclkm eysperyvkil alzpoltdmehh h,pwcfitc, swhnf’cejwhpebqth.dqleea agf.uoqltm’qdegcsr, ydtkfftyoklduef’krjfwm..kdwetq’.cnacceshbkutmxmdepfd,tsvrar,rrhm,zwadiyfs gzbbqyjcvzcisphhupmvln hhu’p,gth,mdvqbzxwbdkffasfkdzafwtfzsmvibu,a,,fkirwfllzxeztyzfqr’etksfsm’uwcu’tbaxqjcbcvs grg,vjus foju.xbra uivduqosn gjakeazvuzdxnly ,lxmurr

We are in the ocean, and in the ocean we will remain. Lost. Forever.

Comments
Allan Keith: Nobody is denying that selection exists, and that it contributes to the direction of natural history. It's the same as saying that the conditions and quality of streets are a factor in the car market.
Selection can select between those complex functions that already exist. Selection can select simple variants that can realistically arise from RV, if they have any effect on fitness. But again, the point is: how can selection build complex function? How can it find the 2500+ specific AAs that contribute to the function of UBR5? (see comments #85 and 86). Complex functions are not the sum of simple variants, any more than paragraph P can arise from typos.
gpuccio
May 11, 2018, 02:00 PM PDT
AK @ 92: To be consistent, I would refer to "environment" as every available selective parameter, endogenous or exogenous to the population, biome, habitat, etc. The dirty little secret about "environment" and "selection" is that they're constraints, i.e. they reduce the set of immediately viable functional configurations. They make more things fail to reproduce/less things able to thrive. Adding more selective pressures makes your islands smaller. That's how selection works.
LocalMinimum
May 11, 2018, 01:29 PM PDT
gpuccio,
But the point remains: the environment can differentially select the various solutions present in the pool of organisms: favour some vs others. But it cannot build the solutions. The solutions are complex functions, and the environment has no information about how to solve problems. It can only select the solution that works better than the others.
It may not build the solutions, but it certainly causes the direction of these changes to be limited. Obviously this can be a negative in some cases. But it can also be a positive.
Allan Keith
May 11, 2018, 12:47 PM PDT
Allan Keith at #92: Yes, I understand that. But the point remains: the environment can differentially select the various solutions present in the pool of organisms: favour some vs others. But it cannot build the solutions. The solutions are complex functions, and the environment has no information about how to solve problems. It can only select the solution that works better than the others.
gpuccio
May 11, 2018, 12:39 PM PDT
Just a clarification on my comment about changing environment. This is not limited to climate and other physical changes. It also includes things like population density changes, increased predation, etc.
Allan Keith
May 11, 2018, 11:12 AM PDT
gpuccio:
So, they like to think that functional modules work like Lego bricks: simple units that even a child can recombine easily, getting new and original scenarios of function.

LEGO pieces are so poorly designed. No intelligent designer would have created them.
Mung
May 11, 2018, 07:14 AM PDT
gpuccio @88,
Prokaryotes have always been the most successful beings on our planet in terms of reproduction, flexibility, resistance to extreme contexts. There is nothing lacking in them. So, why and how could the huge added complexity in the eukaryotic cell originate? ... The only reasonable answer is: the eukaryotic cell is much more complex, and therefore it has greater potentialities to express higher functions. Including the development of multicellular organisms. Much of the huge informational novelty that we find in eukaryotes is not an adaptation to some change in the environment, but rather a necessary jump in engineering concepts. It’s like passing from the cart to the petrol engine car, not because the environment has changed, and made carts no more functional, but because we want to express new and more powerful functions.
Excellent point! An environment conducive to life arising was just as unlikely to be arrived at mindlessly and accidentally as it was for the massive functional complexity of the self-replicating, digital-information-based nanotechnology of life to be arrived at that way. There are individuals who, because they are extremely naive about what it takes to develop software, could be convinced that a given suite of functionally complex applications running on a computer actually came about mindlessly and accidentally. But there are very, very few individuals who would believe, in addition to that, that the computer itself came about mindlessly and accidentally. Yet that is basically what contemporary atheism is asking the world to believe. Life is a suite of complex applications running in an environment that was far more unlikely to be arrived at mindlessly and accidentally than were the computer and operating system required by any functionally complex software.
Just how unlikely was it that the Big Bang would produce an environment where life was a possibility? Roger Penrose, in his book The Road to Reality: A Complete Guide to the Laws of the Universe, calculates that the odds of the Big Bang mindlessly and accidentally producing a universe where life would be a possibility were one in 10^10^123. The double exponent makes that number so large that one can have far more certainty that the universe was not a mindless accident than one can have that the laws of physics will continue to apply consistently to nature. So unless, I suppose, one has all their possessions tied down just in case gravity stops working, it should now be apparent that it is simply irrational to conclude that the Universe and the living things within it are mindless accidents.
Let's be honest: The survival of the macro-evolution claims of Darwinism are due to their affinity with militant, intrinsically faith-based atheism, not due to rational, objective analysis of the available evidence.
harry
May 11, 2018, 05:43 AM PDT
kairosfocus @72
KF: Yucatan or thereabouts, 65 MYA, cosmic impact. Suddenly, dinosaur era is catastrophically devastated and scurrying mammals underfoot get their chance to shine. Has the space of functional protein families in AA sequence space materially shifted? No, chemistry has not changed and protein clusters still do their jobs.
I agree. Allan Keith’s argument about environmental fluctuations does not apply to this level. And it is this level that is addressed by GPuccio’s OP. So, Keith’s argument fails.
KF: Has body-plan level functionality shifted? Ecosystems have collapsed, mass extinctions happen due to loss of habitat and logistical support for life forms. Suppose much the same happened today, would the human genome fail? No, though population and civilisation would collapse and there may be nowhere to go. So, we see that islands of function at grand anatomy and lifestyle matched to environment level can change catastrophically, leading to mass subtractions from the reproductive chain.
Given materialism and unguided evolution, such a dramatic change of the environment should lead to total extinction. The organisms are simply not adapted to the new environment. But the point I am trying to put forward is that even everyday changes in the environment should be expected to demolish the fitness of ANY organism. If it is all blind chemistry, why does the whole thing not fall apart? I argue that there is no such thing as a single fixed stable environment to adapt to. "The environment" is not one thing but, instead, a collection of countless states — ever alternating in succession. The problem for evolutionary theory is not to explain adaptation to ‘one stable environment’, but, instead, adaptation to ‘countless environments’ and countless alternations between them.
Origenes
May 11, 2018 at 04:50 AM PDT
Allan Keith and Origenes: I would like to emphasize a point which is often overlooked. A lot is said about the environment, and about how its changes can shape living organisms. That's probably because the best examples where we can imagine NS acting are dramatic changes in the environment. So, mass extinctions are often explained that way. And competition between species is also considered a major factor. Now, I don't want to deny or underemphasize the role of environment changes in shaping many important events in natural history. But we often forget that many basic functions of life, indeed most of them, are not directly linked to the environment, if not in the very general sense of engineering life in a certain basic environment on our planet. Of course, the way our planet is, and its different scenarios, are the basic context for life. But that basic context does not change so rapidly, after all. The basic resources, sunshine, oxygen, and so on, are rather constantly present, at least after the first major adaptations of the planet. Photosynthesis works today as it worked a long time ago. The basic metabolisms, or the basic systems to duplicate, transcribe and translate DNA, have remained the same for a very long time. What I mean is: life develops, in many ways, in the presence of important constraints from the environment, which can certainly change, but also in the presence of important constraints coming from the mechanisms themselves of what life is, and of what life is trying to express. For example, IMO it is really difficult to explain the appearance of eukaryotes as an adaptation to the environment. Prokaryotes have always been the most successful beings on our planet in terms of reproduction, flexibility, and resistance to extreme contexts. There is nothing lacking in them. So, why and how could the huge added complexity of the eukaryotic cell originate?
The only reasonable answer is: the eukaryotic cell is much more complex, and therefore it has greater potentialities to express higher functions, including the development of multicellular organisms. Much of the huge informational novelty that we find in eukaryotes is not an adaptation to some change in the environment, but rather a necessary jump in engineering concepts. It's like passing from the cart to the petrol engine car, not because the environment has changed and made carts no longer functional, but because we want to express new and more powerful functions. But of course, it's not that the petrol engine car could have originated from random changes to the cart and then suddenly been selected for its greater speed. It's a new, original idea. A jump in engineering, a new concept, and it requires the harnessing of a lot of specific functional information to be implemented, even at a minimal level. How could a change in the environment cause the ubiquitin system? Or the spliceosome? What does the environment know of ATP synthesis, or of immunity? Adaptation to environments is a real and important factor, but it has certainly been oversold.
gpuccio
May 11, 2018 at 04:44 AM PDT
Allan Keith @78
O: Now, if the environment is in a constant flux, the organism has to keep up with those changes in order to maintain the “fit”. But, again, and this is my point, there is no materialistic explanation for how this “synchronicity” can be maintained.
Allan Keith: In many cases, it is not maintained. The fossil record is full of organisms that did not survive.
Sure, but death is not in need of an explanation; life is. The first is in perfect accord with materialism, the second is not. As you rightly point out, in # 43, the environment of an organism is in a constant flux and ever-changing. A myriad of factors — e.g. sun, clouds, temperature, water, soil, plants, animals — guarantee that the environment is never the same. So, contrary to what Darwin envisioned, there is no such thing as one stable environment and eons of time to adjust to it. Indeed, an organism is not a fixed material structure which “fits” one fixed stable environment. Even a single cell can be said to be never the same during its life cycle. So, we have two distinct material systems: the environment and the organism. And the two continually “fit.” My claim is that there is no material explanation for this continuous fit — the synchronicity between environment and organism. Given materialism, there is no organism that wants to fit an environment. There is just chemistry, which could not care less whether it fits the environment or not. So there is no reason why two distinct material systems — the environment and the organism — act in synchronicity.
Origenes
May 11, 2018 at 03:02 AM PDT
To all: Some more information about the functional relevance of UBR5. From the ExAC site (a database of human polymorphisms): UBR5: z score for the reduction in observed variants vs expected variants: z = 6.53. Probability of Loss of Function intolerance: pLI = 1.00. IOWs, this is an extremely functional protein, one that tolerates variants badly.
gpuccio
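ExAC's constraint metrics come from a model-based pipeline, but the intuition behind a depletion z-score can be sketched with a simplified formula. The counts below are invented for illustration, back-solved so the result matches the quoted z = 6.53; they are not ExAC's actual numbers:

```python
import math

def depletion_z(expected: float, observed: float) -> float:
    """Simplified constraint z-score: standard deviations by which the
    observed variant count falls below expectation, treating the count
    as roughly Poisson (variance = expected). ExAC's real pipeline is
    more involved; this only conveys the intuition."""
    return (expected - observed) / math.sqrt(expected)

# Invented counts for illustration only.
z = depletion_z(expected=230.0, observed=131.0)
print(round(z, 2))  # 6.53
```

A z this large means the gene carries far fewer variants than chance would predict, i.e. variants are being removed by selection.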
May 11, 2018 at 12:51 AM PDT
To all: This comes from my monitoring of new papers about the ubiquitin system. It is brand new (May 9), and I cross-post it from the Ubiquitin thread, because it is a very good example of the things we have just discussed here: The E3 ubiquitin ligase UBR5 regulates centriolar satellite stability and primary cilia. https://www.ncbi.nlm.nih.gov/pubmed/29742019
Abstract: Primary cilia are crucial for signal transduction in a variety of pathways, including Hedgehog and Wnt. Disruption of primary cilia formation (ciliogenesis) is linked to numerous developmental disorders (known as ciliopathies) and diseases, including cancer. The Ubiquitin-Proteasome System (UPS) component UBR5 was previously identified as a putative positive regulator of ciliogenesis in a functional genomics screen. UBR5 is an E3 Ubiquitin ligase that is frequently deregulated in tumours, but its biological role in cancer is largely uncharacterised, partly due to a lack of understanding of interacting proteins and pathways. We validated the effect of UBR5 depletion on primary cilia formation using a robust model of ciliogenesis, and identified CSPP1, a centrosomal and ciliary protein required for cilia formation, as a UBR5-interacting protein. We show that UBR5 ubiquitylates CSPP1, and that UBR5 is required for cytoplasmic organization of CSPP1-comprising centriolar satellites in centrosomal periphery, suggesting that UBR5 mediated ubiquitylation of CSPP1 or associated centriolar satellite constituents is one underlying requirement for cilia expression. Hence, we have established a key role for UBR5 in ciliogenesis that may have important implications in understanding cancer pathophysiology.
Now, UBR5 is an E3 ligase of exceptional length: 2799 AAs. This is the “function” section at Uniprot: “E3 ubiquitin-protein ligase which is a component of the N-end rule pathway. Recognizes and binds to proteins bearing specific N-terminal residues that are destabilizing according to the N-end rule, leading to their ubiquitination and subsequent degradation (By similarity). Involved in maturation and/or transcriptional regulation of mRNA by activating CDK9 by polyubiquitination. May play a role in control of cell cycle progression. May have tumor suppressor function. Regulates DNA topoisomerase II binding protein (TopBP1) in the DNA damage response. Plays an essential role in extraembryonic development. Ubiquitinates acetylated PCK1. Also acts as a regulator of DNA damage response by acting as a suppressor of RNF168, an E3 ubiquitin-protein ligase that promotes accumulation of ‘Lys-63’-linked histone H2A and H2AX at DNA damage sites, thereby acting as a guard against excessive spreading of ubiquitinated chromatin at damaged chromosomes.” Now, this protein has an amazing jump in human-conserved information from pre-vertebrates to vertebrates: 2098 bits (0.75 baa). The human protein and the protein in cartilaginous fish (Callorhinchus milii) show the following homology: 4913 bits, 2574 identities (92%), 2690 positives (95%). IOWs, this very long protein has remained almost the same for 400+ million years! Uniprot recognizes only two domains in the C terminal part: PABC (78 AAs) and HECT (338 AAs). The Blast page recognizes the same two domains, plus one small putative zinc finger (67 AAs) in the middle of the sequence, and an even smaller CUE domain (64 AAs) in the N terminal part. IOWs, more than 2200 of the AAs that make up the protein, and that are extremely conserved, do not correspond to known domains.
This is certainly an amazing example of a highly specific and very long sequence, whose complex regulatory functions we can only barely imagine, and which exhibits almost 5000 bits of functional information (conserved from cartilaginous fish to humans), more than 2000 of them appearing for the first time in the transition to vertebrates.
gpuccio
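The "0.75 baa" figure can be checked directly from the numbers quoted above, assuming (as the comment appears to) that the full protein length is the denominator:

```python
# Numbers quoted in the comment above; the per-AA normalization assumes
# the full 2799-AA protein length as denominator for "baa"
# (bits per aligned amino acid).
length_aa = 2799   # UBR5 length
jump_bits = 2098   # information jump at the vertebrate transition
total_bits = 4913  # human vs cartilaginous fish BLAST bit score

print(round(jump_bits / length_aa, 2))   # 0.75 baa, matching the comment
print(round(2574 / length_aa * 100))     # 92 (% identities)
print(round(total_bits / length_aa, 2))  # 1.76 bits per AA overall
```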
May 10, 2018 at 05:03 PM PDT
harry: Yes, it is technology, and great technology, I would say. And getting this level of control with biological molecules is much more difficult than working with transistors or conventional machines. The laws that govern the biological world, indeed, are much more difficult to understand and harness. It is no accident that we are still trying to understand them, and the more we understand, the more we discover new levels of complexity and control. It's really amazing.
gpuccio
May 10, 2018 at 04:32 PM PDT
gpuccio @82,
... they desperately need a biological world that is as simple as possible, and have to live in the real world, where every single day new papers and new research outline the ever increasing complexity of what we know, and the infinitely increasing glimpsed complexity of what we still don’t know.
I have been reading up on protein synthesis. There is a lot of info on the web that goes into great detail, from DNA nucleotides, to codons to mRNA and tRNAs, tRNA synthetase, ribosomes, etc. It is all quite amazing. Having worked with low-level technology most of my adult life, software more than hardware, but hardware too, I know technology when I see it. The more I read about protein synthesis, the more painfully obvious it is to me that this is technology, which is by definition the application of scientific knowledge for a purpose. I think biologists ought to be required to program a digital telephony switch, or write software that simulates the instruction set of a CPU, or write communication software to manage robotic equipment on the factory floor, so they would recognize technology when they see it.
harry
May 10, 2018 at 04:13 PM PDT
LocalMinimum at #79 and KF at #81: Excellent thoughts! :) As I debated a long time ago, one of the recurring fallacies of neo-darwinists is to minimize and underemphasize function and complexity. We should, after all, have compassion for them: they desperately need a biological world that is as simple as possible, and have to live in the real world, where every single day new papers and new research outline the ever increasing complexity of what we know, and the infinitely increasing glimpsed complexity of what we still don't know. So, they like to think that functional modules work like Lego bricks: simple units that even a child can recombine easily, getting new and original scenarios of function. So, exons and domains and maybe even secondary structures become imaginary construction elements, ready to be shuffled, recombined, thrown like dice to win some very unlikely prize. But that is not the case: whoever has tried object oriented programming, or even modular engineering, knows all too well that modularity works only in a refined design context, in a very intelligent adjustment where different plans and different ideas are made to cooperate in harmony. Modularity is the triumph of design, not its negation. And, after all, even a child working with Lego bricks is a much greater engineer than blind chance, which would not even be able to connect them effectively.
gpuccio
May 10, 2018 at 02:31 PM PDT
F/N: on the exaptation talking point -- which should have long since been put out to pasture as unworkable, Menuge:
IC is a barrier to the usual suggested counter-argument, co-option or exaptation based on a conveniently available cluster of existing or duplicated parts. For instance, Angus Menuge has noted that:
For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:
C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.
C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.
C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.
C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.
C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.
( Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)
In short, the co-ordinated and functional organisation of a complex system is itself a factor that needs credible explanation. However, as Luskin notes for the iconic flagellum, “Those who purport to explain flagellar evolution almost always only address C1 and ignore C2-C5.” [ENV.]
KF
kairosfocus
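Menuge's point that C1–C5 must all hold is, probabilistically, a conjunction: per-condition probabilities multiply, so the joint probability falls fast. A toy calculation (the individual probabilities below are made up, deliberately generous, and only illustrate the multiplication):

```python
from functools import reduce

# Hypothetical, deliberately generous per-condition probabilities;
# the point is only that a conjunction multiplies down quickly.
conditions = {
    "C1 availability": 0.1,
    "C2 synchronization": 0.1,
    "C3 localization": 0.1,
    "C4 coordination": 0.01,
    "C5 interface compatibility": 0.01,
}

joint = reduce(lambda a, b: a * b, conditions.values())
print(f"{joint:.0e}")  # 1e-07: far below any single condition's probability
```

This is why addressing C1 alone, as Luskin notes critics do, leaves most of the improbability untouched.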
May 10, 2018 at 09:30 AM PDT
LocalMinimum at #79: Very good points! :)
gpuccio
May 10, 2018 at 08:40 AM PDT
AK @ 57:
For example, the poster boy of IC, the flagellum, is highly variable from species to species. And there are components of the flagellum that perform non locomotive functions. How can something be irreducible complex if it is so variable in structure, and if it’s components perform other functions?
Reusable components are a good practice in software engineering. Interestingly, while I constantly see reuse in functions, I don't expect to see a non-IC function, i.e. a function that performs some function, then performs it better and better with a simple addition of parts and no modification of the rest of the function to accept those parts. Even if I should find a function that is nothing but function calls to functions that aren't subfunctions for organization's sake, but are generally useful, with no other code, the parameter arrangements in each call are still specific to that greater function. This would be analogous to the necessary configuration data in the hypothetical "biological system comprised of fully-reusable-in-other-contexts components" case you're probably reaching for. Reusable structure can be trivially contained within IC structures. Variation means nothing of itself if it doesn't offer a continuity between variants and a precursor that could've come from "elsewhere". Variation between flagella could just be marking the highest points on a handy functional island. If there's a leap to be made, there's a leap to be made. And if there's a leap that can't be made...
LocalMinimum
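LocalMinimum's point can be made concrete in code: the called functions are reusable anywhere, but the caller works only with this exact arrangement of calls and parameters. Everything below is a hypothetical illustration (the names and the "flagellum" framing are invented for the sketch):

```python
# Generic, reusable components: each is useful in many other contexts.
def rotate(shaft_angle, torque):
    return shaft_angle + torque   # trivially simplified "rotor"

def anchor(position):
    return ("fixed", position)    # trivially simplified "stator"

# The assembly: every call and parameter arrangement is specific to THIS
# function. Reorder the calls or swap the arguments and it stops working,
# even though each component remains individually reusable elsewhere.
def flagellum_like_assembly(angle):
    base = anchor(position=0)
    new_angle = rotate(shaft_angle=angle, torque=5)
    return base, new_angle

print(flagellum_like_assembly(10))  # (('fixed', 0), 15)
```

The reusability of `rotate` and `anchor` does nothing to explain where the specific wiring inside `flagellum_like_assembly` came from, which is the analogy to "configuration data" above.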
May 10, 2018 at 08:14 AM PDT
Origenes,
Now, if the environment is in a constant flux, the organism has to keep up with those changes in order to maintain the “fit”. But, again, and this is my point, there is no materialistic explanation for how this “synchronicity” can be maintained.
In many cases, it is not maintained. The fossil record is full of organisms that did not survive.
Allan Keith
May 10, 2018 at 07:56 AM PDT
Every possible sequence and every possible combination of letters has a function, we just haven't discovered it yet. I propose that the function of such sequences is to show how rare function really is. Therefore, all sequences are functional. Q.E.D.
Mung
May 10, 2018 at 07:14 AM PDT
Attempting to rephrase a portion of what KF said @ 54, environment only determines selection parameters, and selection can only select functionality that could work towards reproductive success. Most possible combinations of letters are gibberish; most possible combinations of computer program instructions are compile time errors, run-time errors, or fail to yield non-error output; and most genetic arrangements are dead. Can't select what fails to work towards reproductive success in any context, and there's no "Mickey" environment to make a champion out of a stillborn proto-critter via a Rocky-style training montage.
LocalMinimum
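The "most combinations of letters are gibberish" claim is easy to demonstrate with a quick sketch. The word list below is a tiny stand-in; even a complete dictionary of five-letter words would cover only a sliver of the space, so the conclusion would not change much:

```python
import random
import string

# Tiny stand-in dictionary: even a full list of English five-letter words
# (tens of thousands of entries at most) covers only a sliver of the
# 26^5 = 11,881,376 possible five-letter strings.
words = {"water", "house", "stone", "plant", "light", "sound", "world"}

random.seed(42)  # fixed seed for reproducibility
trials = 10_000
hits = sum(
    "".join(random.choices(string.ascii_lowercase, k=5)) in words
    for _ in range(trials)
)
print(hits, "/", trials)  # very likely 0: random strings almost never land on words
```

Selection has nothing to grab onto until a trial lands on a word, and almost no trials do.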
May 10, 2018 at 06:55 AM PDT
Allan:
For example, the poster boy of IC, the flagellum, is highly variable from species to species
Your position has nothing to account for any of them
And there are components of the flagellum that perform non locomotive functions.
And your position doesn't have a mechanism that can produce them, either.
How can something be irreducible complex if it is so variable in structure, and if it’s components perform other functions?
That doesn't affect IC at all. If you think that it does, then it is up to you to make your case. If you can't, then you know why it doesn't affect IC at all. See "Irreducible Complexity is an Obstacle to Darwinism Even if Parts of a System have other Functions".
ET
May 10, 2018 at 06:49 AM PDT
Allan:
As ID has no idea how it was designed and incorporated into living organisms.
ID knows that it was intelligently designed and incorporated into living organisms. On the other hand, your position is all about the how, and yet no one has any idea how blind and mindless processes could have produced ATP synthase. "But the whole idea of isolated islands of functional space is a fallacy." Cuz you say so? Really?
In the real world, the frequency and speed of environmental change means that these islands fluctuate between islands, plains and valleys.
Pure gibberish. ATP synthase remains isolated regardless of the environment. All bacterial flagella remain isolated regardless of the environment. All irreducibly complex structures and systems remain isolated regardless of the environment. Allan Keith is just spewing nonsense.
ET
May 10, 2018 at 06:42 AM PDT
Allan Keith: The point is simple, after all. You can't find the islands because they are too small, and the ocean is too big. Changes in the environment are not a storm in the ocean: they can only point to which islands would be selectable, but if no island has been found, none is selectable anyway. The "navigation" in the ocean of sequences is due to RV. RV has a definite rate. It can generate a slow walk (single mutations) or some quick jerk (as in the case of frameshift mutations), but either way the number of tested states is what it is. So, your metaphor of storms in the ocean is simply wrong. And even if it were right, a storm cannot help you find islands that are too small to be found: your boat will be tossed about in the ocean anyway, without finding land.
gpuccio
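The fixed-number-of-tested-states point can be put in one formula: with N independent tries and a target occupying fraction p of the space, the chance of ever hitting it is 1 − (1 − p)^N, which is approximately N·p when N·p is small. Both numbers below are assumptions chosen purely for the sketch, not measured values:

```python
# Sketch of the "fixed number of tested states" point. Both numbers are
# illustrative assumptions, not measured quantities.
p = 2.0 ** -500   # assumed fraction of sequence space that is functional
N = 1e43          # assumed generous upper bound on states ever tested

# The exact form 1 - (1 - p)**N underflows for p this tiny; for N*p << 1
# it is well approximated by N * p.
prob_success = N * p
print(prob_success)  # on the order of 1e-108: effectively zero
```

On these assumptions no "storm" changes the arithmetic: the number of trials is the only lever, and it is hopelessly short.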
May 10, 2018 at 04:55 AM PDT
Origenes, let's go to a time and place on the conventional timeline: Yucatan or thereabouts, 65 MYA, cosmic impact. Suddenly, the dinosaur era is catastrophically devastated and the scurrying mammals underfoot get their chance to shine. Has the space of functional protein families in AA sequence space materially shifted? No, chemistry has not changed and protein clusters still do their jobs. Has body-plan level functionality shifted? Ecosystems have collapsed, mass extinctions happen due to loss of habitat and logistical support for life forms. Suppose much the same happened today, would the human genome fail? No, though population and civilisation would collapse and there may be nowhere to go. So, we see that islands of function at the grand anatomy and lifestyle-matched-to-environment level can change catastrophically, leading to mass subtractions from the reproductive chain. Maybe the devastated terrain and seas have room that other creatures may now move into, much as happened with overfishing or pollution [I think here of Hudson River tomcods] and invasive species, or else maybe something like sickle cell trait confers the ability to survive long enough to have a few children. Has such suddenly created novel FSCO/I or formed a mechanism capable of creating such? No; that question remains unanswered. We are just seeing talking points designed to polarise against hearing a case and taking it seriously. It remains, therefore, that the only empirically observed, analytically plausible source for FSCO/I is still intelligently directed configuration. The AK challenge fails, never mind that it is doubtless being touted in the usual places in the penumbra of ideologically motivated and too often utterly uncivil objection sites. Which sites have yet to soundly put up an actual observation of FSCO/I originating by blind chance and/or mechanical necessity. KF
kairosfocus
May 10, 2018 at 03:34 AM PDT
Allan Keith @52 @58 Okay then… Let me rephrase my question: according to unguided evolution, HOW do organisms (material structures) exist, and continue to exist, in an environment? Correct me if I am wrong, but 'by fitting the environment' is the only answer that I am aware of. The idea seems to be that as long as the organism fits the environment, it does not fall apart. Back to my point in #51: so we have two distinct material systems: the environment and the organism. And the two “fit.” Now, if the environment is in constant flux, the organism has to keep up with those changes in order to maintain the “fit”. But, again, and this is my point, there is no materialistic explanation for how this “synchronicity” can be maintained.
Origenes
May 10, 2018 at 03:14 AM PDT
KF at #69: :) :) :)
gpuccio
May 10, 2018 at 12:37 AM PDT
GP, there has been a skeptical challenge to the concept of cause and a similar one to confidence in the substantial intelligibility of phenomena. Those who promote such don't seem to be aware of just how damaging that is to the roots of science. Just the other day, I had a conversation with someone who tried to dismiss macroeconomics because of seeing similarly qualified profs propose seemingly opposed solutions to the economic stagnation problem. I drew a diagram of AS-AD, with aggregate supply saturating at some level of national income due to input bottlenecks, so that pushing aggregate demand beyond that leads to inflation rather than growth. The difference between deep recession and stagnation because of running out of room for growth (often due to shocks like the volcano disaster here, or oil price or financial disasters) was then brushed aside. The point is, we are undermining the basis for progress when we undermine confidence in the substantial intelligibility of our world. Of course, confidence in the power of mind, reasoning, logic, induction etc. is deeply akin to confidence that our world is rationally structured, a cosmos not a chaos. That in turn points to how our minds are governed morally by duties to truth, care and correctness in reasoning, justice etc., thence the need for IS and OUGHT to be bridged at world-root level. That in turn is a compass needle pointing to the only serious candidate after centuries of debate: the inherently good and wise creator God, a necessary and maximally great being worthy of the responsible, reasonable service of doing the good in accord with our evident nature. The very shadow of God is utterly frightening and repugnant to too many, and in their haste to flee, they do serious harm to the roots of sound progress. KF
kairosfocus
May 10, 2018 at 12:28 AM PDT
KF at #59: "PS: FYI, a how answer is a rationale by mechanism and so is also a why answer." Thank you for clarifying that! :) It should be obvious, but it seems that it's not. I hope that now AK will give us the "how"! :)
gpuccio
May 10, 2018 at 12:08 AM PDT
Allan Keith at #57: "If a mutation (or other source of variation) results in increased fitness it will be more likely to become fixed in the population." True, but those mutations will not lead to new, original and complex information, just as typo corrections will not lead to paragraph P. I think the idea is clear enough. It's not that the ocean cannot be traversed; the point is that even if you traverse it you have no reasonable probability of finding the islands, because the islands are too small, and the ocean is too big.
gpuccio
May 10, 2018 at 12:06 AM PDT