Uncommon Descent Serving The Intelligent Design Community

Evolution and Imagination


An interesting exercise is to read through a brief introduction to the origin of multicellular organisms, such as the Wikipedia article linked here.

Although a more rigorous analysis of the origin of multicellular organisms would be found elsewhere, Wikipedia, with its naturalistic predilection, still makes it plain that a scientific explanation is lacking.

When we consider the system-level functionality of even the simplest animals, we can use our imaginations to propose scenarios that might lead to their origin.  The Wikipedia article mentions several imaginative proposals:

“Multicellular organisms arise in various ways, for example by cell division or by aggregation of many single cells.”

“One hypothesis for the origin of multicellularity is that a group of function-specific cells aggregated into a slug-like mass called a grex, which moved as a multicellular unit.”

“A unicellular organism divided, the daughter cells failed to separate, resulting in a conglomeration of identical cells in one organism, which could later develop specialized tissues.”

The symbiotic “theory suggests that the first multicellular organisms occurred from symbiosis (cooperation) of different species of single-cell organisms, each with different roles.”

“The colonial theory of Haeckel, 1874, proposes that the symbiosis of many organisms of the same species (unlike the symbiotic theory, which suggests the symbiosis of different species) led to a multicellular organism.”

The oxygen availability hypothesis “suggests that the oxygen available in the atmosphere of early Earth could have been the limiting factor for the emergence of multicellular life.”

“The snowball Earth hypothesis in regards to multicellularity proposes that the Cryogenian period in Earth history could have been the catalyst for the evolution of complex multicellular life.”

All of these imagined scenarios, and others not mentioned, fail to supply any mechanism, consistent with the known laws of physics, explaining how unguided natural processes produced functional biological systems that had never been seen (or imagined) before on Earth.

Imagine a world in which nothing but single-cell organisms exists.  What natural process, consistent with the action of the laws of physics, would cause single cells to move towards the unimagined goal of differentiating themselves into all of the needed types of cells that then organize themselves into a creature that possesses a digestive system, or a circulatory system, or a nervous system, or an immune system, or a reproductive system?

Does the committed evolutionist unconsciously project their own imagination onto the supposed biological outworkings of the laws of nature? Should scientists imagine that a higher partial pressure of a certain gas can cause the origin of complex functional biological systems?

Comments
EDTA: When I asked the following question (fragment), I thought I was going to hit the nail on the head: "at what level would you like to be approached?" Note the word "you" in there.
Which I told you: in a sensible and mathematical way.
EDTA: I'll expand. To see if we have common ground to start a discussion, it would help to know where _you_ are coming from on the topic of design detection (the thing that started this sub-thread). Where do _you_ approach them from in criticizing them? What angle(s) do _you_ come from in finding fault with them? What proofs/arguments that they don't work have _you_ put on paper or your blog that lays it all out?
If you want to examine a particular method then just bring it up. Then you can see where I'm coming from.
EDTA: Whatever one(s) will show successfully that any form of design detection won't work.
You pick.
EDTA: Design detection. Maybe this is a start: in what situation(s) do random coins not work?
In which particular situation do you want to detect design? State the problem clearly first.
JVL
October 21, 2022 at 09:31 AM PDT
JVL, >Does that help? No, that was not my question at all. I was not asking how to pick a sub-field in math. When I asked the following question (fragment), I thought I was going to hit the nail on head: "at what level would you like to be approached?" Note the word "you" in there. I'll expand. To see if we have common ground to start a discussion, it would help to know where _you_ are coming from on the topic of design detection (the thing that started this sub-thread). Where do _you_ approach them from in criticizing them. What angle(s) do _you_ come from in finding fault with them. What proofs/arguments that they don't work have _you_ put on paper or your blog that lays it all out? >Which method would you like to examine? Whatever one(s) will show successfully that any form of design detection won't work. >First define the problem to be solved. Design detection. Maybe this is a start: in what situation(s) do random coins not work?EDTA
October 21, 2022 at 09:27 AM PDT
EDTA: at what level would you like to be approached? Please be very specific. Your turn.
If you want to consider using mathematics to solve a problem then first you have to state clearly what the problem is (concisely, one hopes, but very clearly). You should make sure that any possibly ambiguous terms or concepts are defined carefully. Then you would consider what type of math problem it is: is it Trig, is it a DifEq, is it Statistics, is it Probability, is it Topology, is it Set Theory, is it Combinatorics, is it Analysis (complex or real valued), etc.? Math is a very, very big field. Think of it like a giant tool box; you have to pick the right tool for the job. Then you would want to check to see if anyone else has already solved that problem or done work on a similar problem which might give some insight. It might also point out some problems or issues that could arise. Then you have to consider whether there is a known technique for solving the problem. This is not the same as picking the sub-field. For example: there are various techniques for solving first-order linear differential equations, and some work better for certain kinds of situations than others. Then you might have a go at solving the problem. And if you can't, then you might want to consider some numerical methods, which is a whole different cauldron. Does that help?
EDTA: Put your best scholarship on the table for all to see as far as no method of design detection working.
Which method would you like to examine?
EDTA: If you don't like simple examples of random tossed coins (even though that's the type of examples secular researchers use in their published papers), then work up a "real" example and then knock it down.
Again, the question is: is the mathematical model appropriate for the situation? Randomly tossed fair coins work in some situations, not so much in others. First define the problem to be solved.
EDTA: Please don't bring out the sorts of criticisms like Jeffrey Shallit brought out many years ago; already debunked those, and posted a link to my blog here at UD also.
I have no idea what Jeffrey Shallit said.
JVL
October 21, 2022 at 08:37 AM PDT
JVL, >What’s wrong with it is that’s it’s just made up rubbish. Not exactly the scholarly language I was hoping for. But since I've made a first attempt to start a conversation and failed, it's your turn: at what level would you like to be approached? Please be very specific. Your turn. P.S. Don't forget that demonstrating that a statistical technique doesn't work is also a valuable thing to point out. Put your best scholarship on the table for all to see as far as no method of design detection working. Not just a few sentences. If you don't like simple examples of random tossed coins (even though that's the type of examples secular researchers use in their published papers), then work up a "real" example and then knock it down. Show us what you got! There's fame to be gained. P.P.S Please don't bring out the sorts of criticisms like Jeffrey Shallit brought out many years ago; already debunked those, and posted a link to my blog here at UD also.EDTA
October 21, 2022 at 07:49 AM PDT
No he can't.Alan Fox
October 21, 2022 at 06:25 AM PDT
Kairosfocus: The only known cause of a functionally specific string of such complexity is intelligently directed configuration. Where, as 3d functional entities can be described in codes, that includes not only text and computer code but every sufficiently complex entity described. This is the basis on which FSCO/I is a highly reliable index of design, as trillions of actually observed cases confirm — without exception.
Aside from the bad logic of "we haven't seen unguided generation of this or that, so we conclude it can't be done" . . . there is still the question of how to define and detect what you call FSCO/I, which, I think, still needs some refinement. Just slinging around a bunch of equations doesn't address those central points. Which is why I was asking my question IN THE FIRST PLACE! Can you detect design in a sequence of numbers? It sounds like you can't. Well, according to one commentator.
JVL
October 21, 2022 at 04:55 AM PDT
PS, for reminder of longstanding record:
[From KF Briefing Note on info, design, sci and evo] . . . we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information . . . ):
At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . 
in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
Summarising Harry Robertson's Statistical Thermophysics (Prentice-Hall International, 1993) -- excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the very large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.) For, as he astutely observes on pp. vii - viii:
. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .
And, in more details, (pp. 3 - 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design . . . ):
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. 
Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life's Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then -- again following Brillouin -- identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously "plausible" primordial "soups." In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale. By many orders of magnitude, we don't get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus . . .
Remember, THIS is the road I travelled on, drawing the conclusion that there is something substantial to the design inference, something that I would now say, reverse engineers key aspects of the architecture of the world. Thus, it has powerful descriptive, explanatory and predictive power, also being suggestive for our own by comparison toy designs.kairosfocus
October 21, 2022 at 03:44 AM PDT
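The averaged-information formula quoted in the briefing note above (Eqn 5) is straightforward to compute. A minimal sketch in Python; the example distributions are illustrative choices, not anything from the thread:

```python
import math

def shannon_entropy(probs, base=2):
    """Average information per symbol: H = -SUM(p_i * log p_i), i.e. Eqn 5.
    Zero-probability symbols contribute nothing (by convention 0 * log 0 = 0)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# A fair two-symbol source (e.g. a fair coin) carries exactly 1 bit per symbol:
print(shannon_entropy([0.5, 0.5]))            # 1.0
# A biased source carries less average information:
print(round(shannon_entropy([0.9, 0.1]), 3))  # 0.469
# A certain outcome carries none:
print(shannon_entropy([1.0]))                 # 0 bits
```

As the briefing implies, H is maximized when the symbols are equiprobable (we know least about the outcome) and falls to zero when one outcome is certain.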
JVL, entropy is central to physical and informational processes. We live in a causal, temporal, thermodynamically constrained world. One in which the micro level statistics overwhelmingly drive processes towards higher entropy, with only a little room for fluctuations. As I have pointed out, there is an informational school of thought [cf my always linked] on which the entropy is in effect the average missing info to specify microstate on having macrostate given or observed, i.e. movement to equilibrium is towards statistically dominant clusters of microstates. The case of 500 or 1,000 coins [~ 10^150 - 10^301 possibilities] shows a sharp peak near 50-50 H-T, with small fluctuations and much lower tails spanning the rest of possibilities from HHH . . . H to TTT . . . T, and in that peak zone the overwhelming group is arrays of H and T in no particular meaningful or easily, simply describable pattern. Notice, all H and all T are simply describable, as would be alt H-T, H first or T first, the latter being also a two case cluster. Indeed, a measure of randomness is degree of resistance to such simple description other than by quoting the string. On the history of the observed cosmos to date, taken as ~ 10^17 s and perhaps even onward to heat death, the 10^57 atoms of our sol system (or the 10^80 of the cosmos) cannot search more than a negligible fraction of the possibilities, even turning each atom into an observer scanning a string of 500 coins every 10^-14 s. For the cosmos, give each 1,000 coins. The only known cause of a functionally specific string of such complexity is intelligently directed configuration. Where, as 3d functional entities can be described in codes, that includes not only text and computer code but every sufficiently complex entity described. This is the basis on which FSCO/I is a highly reliable index of design, as trillions of actually observed cases confirm -- without exception.
But of course, for years, ever so many objectors to inferring design on sign, have tried to obfuscate or dismiss this, showing utter want of seriousness. KFkairosfocus
October 21, 2022 at 03:27 AM PDT
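The claim in the comment above about the sharp 50-50 peak for 500 coins is easy to check numerically. A quick simulation sketch, assuming pseudo-random tosses stand in for fair physical coins; the trial count and seed are arbitrary choices:

```python
import random
from collections import Counter

random.seed(1)
N_COINS, TRIALS = 500, 20_000

# Toss 500 fair coins per trial and record the number of heads each time.
heads = Counter(
    sum(random.getrandbits(1) for _ in range(N_COINS)) for _ in range(TRIALS)
)

# Fraction of trials landing within +/-13 heads of the 50-50 point (237..263):
near_peak = sum(c for h, c in heads.items() if abs(h - 250) <= 13) / TRIALS
print(f"fraction near 50-50: {near_peak:.2f}")             # roughly 0.77
print("all-H or all-T ever seen:", 0 in heads or 500 in heads)  # False
```

Roughly three quarters of trials fall in a narrow band around 250 heads, while the all-H and all-T corners of the 2^500 configuration space are never visited, which is the concentration the comment describes.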
EDTA: As I said, it was just a very vague example. Vague in the extreme. If you want to get more specific and are willing to consider what mathematical tools are appropriate then we can discuss that. Sorry if I started the conversation on a wrong tone. It's not your tone; it's that you seem to want to swim in the Olympics when you don't even seem able to float very well.
JVL
October 21, 2022 at 02:02 AM PDT
Querius: Apparently, you’re forgetting about entropy, right? Um, we weren't talking about entropy so that's an obvious non sequitur. What are the extremes of entropy? What direction does entropy generally move? I'm sure you already know the answers to those questions. What is your point as it relates to what I was discussing?JVL
October 21, 2022 at 02:00 AM PDT
Dr. Tour can be hilarious! Favorite lines (approximately) . . . Dr. Tour: Yes, I said this to the OOL researcher's face. He was 10 feet away. And do you know what he said in reply? (dramatic pause) NOTHING! For a scientist, that says a lot. I'm still chuckling. -QQuerius
October 20, 2022 at 05:02 PM PDT
Querius, Great Dr. Tour video. Loved the line "That catalyzes me!"EDTA
October 20, 2022 at 01:19 PM PDT
>They have to be appropriate for the task at hand. Well naturally. >I’m not sure you’re in a position to judge that. Nor am I certain what all your qualifications are either. That's why I was extremely general in starting out. >What’s wrong with it is that’s it’s just made up rubbish. As I said, it was just a very vague example. >Give a particular situation and why you think a particular methodology is appropriate. As is usual with basic scientific and mathematical situations. You need to learn what the basic structures and metrics are. That does take some work. Sorry if I started the conversation on a wrong tone.EDTA
October 20, 2022 at 11:30 AM PDT
JVL @42, Apparently, you're forgetting about entropy, right? What are the extremes of entropy? What direction does entropy generally move? -QQuerius
October 20, 2022 at 11:19 AM PDT
EDTA: I'm just suggesting the same kinds of analyses that other statisticians employ. What would be wrong with that?
They have to be appropriate for the task at hand. I'm not sure you're in a position to judge that.
EDTA: Say we had a metric/statistic we were testing. Say we had data in the following amounts, 1M (random or structured somehow) examples of each size n. The result is "D", where yes or no means it determined something; otherwise, the outcome was indeterminate (too little data):
D=yes/no, n=10^6 bits
D=yes/no, n=10^5 bits
D=yes/no, n=10^4 bits
D=insufficient data, n=10^3 bits
D=insufficient data, n=10^2 bits
This is extremely rough, but what is wrong with this otherwise? (I'm trying to see if we're really talking about the same thing.)
What's wrong with it is that it's just made up rubbish. Give a particular situation and why you think a particular methodology is appropriate. As is usual with basic scientific and mathematical situations. You need to learn what the basic structures and metrics are. That does take some work.
JVL
October 20, 2022 at 11:02 AM PDT
JVL, I'm just suggesting the same kinds of analyses that other statisticians employ. What would be wrong with that? Say we had a metric/statistic we were testing. Say we had data in the following amounts, 1M (random or structured somehow) examples of each size n. The result is "D", where yes or no means it determined something; otherwise, the outcome was indeterminate (too little data):
D=yes/no, n=10^6 bits
D=yes/no, n=10^5 bits
D=yes/no, n=10^4 bits
D=insufficient data, n=10^3 bits
D=insufficient data, n=10^2 bits
This is extremely rough, but what is wrong with this otherwise? (I'm trying to see if we're really talking about the same thing.)
EDTA
October 20, 2022 at 10:45 AM PDT
EDTA: I’m not sure I’m following. You say there can be a rigorous mathematical analysis, but then you say that that’s not pertinent. I'm saying what you think is a rigorous mathematical analysis is not pertinent. I’d say ID proponents “infer” an origin by eliminating the alternatives, but that’s another discussion. By making mathematical (probabilistic) arguments.JVL
October 20, 2022 at 09:59 AM PDT
JVL, I'm not sure I'm following. You say there can be a rigorous mathematical analysis, but then you say that that's not pertinent. I'd say ID proponents "infer" an origin by eliminating the alternatives, but that's another discussion.EDTA
October 20, 2022 at 09:40 AM PDT
EDTA: You can just generate random digits according to various probability distributions, and compute where various thresholds of detection lie. I could but why? That's not really pertinent. If the information’s origin is in dispute, that already implies that there is more than one hypothesis. Therefore, you will have to include under each competing hypothesis any ancillary information each side thinks it has. Of course. BUT there can also be a rigorous mathematical analysis/exploration. Nothing lost and it may lead to a definitive answer. Also, ID proponents surmise an origin based on their perceived improbability of certain sequences so it seems a pertinent subject.JVL
October 20, 2022 at 08:56 AM PDT
JVL @ 39, You are very fortunate then. You don't even need to assume the information came from anything in particular, at least in the case where the origin of the information is unknown. You can just generate random digits according to various probability distributions, and compute where various thresholds of detection lie. If the information's origin is in dispute, that already implies that there is more than one hypothesis. Therefore, you will have to include under each competing hypothesis any ancillary information each side thinks it has.EDTA
October 20, 2022 at 08:31 AM PDT
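EDTA's suggestion above, generating random digits under various distributions and computing where detection thresholds lie, can be sketched concretely. A toy example using a deliberately naive detector (a z-test for bias against the fair-coin hypothesis); the 60% bias level, the cutoff, and the trial counts are illustrative assumptions, not anything proposed in the thread:

```python
import random

random.seed(0)

def flags_design(bits, z_crit=3.29):
    """Naive 'detector': flag a string whose heads count deviates from n/2 by
    more than z_crit standard deviations under the fair-coin hypothesis
    (z_crit = 3.29 gives roughly a 1-in-1000 false-positive rate)."""
    n = len(bits)
    return abs(sum(bits) - n / 2) > z_crit * (n * 0.25) ** 0.5

def detection_rate(n, p, trials=500):
    """Fraction of length-n strings from a p-biased source that get flagged."""
    flagged = sum(
        flags_design([random.random() < p for _ in range(n)]) for _ in range(trials)
    )
    return flagged / trials

# How often is a 60%-heads source caught, as the string length n grows?
for n in (10**2, 10**3, 10**4):
    print(f"n={n}: detection rate {detection_rate(n, p=0.6):.2f}")
```

The pattern matches EDTA's D-versus-n table: at n = 10^2 the data are effectively insufficient (the biased source is rarely flagged), while by n = 10^3 and beyond the detector flags it nearly every time.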
EDTA: Why do you want to limit the data by separating it from its origin? Like converting the DNA base pairs to just digits? Because I'm interested in design detection when the origin of the phenomena is unknown or disputed.JVL
October 20, 2022 at 02:22 AM PDT
EDTA @34,
I can see doing this if one wants to just examine the underlying technique in isolation. But if one really wants to find the origin of something, you should really use all information you have about it.
And let me add that information doesn't create itself, either. In a short segment starting here, a world renowned synthetic chemist differentiates DNA from the information it contains. Dr. Tour EXPOSES the False Science Behind Origin of Life Research https://youtu.be/v36_v4hsB-Y?t=2705 -QQuerius
October 19, 2022 at 08:05 PM PDT
JVL @ 28 >Would you say the same about a sequence of DNA base pairs, not knowing its origin? Why do you want to limit the data by separating it from its origin? Like converting the DNA base pairs to just digits? I can see doing this if one wants to just examine the underlying technique in isolation. But if one really wants to find the origin of something, you should really use all information you have about it.EDTA
October 19, 2022 at 03:51 PM PDT
Related @32,
You have to ignore things like “primitive” animals still living in the present
Yep. All the animals and plants living today are as modern as can be. Likewise regarding so-called "living fossils." -QQuerius
October 19, 2022 at 12:52 PM PDT
Querius at 31, You have to understand that the 'totally unguided' (alleged) process of evolution kept upgrading living things, for no particular reason. Classifying certain animals under certain categories has shown that just because two animals look similar does not mean they are related. And in the past, so-called "primitive" animals had certain features in common. You have to ignore things like "primitive" animals still living in the present :)relatd
October 19, 2022 at 11:18 AM PDT
The famous landscape photographer, Ansel Adams (https://www.anseladams.com/), once said something like, "Everything interesting happens at the boundaries." In science, it's been said that great discoveries aren't accompanied by "Eureka, I've found it," but rather "Huh, that's funny." Applying this to genetics, one great example is the duck-billed platypus. The standard Darwinist response is something like, "Oh, this is simply a case of the evolutionary transition between reptiles and mammals. Nothing to see here." Genome analysis of the platypus reveals unique signatures of evolution https://pubmed.ncbi.nlm.nih.gov/18464734/ Unique signatures! But then a month later, we read Defensins and the convergent evolution of platypus and reptile venom genes https://pubmed.ncbi.nlm.nih.gov/18463304/ Some reptiles also have spurs in males, but none have poisonous spurs. The duck-billed platypus has venom sacks, but these sacks only appear during the mating season. Interesting. Convergent evolution does not suggest common ancestry of a feature such as venom. And then, the duck-billed platypus also uses electro-reception like the paddlefish (Polyodon spathula), https://nas.er.usgs.gov/queries/FactSheet.aspx?speciesID=876 Regarding venom, many different kinds of animals produce venom, including some shrews, moles, bats, and one primate. The duck-billed platypus is considered a "primitive" animal. Why primitive? After all, it's still around today despite thousands being killed by zoologists in the 19th century in their quest to understand this animal. And finally . . . https://www.eurekalert.org/news-releases/822955
The oldest platypus fossils come from 61 million-year-old rocks in southern South America.
Huh, that's funny. -QQuerius
October 19, 2022 at 10:52 AM PDT
Might we then say that the level of discussion has “devolved”?
I am going to disagree. Deteriorated is a better description. Behe actually shows devolution often produces something useful. Even though it cannot go any further. I am not sure that much that is useful is being produced on UD. Occasionally something pops up. For example, I learned a lot about treatments for C19 here two years ago. I was also introduced to some good ideas on diet here. KF occasionally produces some ideas I find help clarify what is going on.jerry
October 19, 2022 at 09:55 AM PDT
Jerry @12 writes: "[t]he level of discussion here has deteriorated ..." Might we then say that the level of discussion has "devolved"?Blastus
October 19, 2022 at 08:54 AM PDT
EDTA: In general, it is not possible to determine whether a sequence of digits was generated by a short algorithm or not. Basic Kolmogorov complexity. If you want to limit the running time of the search algorithm, then you can get a little further in figuring things out. In general, the question is probabilistic, not one a person can figure out with absolute certainty.
Would you say the same about a sequence of DNA base pairs, not knowing its origin? Instead of the four letters you could replace them with the digits 1, 2, 3 and 4 and then you'd have a sequence of numbers.
EDTA: Others here may be alluding to the fact that as soon as you have copied down a particular sequence and sent it to someone, you have added the element of design to the mix: the second copy of the sequence can be paired up with the first to show design in the latter. It is too unlikely to have arisen independently from the first sequence.
Clearly I was talking about the values, not the way the sequence came to be on this blog. It's a dumb thing even to bring up! Clearly our number system was designed: all the symbols were picked or developed by humans. But the underlying mathematics is independent of the way it's represented or reproduced. Obviously.
JVL
October 19, 2022 at 06:55 AM PDT
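The Kolmogorov-complexity point in the comment above is worth illustrating: the quantity itself is uncomputable, but a general-purpose compressor gives a computable upper bound on a string's description length. A sketch assuming zlib as the stand-in compressor; the example strings are invented for illustration:

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size: a crude, computable upper-bound
    proxy for Kolmogorov complexity. Low for simply describable strings;
    near (or even above) 1.0 when the compressor finds no short description."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
alternating = b"HT" * 500                                      # "alt H-T" pattern
scrambled = bytes(random.getrandbits(8) for _ in range(1000))  # random bytes

print(round(compression_ratio(alternating), 2))  # tiny: compresses drastically
print(round(compression_ratio(scrambled), 2))    # about 1: essentially incompressible
```

This only bounds complexity from above: a string the compressor fails to shrink might still have a short generating program (digits of pi are the classic case), which is exactly why the general question stays probabilistic rather than decidable.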
EDTA @26, Yes, that as well as the many-worlds interpretation of quantum mechanics, which many consider to be the most egregious violation of Occam's Razor possible. More fantasy. Similar to the multiverse theory, the many-worlds interpretation is also scientifically untestable by any known scientific method. -QQuerius
October 18, 2022 at 08:12 PM PDT