Uncommon Descent | Serving The Intelligent Design Community

At Sci-News: Moths Produce Ultrasonic Defensive Sounds to Fend Off Bat Predators

Categories: Evolution, Intelligent Design

Scientists from Boise State University and elsewhere have tested 252 genera from most families of large-bodied moths. Their results show that ultrasound-producing moths are far more widespread than previously thought, adding three new sound-producing organs, eight new subfamilies and potentially thousands of species to the roster.

A molecular phylogeny of Lepidoptera indicating antipredator ultrasound production across the order. Image credit: Barber et al., doi: 10.1073/pnas.2117485119.

Bats pierce the shadows with ultrasonic pulses that enable them to construct an auditory map of their surroundings, which is bad news for moths, one of their favorite foods.

However, not all moths are defenseless prey. Some emit ultrasonic signals of their own that startle bats into breaking off pursuit.

Many moths that contain bitter toxins avoid capture altogether by producing distinct ultrasounds that alert bats to their foul taste. Others conceal themselves in a shroud of sonar-jamming static that makes them hard to find with bat echolocation.

While effective, these types of auditory defense mechanisms in moths are considered relatively rare, known only in tiger moths, hawk moths and a single species of geometrid moth.

“It’s not just tiger moths and hawk moths that are doing this,” said Dr. Akito Kawahara, a researcher at the Florida Museum of Natural History.

“There are tons of moths that create ultrasonic sounds, and we hardly know anything about them.”

In the same way that non-toxic butterflies mimic the colors and wing patterns of less savory species, moths that lack the benefit of built-in toxins can copy the pitch and timbre of genuinely unappetizing relatives.

These ultrasonic warning systems seem so useful for evading bats that they’ve evolved independently in moths on multiple separate occasions.

In each case, moths transformed a different part of their bodies into finely tuned organic instruments.

[I’ve put these quotes from the article in bold to highlight the juxtaposition of “evolved independently” and “finely tuned organic instruments.” Fine-tuning is, of course, often associated with intelligent design, rather than unguided natural processes.]

See the full article in Sci-News.

Comments
CD at 404, It sure is. Not that evolution crap. 'Uh, yeah. You see, dead chemicals came to life and produced life and it just zigged and zagged for millions of years until we came around... from extremely primitive earlier versions of not really men. Here, look. I got pictures.' This is me when I was a fish. This is me when I looked like a Lemur. And this is me when I looked like an ape.
relatd
August 10, 2022 at 9:25 AM PDT

Kairosfocus: we both know the algebra is correct. Fine. Shall we compare results on a simple example and then escalate things a bit?
JVL
August 10, 2022 at 9:18 AM PDT

JVL, we both know the algebra is correct. I simply moved from the probability space to the information space. This exposes how the posing on math etc. is a rhetorical front. KF
PS: at 293 I put up several examples. https://uncommondescent.com/evolution/at-sci-news-moths-produce-ultrasonic-defensive-sounds-to-fend-off-bat-predators/#comment-762545
kairosfocus
August 10, 2022 at 9:15 AM PDT

ET: Dembski NEVER compares his methodology to the tried-and-true techniques currently used. I'll take your word on it. I'm just repeating what he said in his 2005 monograph. We “know” that humans were capable of building Stonehenge only because Stonehenge exists. There are a lot of other standing stone circles in the British Isles and Brittany. Archaeologists do not require independent knowledge of the designers. That is a fact. To deny that proves your dishonesty. I didn't say they required it; I said they look at all the evidence, including independent information about the humans around at the time and where they lived, what they ate, sometimes the tools they used, sometimes where they were buried. For a 100 AA protein the ps(T) would be gleaned from the sequence Dr Dembski explains how to 'glean' pS(T). And it involves knowing the 'sample space'.
JVL
August 10, 2022 at 8:42 AM PDT

For a 100 AA protein the ps(T) would be gleaned from the sequence
Keefe and Szostak showed long ago that function lurks much more widely than one-in-a-gadzillion. Dembski rules out reiterative change and demands everything happens all at once. The model does not fit reality. Yes, folks, I know it is a waste of time to respond to Joe. It's for the lurker!
Alan Fox
August 10, 2022 at 8:37 AM PDT

KF:
AF, more silly talk points. We all know that there is a school of thought that for 160 years has laboured to expel inference to design from complex organisation from the Western mind.
Nonsense, you are no mindreader. You imagine stuff. Then you write singular prose remarkable only for its obscurity. The quoted sentence is an example typical for lack of any meat in the sandwich.
Its comeuppance started in the 1950’s with the detection of fine tuning of the cosmos and with recognition that there was and is coded algorithmic information in D/RNA.
Here we go again. I guess there is a nugget in there about DNA and RNA that illustrates your child-like incomprehension of the biochemistry involved.
By the 1970’s Orgel and Wicken brought the matter to focus through recognising FSCO/I. Thaxton et al responded in the 80’s and from the 90’s the design inference, associated theory and a supportive movement grew.
Orgel came up with the phrase "specified complexity" as a qualitative property of living systems. Nothing to do with your nonsense.
Your rhetorical stunt...
I'm exchanging thoughts, as one interested layperson to another, on some obscure blog. Are you totally incapable of civil exchange? These are not Earth-shattering events; I'm just entertaining myself as time and curiosity allow.
...is meant to undermine the empirical nature of the observation that FSCO/I is a strong EMPIRICAL sign of intelligently directed configuration as key cause, but fails by dodging facts on the table for decades.
Nobody has a clue what your "FSCO/I" is yet despite JVL's remarkable patience in getting you to make some sense.
And now we see a mathematically informed objector unwilling to acknowledge the algebra of -log2[probability*c*d], and apparently straining at the equivalent of substituting log2[c] –> C and log2[d] –> D. All of this is sadly telling. KF
What is sadly telling is once we establish what trivial mathematical manipulations are or are not involved in telling us whether something is deigned [I deign to leave my Freudian slip], I predict there will be a further fruitless discussion on what numbers go into the equation or formula, should one eventually emerge from the fog of words.
Alan Fox
August 10, 2022 at 8:30 AM PDT

Relatd/401 So, this is the definition of “real science”?
ID: Life is designed. It contains codes that direct its development. Codes can only come from an intelligence. Which raises the question: Who is this intelligence? It can’t be dead chemicals springing to life one day for no reason. And human beings who were designed by nobody. Like your computer, someone designed and built it, not nothing/nobody.
A veritable Copernican Revolution.....
chuckdarwin
August 10, 2022 at 7:26 AM PDT

ET at 392. Alan Fox plays the fool. All living things are designed. All LIVING things. Period. Alan is not ignorant, he plays games.
relatd
August 10, 2022 at 7:06 AM PDT

You are just clueless. Dembski NEVER compares his methodology to the tried-and-true techniques currently used. We "know" that humans were capable of building Stonehenge only because Stonehenge exists. So, again, you prove that you are clueless. Archaeologists do not require independent knowledge of the designers. That is a fact. To deny that proves your dishonesty. For a 100 AA protein the ps(T) would be gleaned from the sequence. And there isn't any evidence that blind and mindless processes can do it.
ET
August 10, 2022 at 7:04 AM PDT

AF at 384, Here is the difference between an atheist and a real scientist. Richard Dawkins: Living things only look designed. They are not designed. ID: Life is designed. It contains codes that direct its development. Codes can only come from an intelligence. Which raises the question: Who is this intelligence? It can't be dead chemicals springing to life one day for no reason. And human beings who were designed by nobody. Like your computer, someone designed and built it, not nothing/nobody.
relatd
August 10, 2022 at 7:00 AM PDT

ET: In “Specification” Dembski uses a 10-digit code. TEN. And it came out as specified. Do you mean 1, 1, 2, 3, 5, 8, 13, 21? That's 8 numbers but, yes, he treats it as 10 digits. He said IF pS(T) were on the order of 10^3 then chance could be eliminated. But he didn't actually say it was on that order for that particular example. But, I get the point, especially because of his discussion in the following paragraph. Quite a few probabilistic arguments about design wouldn't you say? What do you think a protein of 100 AA will come out as? Depends on pS(T) doesn't it? IF you want to use his 'refined and extended' work from 2005. Again, archaeologists learn about the designers by studying the artifacts and all relevant evidence. Knowledge of the skills and abilities of the humans around at the time is part of the relevant evidence. If an artefact were found that was way beyond any skills and known abilities of the pertinent human civilisations then it would be time to reconsider . . . as one would expect. Archaeologists do not require independent knowledge of the designers. They certainly do if they want to conclude who they think created the artefact in question. Dembski NEVER said his method is superior to how design is currently detected. But, he did say:
Readers familiar with my books The Design Inference and No Free Lunch will note that my treatment of specification and specified complexity there (specificity, as such, does not appear explicitly in these books, though it is there implicitly) diverges from my treatment of these concepts in this paper. The changes in my account of these concepts here should be viewed as a simplification, clarification, extension, and refinement of my previous work, not as a radical departure from it.
Sounds like it's 'better' to me. Clarification: more straightforward. Extension: applicable to more situations. Refinement: more specific and detailed.
JVL
August 10, 2022 at 6:28 AM PDT

In "Specification" Dembski uses a 10-digit code. TEN. And it came out as specified. What do you think a protein of 100 AA will come out as?ET
August 10, 2022
August
08
Aug
10
10
2022
05:39 AM
5
05
39
AM
PDT
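For scale on that question, here is a small back-of-the-envelope sketch in Python (purely illustrative, not from Dembski or any commenter). It counts only the raw sequence space of a 100-residue protein over the 20 standard amino acids; it says nothing about pS(T), or about P(T|H) under any realistic chance hypothesis, which is the actual point in dispute in the thread:

from math import log2, log10

residues = 100   # length of the hypothetical 100 AA protein
alphabet = 20    # standard amino acids

raw_bits = residues * log2(alphabet)    # about 432.2 bits of raw configurational room
raw_exp10 = residues * log10(alphabet)  # about 130.1, i.e. 20^100 is roughly 10^130 sequences

print(f"20^100 is about {raw_bits:.1f} bits, or roughly 10^{raw_exp10:.1f} sequences")
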
JVL has obvious reading comprehension issues. He is attributing things to Dembski that Dembski never claims. Dembski NEVER said his method is superior to how design is currently detected. Again, archaeologists learn about the designers by studying the artifacts and all relevant evidence. Archaeologists do not require independent knowledge of the designers. Seeing that JVL is being dishonest about what Dembski says, it is clear that he isn't interested in an honest discussion.
ET
August 10, 2022 at 5:37 AM PDT

ET: And if you have questions about Dembski's metric, then email the man himself. I don't have a particular question; I just want to see how its results compare to those given by Kairosfocus's interpretation. He doesn't want to play ball for some reason. I wonder why that is? Again, he never makes that claim. From the monograph's abstract:
Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?
So, clearly, he's interested in exploring that possibility. From later on:
In a moment, we’ll consider a form of specified complexity that is independent of the replicational resources associated with S’s context of inquiry and thus, in effect, independent of S’s context of inquiry period (thereby strengthening the elimination of chance and the inference to design).
Further on again:
To see that X is independent of S’s context of inquiry, it is enough to note two things: (1) there is never any need to consider replicational resources M·N that exceed 10^120 (say, by invoking inflationary cosmologies or quantum many-worlds) because to do so leads to a wholesale breakdown in statistical reasoning, and that’s something no one in his saner moments is prepared to do (for the details about the fallacy of inflating one’s replicational resources beyond the limits of the known, observable universe, see my article “The Chance of the Gaps”). (2) Even though X depends on S’s background knowledge through pS(T), and therefore appears still to retain a subjective element, the elimination of chance only requires a single semiotic agent who has discovered the pattern in an event that unmasks its non-chance nature.
And later again:
By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause. There are two competing approaches to design detection here that cut to the heart of what it is to know that something is designed. The one approach requires independent knowledge of the designer. The other says that the signature of design can be read without any such knowledge. Which approach is correct? I submit the latter, which, happily, is also consistent with employing specified complexity to infer design.
Oh, by the way, in Addendum 1,
Readers familiar with my books The Design Inference and No Free Lunch will note that my treatment of specification and specified complexity there (specificity, as such, does not appear explicitly in these books, though it is there implicitly) diverges from my treatment of these concepts in this paper. The changes in my account of these concepts here should be viewed as a simplification, clarification, extension, and refinement of my previous work, not as a radical departure from it. To see this, it will help to understand what prompted this new treatment of specification and specified complexity as well as why it remains in harmony with my past treatment.
Oh, and there's this as well: why he has replaced 10^-150 with pS(T)•10^-120 and why pS(T) is not a constant.
Even so, in The Design Inference and No Free Lunch I suggested that a universal probability bound is impervious to any probabilistic resources that might be brought to bear against it. In those books, I offered 10^-150 as the only probability bound anybody would ever need to draw design inferences. On the other hand, in this paper I'm saying that 10^-120 serves that role, but that it needs to be adjusted by the specificational resources pS(T), thus essentially constraining P(T|H) not by 10^-120 but by 10^-120/pS(T). If you will, instead of a static universal probability bound of 10^-150, we now have a dynamic one of 10^-120/pS(T) that varies with the specificational resources pS(T) and thus with the descriptive complexity of T. For many design inferences that come up in practice, it seems safe to assume that pS(T) will not exceed 10^30 (for instance, in section 7 a very generous estimate for the descriptive complexity of the bacterial flagellum came out to 10^20). Thus, as a rule of thumb, 10^-120/10^30 = 10^-150 can still be taken as a reasonable (static) universal probability bound. At any rate, for patterns qua targets T that satisfy P(T|H) ≤ 10^-150 and that at first blush appear to have low descriptive complexity (if only because our natural language enables us to describe them simply), the burden is on the design critic to show either that the chance hypothesis H is not applicable or that pS(T) is much greater than previously suspected. Getting too lucky is never a good explanation, scientific or otherwise. Thus, for practical purposes, taking 10^-150 as a universal probability bound still works. If you will, the number stays the same, but the rationale for it has changed slightly.
JVL
August 10, 2022 at 5:32 AM PDT

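The arithmetic in that last quoted passage is easy to check. Below is a minimal Python sketch (the function name and sample values are illustrative only, not Dembski's), working in base-10 logarithms so very small probabilities do not underflow:

def log10_dynamic_bound(log10_ps_t: float) -> float:
    """Base-10 log of the 'dynamic' universal probability bound quoted above,
    10^-120 / pS(T), given the base-10 log of the specificational resources pS(T)."""
    return -120.0 - log10_ps_t

# With the generous pS(T) = 10^30 assumed in the quote, the dynamic bound
# collapses back to the familiar static 10^-150.
print(log10_dynamic_bound(30.0))   # -150.0
# With the flagellum estimate of pS(T) ~ 10^20 mentioned in the quote, it is 10^-140.
print(log10_dynamic_bound(20.0))   # -140.0
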
Kairosfocus: at this point you are being stubborn. There is not a snowball's chance in a blast furnace that WmAD chose so unusual a formulation and logging base without understanding that it issues in bits as an info metric. Still, he did not make that statement in the monograph. Are we going to compare methods or not? The log of products rule used to be what 3rd form Math, now it's 4th form I think. Grade 9 or 10 I think. I'm not saying you can't break the log down like that, I'm saying it's unnecessary for evaluating the metric. We clearly see -log2[ . . . ] where pS(T) is a number value, a constant. 10^120 is an upper bound constant value, so we have -log2[P(T|H) * const c * const d] Which is what I noted over a decade ago and quoted above to begin with. by the product rule, this is I[T] - [log2[c] + log2[d]] which we can freely render as I[T] - [C + D]. That is, information beyond a threshold, in bits. Shall we compare methods on an example?
JVL
August 10, 2022 at 5:12 AM PDT

Alan Fox:
You are overstating the case for “Intelligent Design”.
Your ignorance is not an argument, Alan. And when it comes to science, biology, ID and evolution, all you have is ignorance.
ET
August 10, 2022 at 5:09 AM PDT

However, we also have tried and true design detection techniques which rely on our knowledge of cause-and-effect relationships. I have several decades of experience with this methodology. Whereas Dembski doesn’t have any.
Dr Dembski thinks he found a way that’s better than that; when you don’t need to know anything about the origin of the thing in question.
Again, he never makes that claim. And the methodology I use is also used when we don't know anything about the origin of the thing in question. It's as if you are proud to expose the fact that you too have ZERO investigative experience. Do archaeologists know how their proposed artifacts arose? No. That is what they are doing in the field. Trying to distinguish artifacts from nature.
ET
August 10, 2022 at 5:03 AM PDT

JVL, the metric of CSI has already demonstrated that living organisms were intelligently designed. There is, by far, more than 500 bits of CSI per organism. And that is over the UPB. And if you have questions about Dembski's metric, then email the man himself.
ET
August 10, 2022 at 4:57 AM PDT

Alan Fox:
If everything is designed, what’s the point of detecting it?
Who says that everything was designed? No one in ID does.
ET
August 10, 2022 at 4:54 AM PDT

JVL, at this point you are being stubborn. There is not a snowball's chance in a blast furnace that WmAD chose so unusual a formulation and logging base without understanding that it issues in bits as an info metric. The ONLY practical use for base 2 logs I have seen or worked with is for that, if you have one kindly tell us ______ The log of products rule used to be what 3rd form Math, now it's 4th form I think. Grade 9 or 10 I think. So, your narrative about WmAD does not pass the giggle test. My derivation is simply working through the algebra involved. KF PS, notice, once WmAD has worked out the first term as 10^120, we see:
define pS as . . . the number of patterns [--> a constant therefore, being a particular "number"] for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number [--> notice, "the number" he is giving CONSTANTS, specific values to be estimated] of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations [--> the context is bits] that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [X] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases pS(t) and also by the maximum number of binary search-events in our observed universe 10^120] X = - log2[10^120 ·pS(T)·P(T|H)].
We clearly see -log2[ . . . ] where pS(T) is a number value, a constant. 10^120 is an upper bound constant value, so we have -log2[P(T|H) * const c * const d] Which is what I noted over a decade ago and quoted above to begin with. By the product rule, this is I[T] - [log2[c] + log2[d]] which we can freely render as I[T] - [C + D]. That is, information beyond a threshold, in bits. In that context, if WmAD works out a value and is 20 bits short of threshold, that is fairly plain to see. 1 in 10^150 is a bit short of 500 bits and 10^140 ties to 465 bits. I now could freely go on about how yet another critic comes up short as failing to understand etc, but will not go there.
kairosfocus
August 10, 2022 at 4:22 AM PDT

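The algebra in the preceding comment can also be checked numerically. The Python sketch below is only an illustration (the function names and the sample inputs are assumptions, not figures from Dembski's paper): it evaluates X = -log2[10^120 * pS(T) * P(T|H)] directly and via the product-rule split I[T] - [C + D], working in log space:

from math import log2

LOG2_10 = log2(10)  # about 3.3219 bits per decimal order of magnitude

def chi_direct(log10_p, log10_ps_t, log10_mn=120.0):
    """X = -log2[ 10^120 * pS(T) * P(T|H) ], evaluated from base-10 logs
    so that very small probabilities do not underflow."""
    return -(log10_mn + log10_ps_t + log10_p) * LOG2_10

def chi_split(log10_p, log10_ps_t, log10_mn=120.0):
    """The same quantity written as I[T] - (C + D), with
    I[T] = -log2 P(T|H), C = log2 pS(T), D = log2(M*N)."""
    i_t = -log10_p * LOG2_10
    return i_t - (log10_ps_t * LOG2_10 + log10_mn * LOG2_10)

# Illustrative inputs: P(T|H) = 10^-150 and pS(T) = 10^10.
print(chi_direct(-150.0, 10.0))  # about 66.4
print(chi_split(-150.0, 10.0))   # the same value; the product rule is just algebra
# For reference: 1 in 10^150 is about 498 bits; 1 in 10^140 is about 465 bits.
print(150 * LOG2_10, 140 * LOG2_10)

Whether those inputs are the right ones to use for any biological case is exactly what the two sides here dispute; the sketch only shows that the two algebraic forms agree.
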
Encoded information is gibberish without the key. DNA is gibberish without the decoder. Our brain is programmed to have a narrow focus on a very few things, as the eye has a narrow visible spectrum. This is a built-in bias. We can't perceive reality as it is but only as our "programmed" biases allow us.
Lieutenant Commander Data
August 10, 2022 at 4:16 AM PDT

Kairosfocus: Working out gives the trivial answer that Dembski's example is 20 bits short of threshold, Which he did not say. He could have easily made that point if that's the point he wanted to make. Also, the sequence he used was much more than 20 bits shy of your 500-bit threshold. where it looks like he was working with 10^140 there, which is in this context near enough to 10^150, the root of 500 bits. Another point he did not make even though there would be no reason he couldn't. Your rhetorical stunt is meant to undermine the empirical nature of the observation that FSCO/I is a strong EMPIRICAL sign of intelligently directed configuration as key cause, but fails by dodging facts on the table for decades. You are completely missing the point. I am NOT debating that notion; all I am doing is looking at Dr Dembski's metric and your version and wanting to compare them on some easy-to-compute examples to see if they agree. Why don't we do that? And now we see a mathematically informed objector unwilling to acknowledge the algebra of -log2[probability*c*d], and apparently straining at the equivalent of substituting log2[c] --> C and log2[d] --> D. All of this is sadly telling. I'll stick with Dr Dembski's process of evaluating his own metric which he DID NOT break apart as you do. Regardless, that doesn't stop us from comparing the two versions/interpretations. But you won't do it!! Why is that? Let's just focus on that question from now on. Why aren't you willing to compare and contrast results from your version and Dr Dembski's own version of his metric? What are you afraid of? Shall we start with a simple example just to make sure we both understand the mathematics involved and can check each other's work?
JVL
August 10, 2022 at 3:45 AM PDT

F/N: An online discussion: https://math.stackexchange.com/questions/2318606/is-log-the-only-choice-for-measuring-information

>>When we quantify information, we use I(x) = -log P(x), where P(x) is the probability of some event x. The explanation I always got, and was satisfied with up until now, is that for two independent events, to find the probability of them both we multiply, and we would intuitively want the information of each event to add together for the total information. So we have I(x∧y) = I(x) + I(y). The class of logarithms k·log(x) for some constant k satisfy this identity, and we choose k = -1 to make information a positive measure. But I'm wondering if logarithms are more than just a sensible choice. Are they the only choice? I can't immediately think of another class of functions that satisfy that basic identity. Even in Shannon's original paper on information theory, he doesn't say it's the only choice, he justifies his choice by saying logs fit what we expect and they're easy to work with. Is there more to it? . . .

That functional equation characterizes the logarithm (as long as you have any reasonable continuity condition). – Ethan Bolker Jun 11, 2017 at 15:26

The logarithm I think is the only class of continuous functions that turn multiplication into addition, but as you said the explanation is only intuitive. I don't know of an alternative, but I am certain the logarithm is not the only possible choice. – Matt Samuel Jun 11, 2017 at 15:27

Sketch of proof: Let I = f∘log, then the identity becomes f(a+b) = f(a) + f(b), which is Cauchy's functional equation. – user856 Jun 11, 2017 at 15:30 . . .

I just wanted to point something out, but honestly, I think the other answers are far better given that this is a mathematics site. I'm just pointing it out to add another argument for why logarithm makes sense as the only choice. You have to ask yourself what information even is. What is information? Information is the ability to distinguish possibilities. [Compare with energy in physics: the ability to do work or produce heat.] Okay, let's start reasoning. Every bit (= binary digit) of information can (by definition) distinguish 2 possibilities, because it can have 2 different values. Similarly, every n bits of information can distinguish 2^n possibilities. Therefore: the amount of information required to distinguish 2^n possibilities is n bits. And the same exact reasoning works regardless of whether you're talking about base 2 or 3 or e. So clearly you have to take a logarithm if the number of possibilities is an integer power of the base. Now, what if the number of possibilities is not a power of b = 2 (or whatever your base is)? In this case you're looking for a function that coincides with the logarithm at the integer powers. At this point, I would be convinced to use the logarithm itself (anything else would seem bizarre), but this is where a mathematician would invoke the reasonings mentioned in the other arguments (continuity or additivity for independent events or whatever) to show that no other function could satisfy reasonable criteria on information content.>>

I just hope this from different voices helps break down obvious and needless polarisation. In fact my introduction to these matters was decades ago in T/comms as a key extension of electronics context. I frankly get the feeling that people unfamiliar with that context are suspicious of obvious algebra because of polarisation over the design inference.
That's why I pulled my older edn of Taub and Schilling and pointed to my online note, obviously in vain. KF
kairosfocus
August 10, 2022 at 3:16 AM PDT

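A minimal Python illustration of the additivity property the quoted thread turns on: for independent events, probabilities multiply while self-information (in any log base) adds. The two events chosen below are arbitrary examples, not anything from the discussion:

from math import log2, isclose

def info_bits(p: float) -> float:
    """Self-information -log2(p) of an event with probability p, in bits."""
    return -log2(p)

p_die, p_card = 1 / 6, 1 / 52   # two independent events: a die roll, a card draw
p_joint = p_die * p_card        # independence: the probabilities multiply

# The information of the joint event equals the sum of the individual informations,
# the property that (with continuity) singles out the logarithm as the measure.
assert isclose(info_bits(p_joint), info_bits(p_die) + info_bits(p_card))
print(info_bits(p_die), info_bits(p_card), info_bits(p_joint))
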
AF, more silly talk points. We all know that there is a school of thought that for 160 years has laboured to expel inference to design from complex organisation from the Western mind. Its comeuppance started in the 1950's with the detection of fine tuning of the cosmos and with recognition that there was and is coded algorithmic information in D/RNA. By the 1970's Orgel and Wicken brought the matter to focus through recognising FSCO/I. Thaxton et al responded in the 80's and from the 90's the design inference, associated theory and a supportive movement grew. Your rhetorical stunt is meant to undermine the empirical nature of the observation that FSCO/I is a strong EMPIRICAL sign of intelligently directed configuration as key cause, but fails by dodging facts on the table for decades. And now we see a mathematically informed objector unwilling to acknowledge the algebra of -log2[probability*c*d], and apparently straining at the equivalent of substituting log2[c] --> C and log2[d] --> D. All of this is sadly telling. KF
kairosfocus
August 10, 2022 at 2:31 AM PDT

JVL, no, I am not; I am working out the algebra that he had to have in mind to go to a negative log2 configuration, and that leads to some basic telecomms theory. That you have to deny the obvious mathematics of -log2[probability*c*d] tells us all we need to know on the bankruptcy of what you are trying to support. Working out gives the trivial answer that Dembski's example is 20 bits short of threshold, where it looks like he was working with 10^140 there, which is in this context near enough to 10^150, the root of 500 bits. It is now fairly obvious that not having a substantial answer you have resorted to a rhetorical distraction and refuse to acknowledge the relevant algebra. There is no reason for me to further pander to a further side track [which this already is] as it will simply lead to more of the same, if you are unresponsive to algebra, that is already decisive and not in your favour. This tells us a lot about the nature of far too many objections to the design inference. KF
kairosfocus
August 10, 2022 at 2:15 AM PDT

Kairosfocus: You are interpreting what Dr Dembski wrote instead of reading what he actually wrote and what he clearly meant. Again, he worked out an example and got a result of approx -20. He didn't say: that's weird 'cause I should be getting a number representing so many bits. He interpreted -20 based on his formulation. He DOES NOT break his formulation apart and when he gives the bottom line criterion he's clearly looking for a result greater than 1. Not greater than 20, not greater than 500, greater than 1. He doesn't say “more than 1 bit”, he just says greater than 1. You are so desperate to work in your 500-bit threshold that you not only break apart his calculation, you also change some of his factors so that you can get what you want. His whole point is to create a metric that can be used to analyse some object or pattern or sequence OF ANY LENGTH to see if it exhibits specified complexity and thus was designed. I further pointed out that the case you point to boils down to 20 bits short of threshold, which you have also sidestepped. He didn't say it was 20 bits shy of threshold. He just didn't do that. The reason he didn't say that is because he's not interpreting his results as bits AND he wants to be able to analyse things that are of any length. The sequence he used for that example was CLEARLY much shorter than your 500-bit limit so if he wanted to hit that threshold he would have picked something of that length. But he didn't. You've spent years and years convincing yourself of your reworked interpretation which is just not correct. I have, multiple times, offered to compare results from using the metric Dr Dembski actually wrote up with your interpretation to see what results are obtained. I've offered to do the mathematics for his metric myself. If you thought your version would give the same result as his I would think you would gladly agree with that because you'd prove your case. BUT you have not and will not agree to such a test. Which says to me either a) you suspect you will not get the same result or b) you can't actually calculate your own version. Since you won't even do the mature thing and admit which of those is true I guess the rest of us can just make an assumption. Come to think of it . . . they could both be true.
JVL
August 10, 2022 at 2:02 AM PDT

If everything is designed, what's the point of detecting it? It makes no sense.
Alan Fox
August 9, 2022 at 9:57 PM PDT

The problem of the observer is scientifically unsolvable, so we are stuck with religion and ethics.
Lieutenant Commander Data
August 9, 2022 at 9:15 PM PDT

Relatd, and because it is FSCO/I you instantly recognised it as from an intelligent source. That self-referentiality is part of what exposes the speciousness of the sort of objections we are seeing. KF
kairosfocus
August 9, 2022 at 4:52 PM PDT

AF at 379, Every word-symbol you wrote had to be functional, specific and in the correct order to be understood.
relatd
August 9, 2022 at 2:42 PM PDT