Uncommon Descent Serving The Intelligent Design Community

At Sci-News: Moths Produce Ultrasonic Defensive Sounds to Fend Off Bat Predators


Scientists from Boise State University and elsewhere have tested 252 genera from most families of large-bodied moths. Their results show that ultrasound-producing moths are far more widespread than previously thought, adding three new sound-producing organs, eight new subfamilies and potentially thousands of species to the roster.

A molecular phylogeny of Lepidoptera indicating antipredator ultrasound production across the order. Image credit: Barber et al., doi: 10.1073/pnas.2117485119.

Bats pierce the shadows with ultrasonic pulses that enable them to construct an auditory map of their surroundings, which is bad news for moths, one of their favorite foods.

However, not all moths are defenseless prey. Some emit ultrasonic signals of their own that startle bats into breaking off pursuit.

Many moths that contain bitter toxins avoid capture altogether by producing distinct ultrasounds that alert bats to their foul taste. Others conceal themselves in a shroud of sonar-jamming static that makes them hard to find with bat echolocation.

While effective, these types of auditory defense mechanisms in moths are considered relatively rare, known only in tiger moths, hawk moths and a single species of geometrid moth.

“It’s not just tiger moths and hawk moths that are doing this,” said Dr. Akito Kawahara, a researcher at the Florida Museum of Natural History.

“There are tons of moths that create ultrasonic sounds, and we hardly know anything about them.”

In the same way that non-toxic butterflies mimic the colors and wing patterns of less savory species, moths that lack the benefit of built-in toxins can copy the pitch and timbre of genuinely unappetizing relatives.

These ultrasonic warning systems seem so useful for evading bats that they’ve evolved independently in moths on multiple separate occasions.

In each case, moths transformed a different part of their bodies into finely tuned organic instruments.

[I’ve put these quotes from the article in bold to highlight the juxtaposition of “evolved independently” and “finely tuned organic instruments.” Fine-tuning is, of course, often associated with intelligent design, rather than unguided natural processes.]

See the full article in Sci-News.

Comments
AF at 468, The Niche, starring Alan Fox. Where he points at nothing and tells people it's something. By the way, according to AF, we're all going to die next week. The week after that at the latest...relatd
August 12, 2022 at 10:20 AM PDT
I don't expect to convince you of anything, Related. Just remember what I said twenty years from now, when I'll have already shuffled off this mortal coil.Alan Fox
August 12, 2022 at 10:19 AM PDT
How innate behaviour is templated in DNA sequences is a subject largely untouched so far. I optimistically expect that to change one day. I pessimistically expect climate change to get us first. The niche humans occupy is changing very rapidly.Alan Fox
August 12, 2022 at 10:16 AM PDT
AF at 465, All the baby Swifts had to show up for practice one day. Called there by the fictional, invisible nothing... Seriously? I mean Seriously?relatd
August 12, 2022 at 10:06 AM PDT
AF at 463 and 464, Pzzzfffft !!! "The niche"? Woo hoo !!! The niche what? That fictional, invisible thing - without intelligence - you're trying to sell here? That's crap. It has NO basis in fact. In case you missed it - that's CRAP.relatd
August 12, 2022 at 10:04 AM PDT
Who taught them how to do that?
The niche (in the sense of sifting out individuals with poorer ability from the population). Gradually.Alan Fox
August 12, 2022 at 09:55 AM PDT
I work with professional writers...
You keep mentioning this as if it should impress me. What would impress me is if Related showed some understanding of biology and attacked that rather than your strawman version.Alan Fox
August 12, 2022 at 09:52 AM PDT
KF
This has already been outlined...
ad nauseam. I grasp Seth Lloyd's concept of total number of particles in the (known) universe times units of Planck time since the start of (this known) universe. Dembski misapplies the concept, which might make some sense if this known universe is strictly deterministic, which it isn't. But that isn't the big mistake, which is in assuming unique solutions and random, exhaustive searches.Alan Fox
August 12, 2022 at 09:48 AM PDT
AF at 461, I work with professional writers and if I saw that kind of CRAP on my desk, I would immediately reject it. Then throw it in the trash. "environmental design"? That's not even fiction, or "science" fiction. It contains zero science. Swifts flying in advance of weather fronts? Who taught them how to do that? Nothing? Because that is exactly what you have.relatd
August 12, 2022 at 09:46 AM PDT
Related:
“selected for”? By who? By what? Blind, unguided chance?
The NICHE! ...which is why swifts are generally found flying in advance of weather fronts, golden moles generally swimming in sand in the Namib, and great white sharks generally patrolling oceans containing suitable prey. Not chance, but environmental design, which some refer to as natural selection by the niche environment. The NICHE!Alan Fox
August 12, 2022 at 09:39 AM PDT
Kairosfocus: One other thing, since you haven't responded yet . . . When Dr Dembski worked an example and got -20 you suggested that that example was 20 bits shy. I got -389.something rounded up (or down) to -390. Does that mean that that example was 390 bits shy of the threshold? Should we add 390 and try again?JVL
August 12, 2022 at 09:33 AM PDT
AF, you know full well. No specificity or functionality, so 10 bits x 0 x 0 = 0. X_500 = 0 - 500 = -500, 500 bits short of a design inference threshold. The two threshold terms are addressed AS YOU KNOW by finding a bounding value, here a very generous 500 bits as WmAD has mentioned. I would use that for the sol system scale. KF PS, just to clarify, 10^57 atoms in sol system mostly H, He in sol, but use that. 10^14 observations of state of 500 one-bit registers per second each, for 10^17 s, gives 10^88 possible observations. Negligible fraction of 3.27*10^150 possible states. This has already been outlined and given over years.kairosfocus
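[A minimal Python sketch of the order-of-magnitude arithmetic in the comment above, using the figures as stated there (10^57 atoms, 10^14 observations per second, 10^17 seconds); the variable names are mine and are only illustrative.]

```python
# Sketch only: reproduces the arithmetic stated in the comment above.
atoms = 10**57            # atoms in the solar system, as stated
obs_per_second = 10**14   # assumed observations of a 500-bit register state per atom per second
seconds = 10**17          # seconds assumed available

possible_observations = atoms * obs_per_second * seconds  # 10^88
state_space_500_bits = 2**500                             # about 3.27 * 10^150

print(f"possible observations: 1e{len(str(possible_observations)) - 1}")
print(f"500-bit state space:   {state_space_500_bits:.2e}")
print(f"fraction examined:     {possible_observations / state_space_500_bits:.2e}")
```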
August 12, 2022 at 08:35 AM PDT
AF at 440, Do you even read what you write? "Additionally function can be selected for. Proteins that are promiscuous can under selective pressure become more specific." "selected for"? By who? By what? Blind, unguided chance? That's not goal oriented? "selective pressure"? Seriously? How much time, according to the non-existent Selective Pressure Cookbook, needs to pass to create the fictional change or changes?relatd
August 12, 2022 at 08:09 AM PDT
Kairosfocus: evasion again and choosing a ten bit case, 2^10 = 1024. We are interested in cases at order of 500 – 1,000 or more bits. Can you just show us how to evaluate your version of his metric for this case, yes or no? If you think it falls below the threshold then do the math and show us why. For this example what is your K2 and your I(T)? Dr Dembski worked through an example where he got -20, below his threshold, so clearly he intended to be able to use his metric for ALL cases.JVL
August 12, 2022 at 06:37 AM PDT
PS, you will observe that I gave limiting values and said so. Dembski suggested 500 bits, and that config space swamps the sol system's search capacity. 1,000 bits I am more comfortable with for the cosmos as a whole. That is, I used values that make any plausible search reduce to negligibility. As you full well know.kairosfocus
August 12, 2022 at 06:24 AM PDT
JVL, evasion again and choosing a ten bit case, 2^10 = 1024. We are interested in cases at order of 500 - 1,000 or more bits, 3.27*10^150 to 1.07*10^301 or bigger, doubling for every further bit. 10 bits is not even two ascii characters. Any given binary sequence could come about by raw chance, but some are utterly unlikely and implausible to do so because of the statistical weight of the near 50-50 peak, with bits in no particular functional order, i.e. gibberish. KFkairosfocus
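[For readers following the numbers: a short Python check of the figures quoted above; a sketch only, not part of the original comment.]

```python
import math

# Sketch: checking the quoted figures.
print(2**10)               # 1024 configurations for the 10-bit case
print(f"{2**500:.2e}")     # about 3.27e+150, the 500-bit configuration space
print(f"{2**1000:.2e}")    # about 1.07e+301, the 1,000-bit configuration space
print(math.log2(10**150))  # about 498.3, i.e. 10^150 corresponds to roughly 500 bits
```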
August 12, 2022 at 06:17 AM PDT
An alternate method of computing the final result is: X = -log2(10^120•pS(T)•P(T|H)) = -log2(10^120) - log2(pS(T)) - log2(P(T|H)) For our values that's = -log2(10^120) - log2(2) - log2(2^-10) = -log2(2^398.63136) - 1 + 10 = -398.63136 - 1 + 10 = -389.63136 So, breaking apart the stuff inside the log is possible but unnecessary as the result is the same and therefore the conclusion is the same. So, I'd now like Kairosfocus to work through this same simple example, explain how he's calculating I(T) and K2, give us his result and conclusion. As I already said: I expect our conclusions to be the same for this example but I'd like to see how he's calculating K2 and I(T).JVL
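[A quick Python sketch confirming that splitting the logarithm, as described above, matches the direct evaluation; the values pS(T) = 2 and P(T|H) = 2^-10 are the ones used in the comment.]

```python
import math

# Sketch: direct evaluation versus the split-log form described above.
pS_T = 2              # patterns at least as simple as "all tails"
P_T_given_H = 2**-10  # probability of 10 tails in 10 fair flips

direct = -math.log2(10**120 * pS_T * P_T_given_H)
split = -math.log2(10**120) - math.log2(pS_T) - math.log2(P_T_given_H)

print(direct, split)  # both approximately -389.63
```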
August 12, 2022 at 06:16 AM PDT
Okay, if you flip a fair coin 10 times there are 2^10 possible outcomes, all of which are equally likely if each flip is truly random, which we're going to assume for this example. So, S = our semiotic agent, T = getting 10 tails with 10 flips, H = the flips are random -> P(T|H) = 1/2^10 = 2^-10. Dr Dembski defines pS(T) as: the number of patterns for which S's semiotic description of them is at least as simple as S's semiotic description of T. I argue that pS(T) = 2 in this case. We can describe our T as "getting all tails" and the only other possible outcome with a description that simple or simpler is "getting all heads". So X = -log2(10^120•pS(T)•P(T|H)) = -log2(10^120•2•2^-10) = -log2(10^120•2^-9). Now 10 is approx equal to 2^3.321928 (recall that 2^2 = 4, 2^3 = 8 and 2^4 = 16). So X is approx = -log2((2^3.321928)^120•2^-9) = -log2(2^398.63136•2^-9) = -log2(2^389.63136) = -389.63136. This result is less than one (Dr Dembski's threshold) so design is not concluded, i.e. this event could have come about by chance. Addendum: perhaps I should point out that for any base, b: logb(b) = 1 and logb(b^n) = n.JVL
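[A minimal Python sketch of the coin-flip calculation worked through above, using the same values (pS(T) = 2, P(T|H) = 2^-10) and the threshold of 1; offered only as a numerical check.]

```python
import math

# Sketch: the 10-tails example from the comment above.
n_flips = 10
P_T_given_H = 2**-n_flips  # probability of all tails under the chance hypothesis H
pS_T = 2                   # descriptions as simple as "all tails": itself and "all heads"

X = -math.log2(10**120 * pS_T * P_T_given_H)
print(round(X, 2))         # approximately -389.63
print("design inferred" if X > 1 else "below the threshold of 1: no design inference")
```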
August 12, 2022 at 06:09 AM PDT
JVL, why don't you reduce the - log2[ . . . ]? That would tell you a lot. I did it, but you apparently need to do so for yourself. And, you show that you know enough about logs to understand. KFkairosfocus
August 12, 2022 at 06:00 AM PDT
Okay, here's what I'd like to use as a first test of Dr Dembski's metric. I'm not saying this test is controversial in any way; I'm just wanting to step through it as an example. I'll work through Dr Dembski's metric (from his 2005 monograph: Specification, the Pattern That Signifies Intelligence) twice, once not breaking the log base 2 bit apart and once breaking it apart. In both cases I will get the same result because breaking the log apart has no effect on the final value. For this post Dr Dembski's metric looks like this: X = -log2(10^120•pS(T)•P(T|H)) (Because this blog is not configured to handle Greek letters I've changed some of the notation) I'd like Kairosfocus to work through the example using his version of the metric (from comment 276 above: X = I(T) – 398 – K2) and I'd like him to give values for K2 and for I(T). We can then compare results and conclusions. For this particular example I expect to get the same conclusions because I think the conclusion is pretty clear but I'd like to illustrate the difference in the approaches. The example I'd like to work through first is: Flipping a fair coin 10 times and getting 10 tails. Again, I expect Kairosfocus and me to arrive at the same conclusion for this particular example. I just want to see how he works his version. I will/may be using a log conversion method which says log base b of N, written as logbN = log10N/log10b = lnN/lnb. This can be found in any high school math text beyond the basic level. This is handy when evaluating log2 since many calculators do not have that function.JVL
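[A small Python illustration of the change-of-base identity mentioned above, logb(N) = log10(N)/log10(b) = ln(N)/ln(b); a sketch for readers whose calculators lack a log2 key.]

```python
import math

# Sketch: three equivalent ways to get log base 2 of 1024.
N = 1024
print(math.log10(N) / math.log10(2))  # 10.0 via common logs
print(math.log(N) / math.log(2))      # 10.0 via natural logs
print(math.log2(N))                   # 10.0 directly
```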
August 12, 2022 at 05:56 AM PDT
EARTH TO ALAN FOX. FROM KEEFE AND SZOSTAK:
We therefore estimate that roughly 1 in 10^11 of all random-sequence proteins have ATP-binding activity comparable to the proteins isolated in this study.
You lied about their paper, too. You have no shame.ET
August 12, 2022 at 05:25 AM PDT
Alan Fox is either a LIAR or just willfully ignorant:
The all-at-once scenario assumed by Dembski doesn’t match reality.
You are lying as Dembski doesn't make such an assumption. Grow up, Alan.ET
August 12, 2022 at 05:23 AM PDT
Alan Fox:
You have not the least justification for assuming that a particular function is unique and there is plenty of evidence (starting – but not ending – with Keefe and Szostak) that potential function is widespread in protein sequences.
They said 1 in 100,000,000,000 proteins are functional. Read their paper. 1 in 100,000,000,000 is NOT widespread.ET
August 12, 2022 at 05:21 AM PDT
Kairosfocus: I do not understand your constant objections. I've agreed with your algebra. I don't understand why you made certain substitutions as the mathematics is quite straightforward as Dr Dembski stated his formulation but if we compare results we can clear some of those questions up. But you keep not wanting to compare results. As I said, I will present a worked out, fairly simple case, just to get things started. I've done a rough draft but I'd like to review it to make sure it's clear and cogent and easy to follow. Stop arguing against things I haven't said; you can convince me your approach is correct by comparing results. Simple.JVL
August 12, 2022 at 04:57 AM PDT
JVL, you are setting up and knocking over a strawman. That you resist a reduction of a - log2[ . . .] expression into the directly implied information in bits even after repeated explanation and correction tells me there is a refusal to acknowledge what is straightforward. If you are unwilling to acknowledge that, that is itself telling that you have no case on merits but insist on hyperskeptically wasting time. KFkairosfocus
August 12, 2022 at 04:29 AM PDT
Kairosfocus: You're just not really paying attention to what I am actually saying. I shall write up a simple example soon and ask you to work out the same example using your method (with your introduced constants and change of function) and we'll see.JVL
August 12, 2022 at 04:23 AM PDT
JVL, I take it that you have not done info theory and refuse to accept what is in Taub and Schilling much less my briefing note. That is the root of your problem. KF PS, Wikipedia confesses:
Information theory is the scientific study of the quantification, storage, and communication of digital information.[1] The field was fundamentally established by the works of Harry Nyquist and Ralph Hartley, in the 1920s, and Claude Shannon in the 1940s.[2] The field is at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes) . . . . Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by H = - SUM on i [ pi log2(pi) ] [--> avg info per symbol, notice, the typical term for state i is - log2(pi), weighted by pi in the sum, this is directly comparable to a key expression for Entropy for Gibbs and is a point of departure for the informational school of thermodynamics] where pi is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol [--> it is averaging]) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol.
So, you can see my direct reason for reducing to information and symbolising I(T). The product rule for logs directly gives the threshold, as noted. Functionally Specific Bits, using F and S as dummy variables is obvious, and a matter of observation. More complex measures can be resorted to but excess of threshold is so large no practical difference results. Design. As you obviously did not read my longstanding notes, I clip:
To quantify the above definition [from F R Connor's Signals] of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the "Shannon sense" - never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a "typical" long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M --> pj, and in the limit attains equality. We term pj the a priori -- before the fact -- probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori -- after the fact -- probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver: I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1 This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that: I total = Ii + Ij . . . Eqn 2 For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is: I = log [1/pj] = - log pj . . . Eqn 3 This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so: Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4 So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is - log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see "wueen" it is most likely to have been "queen.") Further to this, we may average the information per symbol in the communication system thusly (giving in terms of -H to make the additive relationships clearer): - H = p1 log p1 + p2 log p2 + . . . + pn log pn or, H = - SUM [pi log pi] . . . Eqn 5 H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: "it is often referred to as the entropy of the source." [p.81, emphasis added.] 
Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1 below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics . . . .
[Wikipedia confesses:] At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann's constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing. But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).
I trust that should be enough for starters. Let me add, that your assertion in the teeth of repeated correction, is unjust: "It's your introduction of constants and functions not present in Dr Dembski's formula that I want to check." False and misleading. I reduced - log2[prob] to information and symbolised it I(T). I used the product rule to draw out the threshold. I reduced the 10^120 term to log 2 result 398 bits. I symbolised the other term, a number, and pointed to the 10^150 threshold, essentially 500 bits. On your repeated objection I used WmAD's case and showed the bit value, about 66, noting that he used 10^140 configs as space of possibilities there. Your resistance to a simple working out tells me it would be futile to try anything more complex. All that would invite is an onward raft of further objections. The basic point is, neg log of prob --> information, all else follows and indeed the unusual formulation of WmAD's expression as - log2[ . . .] itself tells that the context is information in bits. As I have noted, the only practical use I have seen for log2 is to yield info in bits. If you have seen another kindly enlighten me. KFkairosfocus
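[For readers unfamiliar with the quantities quoted in the comment above: a minimal Python sketch of self-information, I = -log2(p) in bits, and Shannon entropy, H = - SUM pi log2(pi); the function names are mine and are illustrative only.]

```python
import math

def self_information_bits(p):
    """Information in bits conveyed by an outcome of probability p: -log2(p)."""
    return -math.log2(p)

def shannon_entropy_bits(probs):
    """Average information per symbol in bits: -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(self_information_bits(0.5))        # 1.0 bit for an equiprobable binary symbol
print(shannon_entropy_bits([0.5, 0.5]))  # 1.0 bit/symbol for a fair coin
print(shannon_entropy_bits([1/6] * 6))   # about 2.585 bits/symbol for a fair die
```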
August 12, 2022 at 04:16 AM PDT
Kairosfocus: Not having my old log tables from 3 – 5 form handy [in a basement in Ja last I saw] I used a calculator emulator, HP Prime that has x^y and log functions with RPN stack. You can convert log base anything into log base 10 or ln quite simply. And even simple calculators have log10 and ln. Have you done any info theory? Why are you unwilling to acknowledge neg log prob as a standard info metric, with base 2 giving bits, base e nats and base 10 Hartleys? Let's just compare methods and see what happens. I already said your algebra was fine albeit unnecessary. It's your introduction of constants and functions not present in Dr Dembski's formula that I want to check.JVL
August 12, 2022 at 03:35 AM PDT
JVL, I await your renewed acknowledgement of the algebra, your willingness to acknowledge that FSCO/I is a subset of CSI for systems where functional configuration identifies the specificity [one noted by Orgel and Wicken in the 70s], and recognition that calculated cases are on the table. Not having my old log tables from 3 - 5 form handy [in a basement in Ja last I saw] I used a calculator emulator, HP Prime that has x^y and log functions with RPN stack. HP calculators since 1977. Further, I WORKED OUT what - log2[ prob] is, an info metric, that is not replacement. Have you done any info theory? Why are you unwilling to acknowledge neg log prob as a standard info metric, with base 2 giving bits, base e nats and base 10 Hartleys? KFkairosfocus
August 12, 2022 at 03:27 AM PDT
AF, not an assumption. Notice how carefully proteins are synthesised and folded. That is the mark of an exacting requirement. KFkairosfocus
August 12, 2022 at 03:17 AM PDT
