Uncommon Descent Serving The Intelligent Design Community

At Sci-News: Moths Produce Ultrasonic Defensive Sounds to Fend Off Bat Predators

Scientists from Boise State University and elsewhere have tested 252 genera from most families of large-bodied moths. Their results show that ultrasound-producing moths are far more widespread than previously thought, adding three new sound-producing organs, eight new subfamilies and potentially thousands of species to the roster.

A molecular phylogeny of Lepidoptera indicating antipredator ultrasound production across the order. Image credit: Barber et al., doi: 10.1073/pnas.2117485119.

Bats pierce the shadows with ultrasonic pulses that enable them to construct an auditory map of their surroundings, which is bad news for moths, one of their favorite foods.

However, not all moths are defenseless prey. Some emit ultrasonic signals of their own that startle bats into breaking off pursuit.

Many moths that contain bitter toxins avoid capture altogether by producing distinct ultrasounds that alert bats to their foul taste. Others conceal themselves in a shroud of sonar-jamming static that makes them hard to find with bat echolocation.

While effective, these types of auditory defense mechanisms in moths are considered relatively rare, known only in tiger moths, hawk moths and a single species of geometrid moth.

“It’s not just tiger moths and hawk moths that are doing this,” said Dr. Akito Kawahara, a researcher at the Florida Museum of Natural History.

“There are tons of moths that create ultrasonic sounds, and we hardly know anything about them.”

In the same way that non-toxic butterflies mimic the colors and wing patterns of less savory species, moths that lack the benefit of built-in toxins can copy the pitch and timbre of genuinely unappetizing relatives.

These ultrasonic warning systems seem so useful for evading bats that they’ve evolved independently in moths on multiple separate occasions.

In each case, moths transformed a different part of their bodies into finely tuned organic instruments.

[I’ve put these quotes from the article in bold to highlight the juxtaposition of “evolved independently” and “finely tuned organic instruments.” Fine-tuning is, of course, often associated with intelligent design, rather than unguided natural processes.]

See the full article in Sci-News.

Comments
Additionally function can be selected for. Proteins that are promiscuous can under selective pressure become more specific. The all-at-once scenario assumed by Dembski doesn't match reality. Though it will be amusing to see if his math produces more than GIGO, if KF dares to venture into genuine illustrative examples. *wonders if he needs more popcorn*Alan Fox
August 12, 2022 at 02:30 AM PDT
Trying to hit 1 in 100,000,000,000...
You have not the least justification for assuming that a particular function is unique and there is plenty of evidence (starting - but not ending - with Keefe and Szostak) that potential function is widespread in protein sequences.Alan Fox
August 12, 2022 at 02:25 AM PDT
Kairosfocus: There is more than enough on the table to show why the design inference on FSCO/I is warranted. I wasn't questioning that!! I'm just trying to figure out why you reworked Dr Dembski's metric and if your reworking gives the same results! I don't know why that is so hard for you to understand. I will write up a simple example, apply Dr Dembski's metric then ask you to apply yours (specifying the values of your introduced terms) and then we can see what's what.JVL
August 12, 2022 at 01:26 AM PDT
PPS, also side-stepped and ignored: >>260 kairosfocus August 6, 2022 at 4:45 am PPPS, as a further point, Wikipedia’s admissions on the Mandelbrot set and Kolmogorov Complexity:
This image illustrates part of the Mandelbrot set fractal. Simply storing the 24-bit color of each pixel in this image would require 23 million bytes, but a small computer program can reproduce these 23 MB using the definition of the Mandelbrot set and the coordinates of the corners of the image. Thus, the Kolmogorov complexity of the raw file encoding this bitmap is much less than 23 MB in any pragmatic model of computation. PNG’s general-purpose image compression only reduces it to 1.6 MB, smaller than the raw data but much larger than the Kolmogorov complexity.
This is of course first a description of a deterministic but chaotic system where at the border zone we have anything but a well behaved simple “fitness landscape” so to speak. Instead, infinite complexity, a rugged landscape and isolated zones in the set with out of it just next door . . . the colours etc commonly seen are used to describe bands of escape from the set. The issues raised in other threads which AF dismisses are real. Further to which, let me now augment the text showing what is just next door but is not being drawn out:
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963.[1][2] . . . . Consider the following two strings of 32 lowercase letters and digits: abababababababababababababababab [–> simple repeating block similar to a crystal], and 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 [–> plausibly random gibberish similar to a random tar] [–> add here, this is a string in English using ASCII characters and is a case of FSCO/I] The first string has a short English-language description, namely “write ab 16 times”, which consists of 17 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, i.e., “write 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7” which has 38 characters. [–> a good working definition of plausible randomness] Hence the operation of writing the first string can be said to have “less complexity” than writing the second. [–> For the third there is neither simple repetition nor plausibly random gibberish but it can readily and detachably be specified as ASCI coded text in English, leading to issues of specified complexity associated with definable, observable function and degree of complexity such that search challenge is material. Here, for 32 characters there are 4.56 * 10^192 possibilities, well beyond 500 bits of conplexity.] More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings like the abab example above, whose Kolmogorov complexity is small relative to the string’s size, are not considered to be complex. [–> another aspect of complexity, complexity of specification, contrasted with complexity of search tied to information carrying capacity] The Kolmogorov complexity can be defined for any mathematical object, but for simplicity the scope of this article is restricted to strings. [–> other things can be reduced to strings by using compact description languages, so WLOG] We must first specify a description language for strings. Such a description language can be based on any computer programming language, such as Lisp, Pascal, or Java.[–> try AutoCAD] If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g., 7 for ASCII). [–> notice, the information metric] . . . . Any string s has at least one description. 
For example, the second string above is output by the pseudo-code: function GenerateString2() return “4c1j5b2p0cv4w1x8rx2y39umgw5q85s7” whereas the first string is output by the (much shorter) pseudo-code: function GenerateString1() return “ab” × 16 If a description d(s) of a string s is of minimal length (i.e., using the fewest bits), it is called a minimal description of s, and the length of d(s) (i.e. the number of bits in the minimal description) is the Kolmogorov complexity of s, written K(s). Symbolically, K(s) = |d(s)|. [–> our addred case is similarly complex to a plausibly random string but also has a detachable description that is simple and often identifies observable functionality] The length of the shortest description will depend on the choice of description language; but the effect of changing languages is bounded (a result called the invariance theorem) . . . . At first glance it might seem trivial to write a program which can compute K(s) for any s, such as the following: function KolmogorovComplexity(string s) for i = 1 to infinity: for each string p of length exactly i if isValidProgram(p) and evaluate(p) == s return i This program iterates through all possible programs (by iterating through all possible strings and only considering those which are valid programs), starting with the shortest. Each program is executed to find the result produced by that program, comparing it to the input s. If the result matches then the length of the program is returned. However this will not work because some of the programs p tested will not terminate, e.g. if they contain infinite loops. There is no way to avoid all of these programs by testing them in some way before executing them due to the non-computability of the halting problem. [–> so, calculation cannot in general distinguish random from simple order and from FSCO/I, we have to observe. This shows the pernicious nature of the strawman fallacy above by AF] What is more, no program at all can compute the function K, be it ever so sophisticated . . . . Kolmogorov randomness defines a string (usually of bits) as being random if and only if every computer program that can produce that string is at least as long as the string itself. To make this precise, a universal computer (or universal Turing machine) must be specified, so that “program” means a program for this universal machine. A random string in this sense is “incompressible” in that it is impossible to “compress” the string into a program that is shorter than the string itself. For every universal computer, there is at least one algorithmically random string of each length.[15] Whether a particular string is random, however, depends on the specific universal computer that is chosen. This is because a universal computer can have a particular string hard-coded in itself, and a program running on this universal computer can then simply refer to this hard-coded string using a short sequence of bits (i.e. much shorter than the string itself). This definition can be extended to define a notion of randomness for infinite sequences from a finite alphabet . . .
This gives some background to further appreciate what is at stake.>>kairosfocus
August 11, 2022 at 11:03 PM PDT
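
Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a rough upper bound that illustrates the three cases contrasted in the quoted passage: a repeating block, a plausibly random string, and functional English text. A minimal Python sketch, offered only as an illustration; zlib is a stand-in for a true minimal description, and the English sample string is an illustrative substitute rather than one taken from the thread:

```python
import zlib

samples = {
    "repetitive": "ab" * 16,                           # "abab...", 32 characters
    "random-ish": "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7",  # 32 characters, no obvious pattern
    "english":    "this is a string in english text",  # 32 characters, illustrative functional text
}

for label, s in samples.items():
    raw = s.encode("ascii")
    packed = zlib.compress(raw, 9)
    # zlib adds fixed header/checksum overhead, so these are only crude upper bounds
    print(f"{label:10} raw={len(raw)} bytes, compressed={len(packed)} bytes")
```

As expected, the repetitive string compresses well below its raw size, while the random-looking and English strings do not, which is the distinction the quoted passage is drawing between compressible order and incompressible sequences.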
JVL, Further doubling down. First,
421 kairosfocus August 11, 2022 at 5:25 am JVL, the distraction continues. WmAD first found an upper bound for his M*N term, 10^120, citing Seth Lloyd on how many bit ops are feasible for the observed cosmos. pS(T) is about “define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T.” That is he is effectively binding the number of targets in the wider space W. So, finding an upper bound for that is reasonable. Next, you now acknowledge that – log2[prob] yields an info metric, where that Dembski formulates on that operation points to intention to reduce to info in bits. – log2[prob * factor c * factor d] by the algebra is Info[t] – {logc + logd} –> info beyond a threshold. I(T) is not a different function, but the value of – log2[P(T|H)], an information value in bits to be evaluated case by case. You are back to denying the algebra, kindly see Taub and Schilling as you obviously have no regard for my own background in info and t/comms theory. Next logc = log[10^120] = 398 bits. For log D we want a bits upper bound similar to his MN –> 10^120. He uses a case where the expression requires “P(T|H) 1.” Substitute and use equality as the border case: –log2[10^120 ·pS(T)·{1/2*10^-140}] = 1. Now break it up using the neg log operation. 1 = 466.07 – 398.63 – x i.e. 1 = 67.44 – x, so x = 66.44. (Notice, well within my 100.) What units? We can subtract guavas from guavas not from mangoes or coconuts, so x is in bits. x is effectively log2[pS(T)] so that gives 2 ^10^20. We are back to a threshold of 1 in 10^140, as expected given WmAD’s IFF. This shows the validity of the thresholds of spaces for 500 or 1000 bits. Your it’s not really clear is just another way to try to take back your concession on the algebra, which algebra is manifest. As for what about simple examples they have been on the table with even more generous thresholds than WmAD gave. There is no need to drag out this sidetrack further. The message is clear, for any reasonable threshold for search capability of sol system or observed cosmos, information content of cells and body plans is so far beyond that blind causes have no traction. Life, body plans and the taxonomical tree of life are replete with strong signs of design due to their functionally cashed out complex specified information, explicit and implicit in organisation. KF
Then, as you were shown and reminded:
293 kairosfocus August 7, 2022 at 5:06 am F/N: The point of the above is, it is highly reasonable to use a threshold metric for the functional, configuration based information that identifies the span beyond which it is highly reasonable to draw the inference, design. First, our practical cosmos is the sol system, 10^57 atoms, so 500 bits FSCO/I, X_sol = FSB – 500 in functionally specific bits Likewise for the observable cosmos, X_cos = FSB – 1,000, functionally specific bits And yes this metric can give a bits short of threshold negative value. Using my simple F*S*B measure, dummy variables F and S can be 0/1 based on observation of functionality or specificity. For a 900 base mRNA specifying a 300 AA protein, we get X_sol = [900 x 2 x 1 x 1] – 500 = 1300 functionally specific bits. Which, is comfortably beyond, so redundancy is unlikely to make a difference. Contrast a typical value for 1800 tossed coins X_sol = [1800 x 0 x 0] – 500 = – 500 FSBs, 500 bits short. If the coins expressed ASCII code in correct English X_sol = [1800 x 1 x 1] – 500 = 1300 FSBs beyond threshold, so comfortably, designed. [We routinely see the equivalent in text in this thread and no one imagines the text is by blind watchmaker action.] A more sophisticated value using say the Durston et al metric would reduce the excess due to redundancy but with that sort of margin, there is no practical difference. Where, in the cell, for first life just for the genome [leaving out a world of knowledge of polymer chemistry and computer coding etc] we have 100 – 1,000 kbases. 100,000 bases is 200,000 bits carrying capacity, and again there is no plausible way to get that below 1,000 bits off redundancy. Life, credibly, is designed. KF PS, There has already been in the thread citation from Dembski on the definition of CSI and how in cell based life it is cashed out on function. I note, the concept as opposed to Dembski’s quantitative metric (which boils down to functionally specific info beyond a threshold) traces to Orgel and Wicken in the 70’s. This was noted by Thaxton et al in the 80’s and Dembski, a second generation design theorist set out models starting in the 90’s.
Your empty doubling down is groundless and a strawman tactic that beyond a point is rhetorical harassment. There is more than enough on the table to show why the design inference on FSCO/I is warranted. This implies that the world of life, credibly, is full of signs of design from the cell to body plans to our own constitution. KFkairosfocus
August 11, 2022 at 10:58 PM PDT
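
For concreteness, here is a minimal sketch of the F*S*B threshold heuristic laid out in the quoted comment 293, with the worked examples as given there; the function name and code structure are illustrative additions, not part of the original:

```python
def x_sol(bits, functional, specific, threshold=500):
    """Bits of functionally specific information beyond the solar-system
    threshold, per the F*S*B heuristic quoted above (comment 293).
    `functional` and `specific` are 0/1 dummy variables set by observation."""
    return bits * functional * specific - threshold

# 900-base mRNA specifying a 300-AA protein: 900 bases x 2 bits/base = 1800 bits capacity
print(x_sol(900 * 2, 1, 1))   # 1300 (beyond the 500-bit threshold)

# 1800 coins tossed at random: no observed function or specificity
print(x_sol(1800, 0, 0))      # -500 (500 bits short of threshold)

# 1800 coins read as ASCII spelling correct English
print(x_sol(1800, 1, 1))      # 1300 (beyond threshold)
```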
Kairosfocus: Will you show your working using your method for some simple examples. Yes or no?JVL
August 11, 2022 at 10:37 PM PDT
ET: As predicted. Thank you. I said I think I can do that, how is that 'balking'? Are you even paying attention? Also, please note, I am only talking about evaluating Dr Dembski's metric.JVL
August 11, 2022 at 10:36 PM PDT
So, we have a massive sample space. Next, we need that protein and to see how variable it is. Then we will know how many targets there are in that sample space. Trying to hit 1 in 100,000,000,000 (Keefe and Szostak for 80aa with minimal functionality), should be enough for anyone to see the futility of evolution by means of blind and mindless processes. Just seeing what DNA-based life requires to be existing and functioning from the start, should be enough for rational people to understand that nature didn't do it.ET
August 11, 2022 at 05:15 PM PDT
ET, ignoring the oddballs and assuming away chirality issues and a lot of other chem possibilities, 20^100 = 1.268*10^130. KFkairosfocus
August 11, 2022 at 03:39 PM PDT
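
A quick arithmetic check of that figure, under the stated simplification of 20 amino acid options per position and setting aside the chirality and other chemistry issues mentioned:

```python
from math import log10

space = 20 ** 100                # exact count of 100-aa sequences over 20 amino acids
print(f"20^100 = {space:.3e}")   # ~1.268e+130, matching the 1.268 * 10^130 above
print(f"log10(20^100) = {100 * log10(20):.3f}")   # 130.103
```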
Right. JVL balks when given a real-world, biological example. An example that he cannot control and manipulate. That math [sample space] is easy. How many different combinations are there for a 100 aa polypeptide? *crickets* If you can’t do that then forget about the other math, JVL.
I think I can do that.
As predicted. Thank you.ET
August 11, 2022 at 02:45 PM PDT
Yes, KF. That Alan Fox calls on that experiment and results exposes the sheer desperation of his position.ET
August 11, 2022 at 01:58 PM PDT
ET, interaction with ATP is not a good proxy for the myriads of proteins carrying out configuration-specific function. A good sign of this is the exceedingly precise care with which the cell assembles and folds proteins. KFkairosfocus
August 11, 2022 at 01:06 PM PDT
JVL, you have had examples and a use of WmAD's case on was it the flagellum. You are still talking as if they don't exist. That tells us you are simply emptily doubling down. For record, from the outset WmAD used -log2[prob], which is instantly recognisable to one who has done or used info theory, as an info metric in bits. That is the only fairly common use of base 2 logs, to yield bits. Next, by product rule once boundable factors c and d are added as products, we have an info in bits beyond a threshold metric, per algebra of logs. Thus, once we have reasonable bounds, and we do with 500 - 1,000 bit thresholds [cf how 10^57 to 10^80 atoms observing each 500 - 1,000 1-bit registers aka coins, at 10^14/s for 10^17s can only survey a negligible fraction of config states], then we may freely work with info beyond a threshold. We only need to factor in info carrying capacity vs redundancy effects of codes as Durston et al did. WmAD apparently picked an example that was 20 bits short of threshold. However, for many cases we are well beyond it so redundancy makes no practical difference. Already for an average 300 AA protein, we are well beyond. FSCO/I -- a relevant subset and context of CSI since Orgel and Wicken in the 70's -- is a good sign of design. This you have resisted and sidestepped for 100's of comments, indicating that you have no substantial answer but find it unacceptable. Our ability to analyse, warrant adequately and know is not bound by your unwarranted resistance, sidesteps and side tracks. But this thread has clearly shown that the balance on merits supports the use of FSCO/I. Life, from cell to body plans including our own, shows strong signs of design. KFkairosfocus
August 11, 2022 at 01:02 PM PDT
ET: Right. That math is easy. How many different combinations are there for a 100 aa polypeptide? As I've been saying: I think it's best to start with some simpler examples and make sure everyone is following along and that the results make sense. If you can’t do that then forget about the other math, JVL. I think I can do that.JVL
August 11, 2022 at 11:53 AM PDT
Kairosfocus: the distraction continues. How is asking if you'd be willing to work out some examples using your approach distracting? I don't see the problem, with any numerical formulation, asking to see it 'in action'. Plus you keep repeating yourself which is completely pointless at this point. So, let's just stick to yes or no queries: Will you show your working using your method for some simple examples. Yes or no?JVL
August 11, 2022 at 11:50 AM PDT
Earth to Alan Fox-
Keefe and Szostak showed long ago that function lurks much more widely than one-in-a-gadzillion.
I don't know how many zeros are in a gadzillion, but this is what Keefe and Szostak said:
We therefore estimate that roughly 1 in 10^11 of all random-sequence proteins have ATP-binding activity comparable to the proteins isolated in this study.
1 in 10^11! And those random-sequence proteins did not arise via blind and mindless processes.ET
August 11, 2022 at 08:35 AM PDT
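
Purely as arithmetic, the 1-in-10^11 frequency quoted above converts into the -log2 information units used elsewhere in this thread as follows; this minimal sketch only restates the number in bits:

```python
from math import log2

p = 1e-11               # quoted Keefe & Szostak estimate: ~1 in 10^11 random sequences bind ATP
bits = -log2(p)         # the thread's information measure: -log2(probability)
print(f"{bits:.1f} bits")   # ~36.5 bits
```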
JVL:
I’m just repeating what he said in his 2005 monograph.
I know. You clearly don't understand it. We “know” that humans were capable of building Stonehenge only because Stonehenge exists!
There are a lot of other standing stone circles in the British Isles and Brittany.
And? We know humans didit cuz humans were around? We know they had the capability to do it cuz the structures exist? Thank you for proving my point.
I said they look at all the evidence including independent information about the humans around at the time and where they lived, what they ate, sometimes the tools they used, sometimes where they were buried.
And ASSUME they didit cuz there they are!
Dr Dembski explains how to ‘glean’ pS(T). And it involves knowing the ‘sample space’.
Right. That math is easy. How many different combinations are there for a 100 aa polypeptide? If you can't do that then forget about the other math, JVL.ET
August 11, 2022 at 08:24 AM PDT
Alan Fox:
Keefe and Szostak showed long ago that function lurks much more widely than one-in-a-gadzillion.
They did not demonstrate that blind and mindless processes produced any of the proteins used
Dembski rules out reiterative change and demands everything happens all at once.
Liar. You keep making these blatantly false statements. And you think we are just going to sit here and accept it. Pound sand. If you are going to spew BS about ID on an ID site, you had better bring the evidence. Your cowardly bloviations mean nothing here.
The model does not fit reality.
The claim that life's diversity arose by means of evolution by means of blind and mindless processes, such as natural selection and drift, does not fit reality. Alan is in such a tizzy over all things Intelligent Design. Yet he doesn't have a scientific alternative to ID. Shoot down all of the straw men you want, Alan. ID isn't going anywhere until someone steps up and demonstrates that blind and mindless processes can actually do the things you and yours claim.ET
August 11, 2022 at 08:17 AM PDT
Alan Fox:
Nobody has a clue what your “FSCO/I” is yet despite JVL’s remarkable patience in getting you to make some sense.
You and JVL are willfully ignorant and on an agenda of obfuscation.
What is sadly telling is once we establish what trivial mathematical manipulations are or are not involved in telling us whether something is deigned [I deign to leave my Freudian slip], I predict there will be a further fruitless discussion on what numbers go into the equation or formula, should one eventually emerge from the fog of words.
Dude, what is trivial is your understanding of ID, science and evolution. It remains that you and yours do NOT have a scientific explanation for our existence. You have nothing but denial and promissory notes.ET
August 11, 2022 at 08:07 AM PDT
JVL, the distraction continues. WmAD first found an upper bound for his M*N term, 10^120, citing Seth Lloyd on how many bit ops are feasible for the observed cosmos. pS(T) is about "define pS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T." That is he is effectively binding the number of targets in the wider space W. So, finding an upper bound for that is reasonable. Next, you now acknowledge that - log2[prob] yields an info metric, where that Dembski formulates on that operation points to intention to reduce to info in bits. - log2[prob * factor c * factor d] by the algebra is Info[t] - {logc + logd} --> info beyond a threshold. I(T) is not a different function, but the value of - log2[P(T|H)], an information value in bits to be evaluated case by case. You are back to denying the algebra, kindly see Taub and Schilling as you obviously have no regard for my own background in info and t/comms theory. Next logc = log[10^120] = 398 bits. For log D we want a bits upper bound similar to his MN --> 10^120. He uses a case where the expression requires "P(T|H) 1." Substitute and use equality as the border case: –log2[10^120 ·pS(T)·{1/2*10^-140}] = 1. Now break it up using the neg log operation. 1 = 466.07 - 398.63 - x i.e. 1 = 67.44 - x, so x = 66.44. (Notice, well within my 100.) What units? We can subtract guavas from guavas not from mangoes or coconuts, so x is in bits. x is effectively log2[pS(T)] so that gives 2 ^10^20. We are back to a threshold of 1 in 10^140, as expected given WmAD's IFF. This shows the validity of the thresholds of spaces for 500 or 1000 bits. Your it's not really clear is just another way to try to take back your concession on the algebra, which algebra is manifest. As for what about simple examples they have been on the table with even more generous thresholds than WmAD gave. There is no need to drag out this sidetrack further. The message is clear, for any reasonable threshold for search capability of sol system or observed cosmos, information content of cells and body plans is so far beyond that blind causes have no traction. Life, body plans and the taxonomical tree of life are replete with strong signs of design due to their functionally cashed out complex specified information, explicit and implicit in organisation. KFkairosfocus
August 11, 2022 at 04:25 AM PDT
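
The arithmetic in the comment above can be checked directly. Assuming, as stated in the thread, the governing condition -log2[10^120 * pS(T) * P(T|H)] > 1 with the boundary case P(T|H) = (1/2) * 10^-140, a minimal sketch:

```python
from math import log2, log10

log_c = 120 * log2(10)                  # log2(10^120), the Seth Lloyd bound: ~398.63 bits
neg_log_p = log2(2) + 140 * log2(10)    # -log2((1/2) * 10^-140): ~466.07 bits

# Boundary case of the condition: 1 = neg_log_p - log_c - log2(pS(T)); solve for x = log2(pS(T))
x = neg_log_p - log_c - 1
print(f"log2(pS(T)) bound ~ {x:.2f} bits")    # ~66.44, matching the value in the comment
print(f"pS(T) bound ~ 10^{x * log10(2):.1f}") # ~10^20.0
```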
Kairosfocus: I understand Dr Dembski's mathematics quite well thank you. You replace log2(pS(T)) with a constant and log2(P(T|H)) with a different function I(T). Since it's not really clear what those replacements are I thought a test comparing the result using your formulation and Dr Dembski's original formulation would be interesting. If they come to the same conclusion, fine. If they don't (for some particular case) then it would be enlightening to discuss that. I think. Shall we start by looking at a simple case and then try to ratchet things up? Why not have a go?JVL
August 11, 2022 at 01:01 AM PDT
AF, 405:
[KF:] AF, more silly talk points. We all know that there is a school of thought that for 160 years has laboured to expel inference to design from complex organisation from the Western mind. [AF:] Nonsense, you are no mindreader. You imagine stuff. Then you write singular prose remarkable only for its obscurity. The quoted sentence is an example typical for lack of any meat in the sandwich.
We both know just what movement has been held as making it possible to be an intellectually fulfilled atheist. Which state is demonstrably impossible due to inherent incoherence of the implied evolutionary materialistic atheism. You are also lying and confessing by projection regarding want of substance. The self referentially incoherent evolutionary materialistic scientism of our day is not only public but notorious. Your stunt is so bad it fully deserves to be corrected by reference to Lewontin's cat out of the bag moment, suitably marked up -- a moment you are fully familiar with:
[Lewontin:] . . . to put a correct [--> Just who here presume to cornering the market on truth and so demand authority to impose?] view of the universe into people's heads
[==> as in, "we" the radically secularist elites have cornered the market on truth, warrant and knowledge, making "our" "consensus" the yardstick of truth . . . where of course "view" is patently short for WORLDVIEW . . . and linked cultural agenda . . . ]
we must first get an incorrect view out [--> as in, if you disagree with "us" of the secularist elite you are wrong, irrational and so dangerous you must be stopped, even at the price of manipulative indoctrination of hoi polloi] . . . the problem is to get them [= hoi polloi] to reject irrational and supernatural explanations of the world [--> "explanations of the world" is yet another synonym for WORLDVIEWS; the despised "demon[ic]" "supernatural" being of course an index of animus towards ethical theism and particularly the Judaeo-Christian faith tradition], the demons that exist only in their imaginations,
[ --> as in, to think in terms of ethical theism is to be delusional, justifying "our" elitist and establishment-controlling interventions of power to "fix" the widespread mental disease]
and to accept a social and intellectual apparatus, Science, as the only begetter of truth
[--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]
. . . . To Sagan, as to all but a few other scientists [--> "we" are the dominant elites], it is self-evident
[--> actually, science and its knowledge claims are plainly not immediately and necessarily true on pain of absurdity, to one who understands them; this is another logical error, begging the question , confused for real self-evidence; whereby a claim shows itself not just true but true on pain of patent absurdity if one tries to deny it . . . and in fact it is evolutionary materialism that is readily shown to be self-refuting]
that the practices of science provide the surest method of putting us in contact with physical reality [--> = all of reality to the evolutionary materialist], and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test [--> i.e. an assertion that tellingly reveals a hostile mindset, not a warranted claim] . . . . It is not that the methods and institutions of science somehow compel us [= the evo-mat establishment] to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door . . . [--> irreconcilable hostility to ethical theism, already caricatured as believing delusionally in imaginary demons]. [Lewontin, Billions and billions of Demons, NYRB Jan 1997,cf. here. And, if you imagine this is "quote-mined" I invite you to read the fuller annotated citation here.]
As for trying to jump on me over claimed errors of style, that is now obviously attack the man, dodge the substance. Indeed, we have every right to use cognitive dissonance psychology to interpret such stunts as confession by projection. KFkairosfocus
August 11, 2022 at 12:29 AM PDT
JVL, what part of Dembski's specification of the two values as numbers -- I highlighted yesterday in the clip -- is so unclear it requires "interpretation"? _____ What part of giving one as M*N LT 10^120 is unclear? ______ What part of "define ?S as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T" rather than some function on a variable parameter is doubtful? _____ In your clip on flagellar proteins, I read "It follows that –log2[10^120 ·pS(T)·P(T|H)] > 1 if and only if P(T|H) < 1/2 ×10^-140 , where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event" . . . which sets 10^140 as upper bound, less conservative than 500 bits worth, 3.27*10^150. So, no, he was discussing numbers and bounds or thresholds not odd functions that can run off anywhere to any weird value as one pleases. Oddly, even if pS(T) were some weird function, it would still be part of a threshold, by the algebra; the issue then would be to find a bound. a constant, your latest word to pounce on rhetorically. But as it turns out we are not forced to guess such, as we know it is an upper bound on observability, a target zone in a wider space of possibilities W; familiar from statistical thermodynamics. It is easy to see that for sol system or observed cosmos 2^500 to 2*1,000 is a generous upper bound on every atom, 10^57 to 10^80 being an observer of 500 or 1,000 coins each, flipped at 10^14 per second and for 10^17 s. So, whatever goes into the threshold, it is bound by search resources of sol system or observed cosmos. The thresholds given all the way up in 293 bound any reasonable value. All the huffing and puffing hyperskepticism fails. But at least you acknowledge explicitly that the algebra is correct. KF PS, you have calculations on the bounds, again cited yesterday. Can you tell me how for 10^57 or 10^80 atoms each observing bit operations on 500 or 1,000 one bit registers ["coins"] every 10^-14s, we do not bound the scope of search for 10^17 s, by 10^88 to 10^111 as over generous upper limit? I find the hyperskepticism unjustified.kairosfocus
August 11, 2022 at 12:09 AM PDT
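
The search-resource bounds cited above are easy to recompute. A minimal sketch using the figures as given in the comment (10^57 or 10^80 atoms, 10^14 observations per second, 10^17 seconds, against 500-bit and 1,000-bit configuration spaces):

```python
from math import log10

sol_ops = 10**57 * 10**14 * 10**17    # solar-system scale: 10^88 bit-level observations
cos_ops = 10**80 * 10**14 * 10**17    # observed-cosmos scale: 10^111

space_500  = 2**500                   # ~3.27 * 10^150 configurations of 500 bits
space_1000 = 2**1000                  # ~1.07 * 10^301 configurations of 1,000 bits

print(f"sol-system ops vs 500-bit space:   10^{log10(sol_ops) - log10(space_500):.1f}")
print(f"cosmos ops vs 1,000-bit space:     10^{log10(cos_ops) - log10(space_1000):.1f}")
```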
Kairosfocus: again, we both know the algebra is correct I didn't say the algebra was incorrect. It's your interpretation of some of the pieces as constants that isn't clear. Anyway, he came up with a metric for seeing if there was enough specified complexity in an object or event to conclude that it's designed. You changed his metric. I'd like to compare his version and your version to see if they give the same results. Are you willing to do the comparison? Yes or no?JVL
August 10, 2022 at 11:25 PM PDT
JVL, again, we both know the algebra is correct. Further, we both know that Dembski pointed out that for life the specification is cashed out in functionality. Notice, [a: functionally] specified, complex [b:organisation and/or] associated information. A says, context is life or other contexts where functionality is key, B that information can be implicit in organisation. KFkairosfocus
August 10, 2022 at 01:35 PM PDT
Kairosfocus: fallacy of the loaded question. We both know that I am carrying out the – log2[ . . . ] unary operation on a probability expression right there in Dembski’s X = eqn, and stating its standard result, an information value in bits. Then there's no reason for you to take up the challenge!! You have adequate examples to highlight the material point, that FSCO/I is a reliable sign of design as key cause, where there is copious FSCO/I in cell based life. But I think Dr Dembski was working on something different and that would the detection of specified complexity. That's what he said he was doing and that's the contention I'd like to test using his own formulation and way of working them out. Shall we compare and contrast results? If they turn out to be the same then that's okay.JVL
August 10, 2022 at 11:00 AM PDT
JVL, fallacy of the loaded question. We both know that I am carrying out the - log2[ . . . ] unary operation on a probability expression right there in Dembski's X = eqn, and stating its standard result, an information value in bits. As it is applied to three factors, it is info beyond a threshold (or short of it by so much). You have adequate examples to highlight the material point, that FSCO/I is a reliable sign of design as key cause, where there is copious FSCO/I in cell based life. We have reason to hold that cell based life and body plans are designed. KFkairosfocus
August 10, 2022 at 10:35 AM PDT
Kairosfocus: Shall we compare metric interpretations? Yes or no?JVL
August 10, 2022 at 09:40 AM PDT
F/N: As a courtesy to the onlooker:
293 kairosfocus August 7, 2022 at 5:06 am F/N: The point of the above is, it is highly reasonable to use a threshold metric for the functional, configuration based information that identifies the span beyond which it is highly reasonable to draw the inference, design. First, our practical cosmos is the sol system, 10^57 atoms, so 500 bits FSCO/I, X_sol = FSB – 500 in functionally specific bits Likewise for the observable cosmos, X_cos = FSB – 1,000, functionally specific bits And yes this metric can give a bits short of threshold negative value. Using my simple F*S*B measure, dummy variables F and S can be 0/1 based on observation of functionality or specificity. For a 900 base mRNA specifying a 300 AA protein, we get X_sol = [900 x 2 x 1 x 1] – 500 = 1300 functionally specific bits. Which, is comfortably beyond, so redundancy is unlikely to make a difference. Contrast a typical value for 1800 tossed coins X_sol = [1800 x 0 x 0] – 500 = – 500 FSBs, 500 bits short. If the coins expressed ASCII code in correct English X_sol = [1800 x 1 x 1] – 500 = 1300 FSBs beyond threshold, so comfortably, designed. [We routinely see the equivalent in text in this thread and no one imagines the text is by blind watchmaker action.] A more sophisticated value using say the Durston et al metric would reduce the excess due to redundancy but with that sort of margin, there is no practical difference. Where, in the cell, for first life just for the genome [leaving out a world of knowledge of polymer chemistry and computer coding etc] we have 100 – 1,000 kbases. 100,000 bases is 200,000 bits carrying capacity, and again there is no plausible way to get that below 1,000 bits off redundancy. Life, credibly, is designed.
KFkairosfocus
August 10, 2022 at 09:36 AM PDT
AF at 405, I'm not enjoying your act. Parts get repeated over and over. Alan Fox is smart except when he's not, or doesn't want to be. You have no future in stand-up comedy or in feigning frustration.relatd
August 10, 2022 at 09:34 AM PDT