(ID Foundations series so far: 1, 2, 3, 4)
In a current UD discussion thread, frequent commenter MarkF (who supports evolutionary materialism) has made the following general objection to the inference to design:
. . . my claim is not that ID is false. Just that it is not falsifiable. On the other hand, claims about specific designer(s) with known powers and motives are falsifiable and, in all cases that I know of, clearly false.
The objection is actually easily answered.
Not least, as we — including MF — are designers who routinely leave behind empirically testable, reliable signs of design, such as posts on the UD blog in English that (thanks to the infinite monkeys “theorem” as discussed in post no. 4 in this series) are well beyond the credible reach of undirected chance and necessity on the gamut of the observed cosmos. For instance, the excerpt just above uses 210 seven-bit ASCII characters, which specifies a configuration space of 128^210 ~ 3.26 * 10^442 possible bit combinations. The whole observable universe, acting as a search engine working at the fastest possible physical rate [10^45 states/s, for 10^80 atoms, for 10^25 s: 10^150 possible states], could not scan as much as 1 in 10^290th of that.
That is, any conceivable chance-and-necessity-based search on the scope of our cosmos would very comfortably round down to a practical zero. But MF, as an intelligent and designing commenter, probably tossed off the above sentences in a minute or two.
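The arithmetic above can be checked in a few lines of Python. This is a sketch using base-10 logarithms only; the 10^45 states/s, 10^80 atoms, and 10^25 s figures are the post’s own:

```python
from math import log10

# Configuration space of 210 seven-bit ASCII characters: 128^210 strings.
chars = 210
log_space = chars * log10(128)      # log10(128^210) ~ 442.5

# Upper bound on states the observed cosmos could sample (figures from
# the post): 10^80 atoms x 10^45 states/s x 10^25 s = 10^150 states.
log_states = 80 + 45 + 25

print(f"configuration space ~ 10^{log_space:.1f}")
print(f"cosmos state count  = 10^{log_states}")
print(f"searchable fraction < 1 in 10^{log_space - log_states:.1f}")
```

The gap of over 290 orders of magnitude is what makes the “practical zero” rounding reasonable.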
That is why such functionally specific, complex organisation and associated information [FSCO/I] are credible, empirically testable and reliable signs of intelligent design.
But don’t take my word for it.
A second UD commenter, Acipenser (= s[t]urgeon), recently challenged BA 77 and this poster as follows, in the signs of scientism thread:
195: What does the Glasgow Coma scale measure? The mind or the body?
206: kairosfocus: What does the Glasgow Coma scale measure? Mind or Body?
This is a scale for measuring consciousness that, as the Wiki page notes, is “used by first aid, EMS, and doctors as being applicable to all acute medical and trauma patients.” That is, the scale tests for consciousness. And, as the verbal responsiveness test especially shows, the test is an example of where the inference to design is routinely used in an applied science context, often in literal life or death situations:
Fig. A: EMTs at work. Such paraprofessional medical personnel routinely test for the consciousness of patients by rating their capacities on eye, verbal and motor responsiveness, using the Glasgow Coma Scale, which is based on an inference to design as a characteristic behaviour of conscious intelligences. (Source: Wiki.)
In short, the Glasgow Coma Scale [GCS] is actually a case in point of the reliability and scientific credibility of the inference to design, even in life and death situations.
Why do I say that?
The easiest way to show that is to excerpt my response to Acipenser, at 210 (and continuing to 211) in the same thread:
Now, on the Glasgow scale, there may be a few surprises for you, as this is actually a case where applied science is routinely using a design inference. Now, a good first point of reference is Wiki:
Glasgow Coma Scale or GCS is a neurological scale that aims to give a reliable, objective way of recording the conscious state of a person for initial as well as subsequent assessment. A patient is assessed against the criteria of the scale, and the resulting points give a patient score between 3 (indicating deep unconsciousness) and either 14 (original scale) or 15 (the more widely used modified or revised scale).
GCS was initially used to assess level of consciousness after head injury, and the scale is now used by first aid, EMS, and doctors as being applicable to all acute medical and trauma patients. In hospitals it is also used in monitoring chronic patients in intensive care . . . .
The scale comprises three tests: eye [NB: 1 – 4, behaviourally anchored judgemental rating scale [BARS] applying the underlying Rasch rating model commonly used as a metric in many fields where a judgement or inference needs to be quantified, and familiar from the Likert type scale], verbal [1 – 5] and motor [1 – 6] responses. The three values separately as well as their sum are considered. The lowest possible GCS (the sum) is 3 (deep coma or death), while the highest is 15 (fully awake person).
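The three-part scoring just quoted is simple enough to sketch directly. This is a minimal illustration assuming the revised 15-point scale and the component ranges given above; the function name `gcs` is chosen here for illustration:

```python
# Minimal sketch of Glasgow Coma Scale scoring, assuming the revised
# 15-point scale and the component ranges quoted above:
# eye 1-4, verbal 1-5, motor 1-6.

def gcs(eye: int, verbal: int, motor: int) -> int:
    """Return the GCS sum after validating each component's range."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    return eye + verbal + motor

print(gcs(4, 5, 6))   # 15: fully awake person
print(gcs(1, 1, 1))   # 3: deep coma (or death)
```

Note that the sum is a quantification of a judgement call made by the rater; the arithmetic is trivial, the inference behind each component score is not.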
1 –> This scale is exercised by responsible, ethically obligated and educated medical and paramedical practitioners.
2 –> It is applied to embodied intelligent creatures who under normal circumstances will be alert, verbally responsive and able to move their bodies at will, and whose eye pupils will respond to light, and whose eye-tracks betray a major current focus of consciousness. This is background knowledge.
3 –> In this context, we may make reference to the Smith Model, assessing the embodied human being as a MIMO bio-cybernetic system, where mind is viewed as higher order controller.
4 –> On the Smith MIMO cybernetic model, the head is a major sensory turret, and hosts the front-end I/O processor. Damage to the head implicating that processor would therefore directly affect both sensor and effector capacity.
5 –> So, implications of such damage for a primary sensor suite, the eyes, and two major sensor effector suites, the auditory and vocal systems, would serve as a pattern of signs that can be assessed on the warranted inference model introduced as a background for my ongoing ID Foundations series:
I: [si] –> O, on W
(I an observer, note a pattern of signs, and infer an underlying objective condition or state of affairs or object, on a warrant)
6 –> So, here we see inference to signified from sign, on a warrant.
7 –> Going further, let us observe behaviour at levels 2, 4 and 5 on the verbal scale, where the issue is whether the subject is able to utter speech that is coherent, accurate and contextually responsive:
2: Incomprehensible sounds
4: Confused, disoriented
5: Oriented, converses normally
8 –> Speech, especially when set in context as language and as a situationally aware response of an intelligent person, is a strong indicator of functionally specific complex organisation; indeed, it encodes verbal, symbolic code in phonemes composed according to the rules of a language. Speech expresses FSCI.
9 –> Thus, where situationally responsive and well composed speech is present, we have good reason to infer that we are dealing with a functional intelligence. And that is the normal condition of fully conscious human beings.
10 –> So, from the degree of falling short of such, we may infer to a breakdown in the relevant systems, here, related to head or CNS injury. With certain other signs, we may go on to infer worse than mere unconsciousness, death . . .
So, where it counts, with life and death in the stakes, here we find a design inference on FSCI routinely at work as a scientifically well-grounded basis for inferring conscious, deliberate behaviour. (Acipenser has not responded to these remarks to date in the original thread; s/he is invited to comment below.)
Similarly, as we saw for MF above, when we see a contextually responsive text string in threads at UD, we routinely and reliably infer that the post is the product of an intelligent commenter, not a burst of lucky noise. The latter is logically and physically possible, but because functional sequence complexity is so rare in configuration spaces large enough to be relevant, we infer abductively, on inference to the best (current) explanation, that the most plausible [though of course falsifiable], empirically and analytically warranted cause of a post at UD is an intelligent poster. MF’s posts are evidence — on inference from sign to signified — that a certain intelligent poster, MF, exists and has acted.
Magic step: enter, stage left, the origins science question.
Q: What happens when we try to extend this inference on FSCO/I as a sign pointing to intelligent cause as the signified underlying state of affairs that best explains it, to the context of FSCO/I observed in say the living cell, and thus, the cause of the origin and body plan level diversification of life?
A: If we are to be consistent, we should be willing to accept the uniformity principle, that like causes like, when we have a tested, empirically credible sign. That is, since FSCO/I of at least 500 – 1,000 bits of information storage capacity, functioning in a specific fashion [not just any sequence will do; it has to fit a specification, an algorithm, the rules of a language, etc.], is a reliable sign of intelligence, we should be willing to accept that FSCO/I points to an act of design as its most credible cause. This, unless we can show good reason that a designer is impossible in the situation; which would force us to infer to lucky noise, however otherwise improbable, as the best remaining explanation.
The process of protein translation in the living cell’s ribosome, is a good case in point:
Fig. B: Protein translation in action [Also, cf a medically oriented survey here.] (Courtesy, Wikipedia)
Here, we see a ribosome in action, with the mRNA digitally coded tape triggering successive amino acid additions to the growing protein, as tRNA “taxi” molecules lock onto the successive three-letter genetic code codons and then serve as position-arm devices for the loaded AAs, which click together. This continues until a stop codon triggers cessation and release of the protein. That protein is then folded, perhaps agglomerated with other units, possibly activated, and put to work in [or out of] the cell.
On a simple calculation, since each base in the mRNA has four possible states, a three-letter codon has 4^3 = 64 possible values, as assigned in the standard D/RNA codon table (or its minor variants). 500 bases, or just under 170 codons, reaches the 1,000-bit storage capacity threshold.
A typical protein runs to about 300 amino acids (some 900 coding bases), and there are thousands of proteins in most life forms.
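The codon and bit arithmetic can be verified directly; a sketch using only the figures in the text:

```python
from math import log2

# Each base has 4 possible states, i.e. log2(4) = 2 bits of storage
# capacity; a three-base codon therefore has 4^3 = 64 possible values.
bits_per_base = log2(4)            # 2.0
codon_values = 4 ** 3              # 64

# 500 bases span just under 170 codons and carry 1,000 bits of
# storage capacity, the threshold discussed in the text.
bases = 500
print(bases / 3)                   # ~166.7 codons
print(bases * bits_per_base)       # 1000.0 bits
```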
So, if we trust the sign of FSCO/I, we have good reason to infer that the living cell as we observe it is a designed entity.
That this is a serious point has long been recognised by origin of life investigators, as we may see from J. S. Wicken’s famous 1979 remark that has so often featured in this series:
‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. Nb: “originally” is added to highlight that for self-replicating systems, the blue print can be built-in. Also, since complex organisation can be analysed as a wiring network, and then reduced to a string of instructions specifying components, interfaces and connecting arcs, functionally specific complex organisation [FSCO] as discussed by Wicken is implicitly associated with functionally specific complex information [FSCI]. )]
J S Wicken hoped that something like natural selection could explain the source of the functionally specific and complex organisation with associated information [FSCO/I] in life forms, but as post no 4 in this series shows, the infinite monkeys theorem blocks this as a practical possibility, once we recognise that a designer is possible at all in a given situation.
This brings up the root problem in current origins science. Adherents of evolutionary materialism — who happen to dominate relevant scientific, academic, educational, media and policy institutions in our day — are implicitly imposing an a priori materialism. [Cf. the NSTA stance as was discussed here, and underlying analysis here.]
So, we are back to Lewontin’s a priori imposition of evolutionary materialism:
It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.
[From: “Billions and Billions of Demons,” NYRB, January 9, 1997. Bold emphasis added. Cf discussion here.]
To this, leading ID thinker Philip Johnson’s rebuttal is apt:
So, in the end, we have to ask ourselves whether science is to be redefined as the best evolutionary materialist account of the origins and operations of the cosmos, from hydrogen to humans, or whether we will insist instead on freedom in science and science education.
For, science at its best is or should be:
. . . an unfettered (but ethically and intellectually responsible) progressive pursuit of the truth about our world, based on observation, experiment, analysis, theoretical modelling and informed, reasoned discussion.
21 Replies to “ID Foundations, 5: Functionally Specific, Complex Organization and associated Information as empirically testable (and recognised) signs of design”
kf, not to detract from this excellent entry you have made, but that ambulance picture reminded me of this song, and the deeper spiritual implications involved in all this.
Nickelback – Savin’ Me
A considerable exchange on the general theme developed in this bias in academe thread, from comments 22 – 51.
GEM of TKI
PS: BA, thanks, but the vid is blocked where I am “on copyright grounds.”
kf, shame it doesn’t work, perhaps this one will work:
Sorry to keep the tangent going, but the music snob is out of his cage…Nickelback is pretty bad, BA. I’m surprised you listen to them. And they don’t strike me as very Christian in their themes.
You should check out Rachmaninov’s All-Night Vigil, Op. 37 (also known as Vespers). Far more beautiful and spiritual than watered-down grunge 🙂
Pardon the “off topic” nature of my post, but I have to ask: what does “GEM of TKI” stand for?
I would include the disciplines of cryptography, and particularly cryptanalysis, as examples of where modern scientists (armchair included) routinely use the inference to a designer. Think of the process of analyzing a captured signal that has been encrypted. Perhaps the signal cannot be decrypted, but the information gleaned about the signal is nonetheless useful. Indeed, the process of analyzing the external factors of signal transmission is called “traffic analysis”. Items such as (when discernible) when; from whence; to whom; how long; and correlating factors (recent events [e.g. riots in Egypt, bombings in Afghanistan, elections in Africa, etc.]) are collected and compiled. Oddly enough, it is a subjective task for sure, but sufficiently reliable in the eyes of policy makers to justify huge expenditures of tax money on the collection process. (NSA, CIA, DIA, MI6, GCHQ, Mossad, Shin Bet, etc.)
(Bere, that said something to me. Never mind the genre is way out of my usual.)
You are right, circumstantial analysis, of how a communication — even absent reading content — integrates with other credibly relevant factors is informational and significant.
Indeed, it would fit into the network of nodes, interfaces and arcs model that is itself reducible to a bits measure. Only, the nodes would be points in a space with time as a relevant element and probably geographical space as well.
Wiki has a useful 101, here.
The very presence of a communication under a given protocol of encoding and/or modulation, of course, indicates that there is deliberate action, for reasons I have discussed here, on the nature of the communications network. The complex functional integration of such a network allows us to infer a lot from the very existence of a signal of sufficient complexity that we can be reasonably sure it is signal, not noise.
Then, the traffic patterns can tell us much more. I’ll pick from the Wiki:
GEM of TKI
PS: GEM, personal initials, TKI, my consultancy and Christian service persona; The Kairos Initiative.
PPS: Traffic analysis through Blogger just now is telling me that there is significant — for such a special, narrow focus blog [a course, really] — interest in my discussion of cosmology and timelines, over at my IOSE draft Origins Science course. Some may be friendly, some hostile, of course.
After reading about translation above, I was reminded how, a while ago, I was studying the fascinating process of DNA replication, because I wanted to learn whether this is an evolvable, reducible process. If it is reducible, which sub-function of this unified system can we remove and still have, for example, slower or partial replication? Is it possible for this complex system to organize itself?
After DNA helix replication (duplication) in the cell, we see two DNA helices. We have to assume a copying function was performed on the DNA helix. We know that a copying function, whether done by a photocopy machine, a computer, or discrete chemical assemblies, has to be a very precise, organized and coordinated event.
There seem to be a few events that are absolutely critical, to the point of logical necessity. If any of them were done in the wrong order, at the wrong place, or with the wrong strength, we would not get replication.
Assemblies like polymerase, primase and helicase already have functions that could be used in some other cell process (modularity). Further, each is assembled from discrete chemical components (folded proteins) which are arranged by some logic to provide their “standalone” functions. There seem to be layers of organization before replication is done. There are pre- and post-replication events, but I can’t tell whether there is a proper event boundary between these or whether they all combine.
We could even describe replication with a few simple mathematical symbols, f(x) = 2x, and consider the replicator as a “black box” defined only by its function, without knowing the inner details. The simplicity could be deceiving, though, because the input and output of the function f are rigidly interdependent by the rules of mathematics and logic.
In the end, I understand just basic biology, so it would be nice to have help from a biologist to clarify this.
I program automated systems (PLCs, robotics), so when I learn about cell systems like this I may be biased to look at them as programmed nano-machines.
Your instincts are right.
I would not be overly concerned over whether the process in space and time, with its material flows and process logic to, say, duplicate the DNA helix, is “irreducibly complex,” so much as whether it is functionally specific, step by step, and organised in the sort of space-time-events node network that we just described for traffic-flow analysis.
The reason IC is important in ID is that IC systems become so complex so fast that they easily implicitly pass the FSCI threshold. (Cf remarks here on factors C1 – 5 in no 3 in the series.)
Once we are dealing with complex wiring diagrams like that, the very functional network with events, processes and components is itself a highly organised entity, one that has to be finely tuned to work; implying much functionally specific complex information.
When you have to orchestrate the discrete state control sequence for a PLC, and maybe the servo aspects as well for a robot — especially if it is integrated into an assembly line, you are carrying out the sort of automated, coordinated organization we are discussing, complete with synchronising timing diagrams.
Pardon me, but I think that those who imagine that the cell, which is a vastly more complex and tightly integrated automated system than our factories, just self-assembled on chance configurations and the forces of relatively few natural laws, have not really thought through what is involved.
If you have had to design and/or program a microcontroller system that has to do something real, you will understand the point with great force. That is why it sounds so familiar to you, I think.
GEM of TKI
PS: Curious, is it you over in Germany- Poland- Lithuania there?
OT again kf but I think you will appreciate this:
How much information is there in the world? Scientists calculate the world’s total technological capacity
Excerpt: • Looking at both digital memory and analog devices, the researchers calculate that humankind is able to store at least 295 exabytes of information. (Yes, that’s a number with 20 zeroes in it.)
Put another way, if a single star is a bit of information, that’s a galaxy of information for every person in the world. That’s 315 times the number of grains of sand in the world. But it’s still less than one percent of the information that is stored in all the DNA molecules of a human being.
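The galaxy-per-person comparison checks out on round figures. This is a sketch; the ~7 billion population and the ~3 * 10^11 star count of a large spiral galaxy are assumed round numbers, not from the excerpt:

```python
# 295 exabytes of storage, expressed in bits.
exabyte_bytes = 10 ** 18
total_bits = 295 * exabyte_bytes * 8     # ~2.4e21 bits

# Assumed 2011 world population of roughly 7 billion people.
population = 7e9
bits_per_person = total_bits / population

# ~3.4e11 bits per person: about the star count of a large spiral galaxy.
print(f"{bits_per_person:.2e}")
```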
PS: Eugen, I added a medically oriented survey of the synthesis process above to Fig B. Also, cleaned up some unruly formatting that seems to have gone awry after I posted initially.
Thanks for finishing my train of thought.
When programming, we have to go down to the last logical piece of information to understand some process or event. Building a logically interdependent system is not an easy task. First comes an idea, then a concept, and after that algorithms are built. At the end of the line, it’ll be my idea that creates an organized physical process (reality), not the other way around.
This is the reason I have a bit of a problem understanding bottom-up organized systems. Therefore I want to visualize the way they self-organize, step by step, into logically unified, interdependent complex units.
I’m not sure if biological systems can be understood this way, but I’m trying.
So far, your series was quite an eye opener for me (I look like an owl now).
I’m originally from Croatia, but for the last couple of decades I’ve been living in Canada.
The clue in what you just discussed is what happens when we look at a properly programmed system at work, where, seemingly like magic, step by step, at place after place across a complex entity, we see smoothly meshed, integrated, synchronised activity, all fulfilling a purpose.
Like, well, clockwork.
And, in the case in view [and as discussed in the just linked], clockwork that in the course of its movement has the surprising ability to assemble a copy of itself.
So, we have to explain the existence and operation of such an integrated complex functional networked system, that on stored coded symbolic information not only manufactures sub-components, but has the ability to automatically replicate itself. (Think about what you would have to add to a factory to get it to read its own blueprints for the shell and for the equipment, fixtures and fittings etc, then make its own key parts and replicate itself automatically when it reached a certain stage!)
If you take the time to look at the link that I added on the action of protein synthesis, at Fig B above, you will see that I have given the simplest outline of what is going on, above, just the final step by step process. There are many factors that are also at work to promote the primary parts. Factors that are in some cases, proteins made by the very same process.
“Chickens and eggs” [as in which comes first as cause?] in causal loops everywhere; highlighting that the origin of the system is distinct from its ability to operate and replicate itself through a vast network of interlocking cyclical processes.
And, we have a clue: the existence of the sort of tightly integrated, complex, co-ordinated, specifically functional organisation that points to design as the best explanation — on induction from experience, backed up by analysis of the possible but overwhelmingly non-functional configurations, leading to the conclusion that deeply isolated islands of function are not plausibly accessible to random walks from arbitrary initial points on the gamut of our observed cosmos.
Choice, rather than chance, configuration and functionally integrated organisation as a result of a pattern of knowledgeable, purposeful, methodical choice.
Denton has an astonishing summary discussion in his now classic Evolution, a Theory in Crisis [well worth the reading], that we should ponder:
>> To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometers in diameter [[so each atom in it would be “the size of a tennis ball”] and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometer in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell.
We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines . . . . We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of components, error fail-safe and proof-reading devices used for quality control, assembly processes involving the principle of prefabrication and modular construction . . . . However, it would be a factory which would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours . . . .
Unlike our own pseudo-automated assembly plants, where external controls are being continually applied, the cell’s manufacturing capability is entirely self-regulated . . . .
[[Denton, Michael, Evolution: A Theory in Crisis, Adler, 1986, pp. 327 – 331.] >>
Now, if we could design and build factories, robots, vehicles etc that could do this, that would be a transformative change to our technological base.
GEM of TKI
PS: This wiki survey and onward links may prove interesting. You will notice the updated form of the fig in B above too!
Thanks for the links. I’ll check them slowly, one by one. Imagine this: management has the audacity to ask me to do some work instead of reading.
Another thing caught my eye. Cell signal transduction looks awfully similar to signal transduction principles we use in automated systems. I’ll have to read on that, too. So much to read… it would be nice to go for a few months retreat to study all these interesting issues.
I never read Denton’s book, but I like his cell description. It seems that’s how we could imagine it: as a well coordinated, sophisticated chemical nano-factory. If it turns out that is correct, I would be a little scared of the programmer.
Yet another one of those functionally specific, complex nodes, interfaces and arcs networks.
And, you’re right, the programmer is obviously at a scarily different level than we are.
GEM of TKI
The objection that Mark is raising here is actually one for Theology. The central point of the objection is that ID is not scientific because it does not include within it a testable feature of the designer itself. There are several problems here.
Firstly, the question of who the designer is (what its nature, purpose and the limits of its prowess are, etc.) is not the focus or purpose of the theory of ID at all. ID is concerned with whether something IS or IS NOT designed. The definition of ID is “the theory that certain patterns in nature are best explained as the product of intelligence.” Period. So the objection about the intention of a designer is arbitrary to begin with; however, it does imply a real theological inquiry.
Secondly, it is a very interesting admission when Mark says the purposes or intentions of a designer ARE testable, because to Mark one can judge the legitimacy of the nature of the designer by the effects attributed to it. However, if this is true, then it is also true that you can judge the cause of the designed object in question by its nature as well; for if you cannot even discern whether something is designed to begin with, then you certainly cannot discern whether it is designed for the purpose it is presupposed to be designed for.
Next, the reason why this is a theological inquiry is that the objection concerning the intention of the designer is really about the nature of the designer itself. The problem here, just as with the question of evil in theology and many things in life, is that looks can be deceiving. For many Christians, the fall of man is the reason why things that were originally meant to be good are now left flawed. But many other, more skeptical theists, or those who are not biblical literalists, see flaws in designs in nature as being there for a purpose (a challenge or a spiritual purpose) for mankind, who needs to overcome them in order to become spiritually stronger, enlightened, complete, etc. Or, in a nutshell, as Kairosfocus likes to put it, “for the purpose of building souls”. Beautifully put.
So the issues concerning the intentions of the designer are actually not easily testable, because the purposes of the designer can be equally, and likely more, complex than the purposes behind the most complex paintings, sculptures, works of literature and other forms of art. Only the artists themselves truly know the intention behind the design of their work; yet many things can still be inferred and understood, just not all things.
So I think Mark’s admission that the Designer’s motives can be testable is little more than a bait-and-switch tactic, one that seeks to put the designer into a small box made of straw that can perhaps be more easily knocked over. I am not saying this was Mark’s intention, but it has long been one of the debating tactics of many before him.
It looks like you have cross-threaded, as MF does not appear above?
Mind you, the only feature of a designer relevant to design theory is the power of intelligent, knowledgeable, purposefully directed choice, multiplied by the skill to put that choice on the ground.
As the Glasgow metric of consciousness shows, such an inference to intelligence can be seen from its observable and measurable behaviours.
I was referring to the quote you used of Mark’s
You used that above to begin your thread here. Read that again and then you should see what my post is referring to. Pardon the confusion.
PS: The discussions that relate to this thread continued up to 99 at the other thread as noted at 2 above.
Your remark above cuts to the heart of the issue:
It is a key point that once design is on the table, we may legitimately ask whether there are observable signs that on warrant to best explanation, point to design as cause.
So long as design is an observed cause, that is a legitimately scientific investigation, and one where the reliability of claimed signs is subject to empirical investigation and potential falsification.
MF’s objection falls to the ground.
The theological side is also interesting, and indeed the best inferences to the purposes of a designer of say the cosmos [the only level where such an inference is inherently on the table, i.e. inference to design is about direct cause] leads to a powerful, intelligent being who set up a finely tuned cosmos suitable for life.
That invites the further point that we are looking at radical contingency for the observed cosmos, raising the issue of a necessary being to explain it.
A necessary (and thus eternal: no dependence on external necessary causes) being who is powerful, deeply knowledgeable and skilled enough to set up a cosmos suited for the kind of life we enjoy sounds just a bit familiar.
The old psalmist just may well have a point or two: the heavens declare the glory of God, and the firmament [= expanse] sheweth his handiwork.
GEM of TKI
Can you write a computer program that will determine that then?
You make it sound so simple. So it should take what, 5-10 lines of code?
Go for it.
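For reference, the storage-capacity side of the metric the post uses really is short to code; judging whether a string is functional and specific is the separate, contextual step no snippet supplies. A minimal sketch, with the 128-symbol ASCII alphabet and 500-bit threshold taken from the post and the helper names chosen here for illustration:

```python
from math import log2

# Storage capacity in bits = string length x bits per symbol.
# Whether the string is *functional and specific* is a separate,
# contextual judgement this code does not (and cannot) make.

def storage_bits(text: str, alphabet_size: int = 128) -> float:
    return len(text) * log2(alphabet_size)

def past_threshold(text: str, threshold_bits: int = 500) -> bool:
    return storage_bits(text) >= threshold_bits

print(past_threshold("x" * 72))   # True: 72 x 7 = 504 bits
print(past_threshold("short"))    # False: 5 x 7 = 35 bits
```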
Another question, if I may.
You say there are billions of examples of FSCI out there on the internet. It seems to me there are not. And I’ll tell you why.
Which of these strings has more FSCI?
If your “billions and billions” of examples really are measuring the “information” inside each of those billions of examples of FSCI then this should be an interesting experiment.