Uncommon Descent Serving The Intelligent Design Community

Functional information defined


What is function? What is functional information? Can it be measured?

Let’s try to clarify those points a little.

Function is often a controversial concept. It is one of those things that everybody apparently understands, but nobody dares to define. So it happens that, as soon as you try to use the concept in some reasoning, your kind interlocutor immediately stops you at the start with the following smart request: “Yes, but what is function? How can you define it?”

So, I will try to define it.

A premise. As we are not debating philosophy, but empirical science, we need to stay close to what can be observed. So, in defining function, we must stick to what can be observed: objects and events, in a word, facts.

That’s what I will do.

But as usual I will include, in my list of observables, conscious beings, and in particular humans. And all the observable processes which take place in their consciousness, including the subjective experiences of understanding and purpose. Those things cannot be defined other than as specific experiences which happen in a conscious being, and which we all understand because we observe them in ourselves.

That said, I will try to begin introducing two slightly different, but connected, concepts:

a) A function (for an object)

b) A functionality (in a material object)

I define a function for an object as follows:

a) If a conscious observer connects some observed object to some possible desired result which can be obtained using the object in a context, then we say that the conscious observer conceives of a function for that object.

b) If an object can objectively be used by a conscious observer to obtain some specific desired result in a certain context, according to the conceived function, then we say that the object has objective functionality, referred to the specific conceived function.

The purpose of this distinction should be clear, but I will state it explicitly just the same: a function is a conception of a conscious being, it does not exist  in the material world outside of us, but it does exist in our subjective experience. Objective functionalities, instead, are properties of material objects. But we need a conscious observer to connect an objective functionality to a consciously defined function.

Let’s take an example.

Stones

I am a conscious observer. At the beach, I see various stones. In my consciousness, I represent the desire to use a stone as a chopping tool to obtain a specific result (to chop some kind of food). And I choose one particular stone which seems to be good for that.

So we have:

a) The function: chopping food as desired. This is a conscious representation in the observer, connecting a specific stone to the desired result. The function is not in the stone, but in the observer’s consciousness.

b) The functionality in the chosen stone: that stone can be used to obtain the desired result.

So, what makes that stone “good” to obtain the result? Its properties.

First of all, being a stone. Then, being in some range of dimensions and form and hardness. Not every stone will do. If it is too big, or too small, or with the wrong form, etc., it cannot be used for my purpose.

But many of them will be good.

So, let’s imagine that we have 10^6 stones on that beach, and that we try to use each of them to chop some definite food, and we classify each stone for a binary result: good – not good, defining objectively how much and how well the food must be chopped to give a “good” result. And we count the good stones.

I call the total number of stones the Search space.

I call the total number of good stones the Target space.

I call –log2 of the ratio Target space/Search space the Functionally Specified Information (FSI) for that function, in the system of all the stones I can find on that beach. It is expressed in bits, because we take –log2 of the ratio.

So, for example, if 10^4 stones on the beach are good, the FSI for that function in that system is –log2 of 10^-2, that is 6.64386 bits.

What does that mean? It means that one stone out of 100 is good, in the sense we have defined, and that if we pick one stone at random on that beach, the probability of finding a good stone is 0.01 (2^-6.64386).
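The stone computation can be reproduced in a few lines of Python (the function name `fsi_bits` is my own label, not terminology from the post):

```python
from math import log2

def fsi_bits(target_space: int, search_space: int) -> float:
    """Functionally Specified Information: -log2(target/search)."""
    return -log2(target_space / search_space)

# The example from the post: 10^4 good stones out of 10^6 on the beach.
bits = fsi_bits(10**4, 10**6)
print(round(bits, 5))   # 6.64386
print(2**-bits)         # ~0.01: probability of picking a good stone at random
```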

I hope that is clear.

So, the general definitions:

c) Specification. Given a well defined set of objects (the search space), we call “specification”, in relation to that set, any explicit objective rule that can divide the set into two non-overlapping subsets: the “specified” subset (target space) and the “non-specified” subset. IOWs, a specification is any well defined rule which generates a binary partition in a well defined set of objects.

d) Functional Specification. It is a special form of specification (in the sense defined above), where the specifying rule is of the following type: “The specified subset in this well defined set of objects includes all the objects in the set which can implement the following, well defined function…”. IOWs, a functional specification is any well defined rule which generates a binary partition in a well defined set of objects, using a function defined as in a) and verifying whether the functionality, defined as in b), is present in each object of the set.

It should be clear that functional specification is a definite subset of specification. Other properties, different from function, can in principle be used  to specify. But for our purposes we will stick to functional specification, as defined here.
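The binary partition in c) and d) can be sketched as code. The stone attributes and the “can chop food” rule below are hypothetical stand-ins of mine, not measurements from the post:

```python
# A specification is any well defined rule that splits a set of objects
# into a "specified" subset (target space) and a "non-specified" one.
def partition(objects, rule):
    target = [o for o in objects if rule(o)]
    non_target = [o for o in objects if not rule(o)]
    return target, non_target

# Toy functional specification: "can be used to chop food", modeled here
# as a size-and-hardness predicate (hypothetical values).
stones = [{"size_cm": s, "hardness": h} for s in (2, 8, 15, 30) for h in (3, 7)]
can_chop = lambda o: 5 <= o["size_cm"] <= 20 and o["hardness"] >= 5

target_space, rest = partition(stones, can_chop)
print(len(target_space), len(rest))   # 2 6
```

Any rule of this form yields the binary partition the definition requires; a functional specification is simply the case where the rule tests for a functionality.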

e) The ratio Target space/Search space expresses the probability of getting an object from the target space by one random search attempt, in a system where each object has the same probability of being found by a random search (that is, a system with a uniform probability of finding those objects).

f) The Functionally Specified Information (FSI) in bits is simply –log2 of that ratio. Please note that I imply no specific meaning of the word “information” here. We could call it by any other name. What I mean is exactly what I have defined, and nothing more.

One last step. FSI is a continuous numerical value, different for each function and system.  But it is possible to categorize  the concept in order to have a binary variable (yes/no) for each function in a system.

So, we define a threshold (for some specific  system of objects). Let’s say 30 bits.  We compute different values of FSI for many different functions which can be conceived for the objects in that system. We say that those functions which have a value of FSI above the threshold we have chosen (for example, more than 30 bits) are complex. I will not discuss here how the threshold is chosen, because that is part of the application of these concepts to the design inference, which will be the object of another post.
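The threshold step can be sketched directly (the 30-bit value is the example from the text; the function names are mine):

```python
from math import log2

def fsi_bits(target_space: int, search_space: int) -> float:
    """FSI in bits: -log2 of the target/search ratio."""
    return -log2(target_space / search_space)

def is_complex(target_space: int, search_space: int, threshold: float = 30) -> bool:
    """Categorical judgment: does the function's FSI exceed the threshold?"""
    return fsi_bits(target_space, search_space) > threshold

# 1 good object out of 10^6 gives ~19.9 bits: below a 30-bit threshold.
print(is_complex(1, 10**6))    # False
# 1 good object out of 10^12 gives ~39.9 bits: above it.
print(is_complex(1, 10**12))   # True
```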

g) Functionally Specified Complex Information is therefore a binary property defined for a function in a system by a threshold. A function, in a specific system, can be “complex” (having  FSI above the threshold). In that case, we say that the function implicates FSCI in that system, and if an object observed in that system implements that function we say that the object exhibits FSCI.

h) Finally, if the function for which we use our objects is linked to a digital sequence which can be read in the object, we simply speak of digital FSCI: dFSCI.

So, functionally specified information (FSI) is a subset of specified information (SI), and digital FSI (dFSI) is a subset of FSI. Each of these can be expressed in categorical form (complex/non-complex).

Some final notes:

1) In this post, I have said nothing about design. I will discuss in a future post how these concepts can be used for a design inference, and why dFSCI is the most useful concept to infer design for biological information.

2) As you can see, I have strictly avoided discussing what information is or is not. I have used the word with a specific definition, and with no general implications at all.


3) Different functionalities for different functions can be defined for the same object or set of objects. Each function will have different values of FSI. For example, a tablet computer can certainly be used as a paperweight. It can also be used to make complex computations. So, the same object has different functionalities. Obviously, the FSI will be very different for the two functions: very low for the paperweight function (any object in that range of dimensions and weight will do), and very high for the computational function (it’s not so easy to find a material object that can work as a computer).


4) Although I have used a conscious observer to define function, there is no subjectivity in the procedures. The conscious observer can define any possible function he likes. He is absolutely free. But he has to define the function objectively, and how to measure the functionality, so that everyone can objectively verify the measurement. So, there is no subjectivity in the measurements, but each measurement refers to a specific function, objectively defined by a subject.

Comments
Piotr, HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH wrt a coin flip is not a "regular pattern". And the explanatory filter takes care of regular patterns in nature.
By “fully specified” I mean a concrete unique sequence of Hs and/or Ts.
That isn't what anyone else calls a "fully specified" sequence. Specify the sequence beforehand and then flip a coin to match it. Tell us how long it takes you.
Joe — May 7, 2014, 04:28 AM PDT
KF: Yes, the comma/dot inversion is a real nuisance for us Italians! I usually try to remember, but it is easy to err. We use the comma as decimal marker, and the dot as thousands separator! :)
gpuccio — May 7, 2014, 02:46 AM PDT
GP: Popped back by, thanks. I hope this thread helps the others out there understand the significance of the islands of function challenge and why it makes FSCO/I and dFSCI so relevant to seeing why the design inference is analytically well grounded and empirically plausible. Though there is none so blind as one who will not see. KF PS: You may need to note you use a comma for the decimal marker. (Us anglophones use a dot, perhaps raised. My HP 50 gives the choice of course.) PPS: Mapou, yes, transfinite nos -- infinities -- are all over modern math, and you may want to look at the recent rise of nonstandard analysis which builds on and regularises ideas in Newton etc regarding calculus foundations. I think it is more intuitive than the limits formulation, and just as rigorous now. What we cannot do is instantiate a transfinite, step by step, e.g. count up to aleph-null.
kairosfocus — May 7, 2014, 02:37 AM PDT
Piotr: Your post #90 is quite fair. Just a couple of comments: "many false positives" is true only if the improbability is not so great. For a 500 H sequence, there never will be, empirically, a false positive. It will always be explained by some necessity or design explanation. "But (as we have already seen) regular patterns in nature don't demonstrate "design" by an intelligent agent." Well, if we can safely exclude algorithmic explanations, they do demonstrate design. I agree that, for regular patterns, it is more difficult to exclude algorithmic explanations. But that is very easy when meaning and function, rather than order, are the specifying rule. "Humans see them as special because we are particularly good at detecting patterns and regularities in our environment." That is true: you always need a conscious intelligent agent to detect meanings, and patterns are a special form of meaning. But that does not mean that the functionality for being detected as a pattern (or as a meaning, or as a function) is not objectively in the object (see my initial distinction in the OP: I knew it would come useful sooner or later! :) )
gpuccio — May 6, 2014, 10:23 PM PDT
Mung: Thank you to you, too. Excellent contributions. I appreciate them very much.
gpuccio — May 6, 2014, 10:13 PM PDT
KF: Thank you for your excellent intervention, in spite of all your other duties. I really appreciate it. Very good thoughts, and very useful in the specific context (is your post designed? :) ) Very good work, as usual!
gpuccio — May 6, 2014, 10:11 PM PDT
Piotr: I will wait for your return... :) In the meantime, I will try to clarify where we are with your last comment, IMO:
By “fully specified” I mean a concrete unique sequence of Hs and/or Ts. A unique sequence, not a class of sequences. I leave aside the question whether the sequence is “designed” or not. Nobody can tell that in the general case, anyway, even if they think they can. If you think you can, please tell me if the sequence in post #66 is “designed”, and if it is, how much “functional information” it contains.
What do you mean? When we compute the probability of one event, the first thing we must do is to define the event well. There is no difference if the event is one sequence or a class of sequences. Let's stick to our 500 coin flips (OK Mung, let's say we know they were flipped! :) ). So, you can define the "event" for which you compute the probability as:

a) "This specific sequence". That's OK, but you must write the sequence in advance. That is pre-specification. You cannot use this post-hoc.

b) A class of sequences, defined by some formal property. So, if the formal property is "all heads", the class includes only one sequence, and the probability in one flip is 1:2^500. If the formal property is "any kind of sequence", the probability is 1. If the formal property is "a sequence of the same symbol", the probability is 2:2^500. And so on.

KF and Mung have given excellent explanations of how we can generalize to well defined levels of order. It is obvious that in a random 500 bit sequence, some parts will look "ordered": there will be short repetitions or alternations, for example. Indeed, it is extremely likely that some of that will be there. As you certainly know, the complete lack of any repetition would be, again, a special form of order, and would make the sequence extremely unlikely: that kind of sequence is, again, peculiar and you will never see it. But, for each well defined class of events, we can compute a probability and a binomial probability distribution (success - non success) for repeated attempts.

The simple fact is, for very large search spaces, you will never get peculiar sequences that belong to extremely unlikely classes. You will always get non peculiar sequences, which become peculiar only if you pre-specify them by giving explicitly, bit by bit, all or a great part of the information in the sequence (see a) ). IOWs, for large search spaces you never get ordered, largely compressible results by chance.
You can obviously get them by necessity, but I have already discussed that. I suggest that you carefully consider the interventions of KF and Mung in the previous few posts. Finally, I have answered your "nucleotide sequence challenge" in my post #72. I think we have to start from that, for further discussion on this point. Please, take all the time you need to answer (if you like, obviously :) ). Work has its priorities, like sleep...
gpuccio — May 6, 2014, 10:09 PM PDT
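gpuccio's distinction above between a pre-specified sequence and a class of sequences can be made concrete (a sketch of mine, not part of the thread; exact fractions avoid any floating-point loss at these magnitudes):

```python
from fractions import Fraction

n = 500
total = 2**n  # number of possible 500-flip sequences

# a) one pre-specified sequence: a class with a single member
p_prespecified = Fraction(1, total)

# b) classes defined by a formal property
p_all_heads = Fraction(1, total)         # "all heads": one sequence
p_same_symbol = Fraction(2, total)       # "same symbol throughout": HH...H or TT...T
p_any_sequence = Fraction(total, total)  # "any kind of sequence": certainty

print(p_any_sequence == 1)               # True
print(p_same_symbol == 2 * p_all_heads)  # True
```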
PS I disagree with Gpuccio, but have to run to work now, so I'll reply later.
Piotr — May 6, 2014, 09:41 PM PDT
Eric Anderson @89 Each single one of them is unique. Some are "special" in the sense that they are regular and could be generated by a relatively simple algorithm. Humans see them as special because we are particularly good at detecting patterns and regularities in our environment. When we see a regularity we suspect (often with good reason, though with many false positives) that there's something more than blind chance at work. But (as we have already seen) regular patterns in nature don't demonstrate "design" by an intelligent agent. There's usually a prosaic explanation. I'd actually be less surprised to see something like HHHHHHHHHHHH... (fake coin?) or HHHHTHHHTHHH... (unfair coin?) than HTHTHTHTHTHT... (trick or miracle?).
Piotr — May 6, 2014, 09:40 PM PDT
Piotr @71: Just to make sure I understand where you're coming from, are you saying there is nothing unique about any of the possible sequences? Nothing that would cause us to pause and question its origin? Nothing that would give us reason to think that something else might be in play besides pure random draw? Just want to make sure I understand your position. ----- Incidentally, gpuccio has answered spot on with respect to #66.
Eric Anderson — May 6, 2014, 08:46 PM PDT
kf, I have a link here somewhere to that Nash text. I shall have to shell out the dollars! (not that it's all that expensive) Thank you for bringing it to my attention. God bless you my friend!
Mung — May 6, 2014, 07:06 PM PDT
PS: A state near 50-50, in no particular order, has vastly more statistical weight than all-H and is vastly more likely. The valid form of the law of averages. KF
kairosfocus — May 6, 2014, 06:28 PM PDT
Mung, Binomial, sharp peak near evens. BTW that is the first big example in L K Nash's excellent intro to Stat Mech. Hate to say it but the chemist did a better job than all the physicists! KF
kairosfocus — May 6, 2014, 06:26 PM PDT
kf @ 82, exactumly, or approximately precisely that, anyways. In any sequence consisting of two symbols, there are only two possible sequences in which all H or all T appear. Assuming each symbol is equi-probable, as the length of the sequence increases the probability of the sequence being (all H or all T) decreases. logarithmically? We can then create the following specifications: all H or all T save x. eg. all H or all T except one, all H or all T except two, all H or all T except three, all H or all T except four, etc. It's far more likely that you will get something closer to the middle of that curve than toward the ends. Take two huge 250-sided dice, each face of equal dimensions. Inscribe each face with a number from 1 through 250. Tossing each die individually 250,000 times appears to indicate that no number is more likely to appear than any other. Then toss the dice together and they come up 1,1 or 250,250. Well SOMETHING had to come up, right?
Mung — May 6, 2014, 06:00 PM PDT
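Mung's and KF's point about the binomial "sharp peak near evens" can be checked directly: each individual 500-flip sequence is equally improbable, but the class "exactly 250 heads" contains astronomically more sequences than the class "all heads". (A sketch of mine, not part of the thread; Python's integers are arbitrary-precision, so 2^500 is exact.)

```python
from math import comb

n = 500
total = 2**n                 # 2^500 ~ 3.27 * 10^150 possible sequences

all_heads = 1                # exactly one sequence is all H
near_even = comb(n, n // 2)  # sequences with exactly 250 heads

# Each single sequence has probability 1/2^500, but the near-even
# class dwarfs the all-heads class in statistical weight.
print(near_even)             # ~1.17 * 10^149 sequences
print(near_even / total)     # ~0.0357: probability of exactly 250 heads
print(all_heads / total)     # ~3.05 * 10^-151: probability of all heads
```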
Mung, not being fair includes ye olde 2-header. KF
kairosfocus — May 6, 2014, 05:45 PM PDT
Piotr:
I would treat such as result as proof that the coin isn’t fair (and I’d hypothesise that it most likely has Heads on either side).
I would challenge the hypothesis that the coin was tossed at all. :) What difference does it make if the coin is fair if it's not being tossed?
Mung — May 6, 2014, 05:36 PM PDT
P: If I came across a line of 500 ordinary coins, all H, then, for good reason tied to the relative statistical weights of the all-H state vs the dominant cluster of near 50-50 in no particular order, I would with empirical certainty conclude design. And for excellent reason. The cases relevant to design put this toy case -- only 3.27*10^150 possibilities for 500 coins -- on steroids. And BTW, if a maze required 500 turns in a specific and singular pattern, a rat that ran it with all but certainty is not doing so by blind trial and error. I would suspect a scent trail. KF
kairosfocus — May 6, 2014, 05:24 PM PDT
GP: Excellent work as usual. It's a bit of a pity that I have so many irons in so many Caribbean fires just now. (And BTW, many of them pivot on the difference between recessions and stagflations with creative destruction at work. I have had to be brushing off some economics. And some thermodynamics [for Geothermal energy development -- looks like so far 2 MW potential identified here], a bit of mechanical analogue computing [fascinating subject, led me to glance at gunnery at Jutland and Dreyer vs Argo . . . ] and and and . . .) It never rains but pours. At UD too. A few quick points: 1 --> I note that dFSCI is WLOG, as complex functionally specific organisation . . . think 3-D exploded view type nodes and arcs as was looked at years back in the early ID founds posts . . . can be reduced to strings of coded digits. As in AutoCAD etc. 2 --> The issue is local isolation of islands of function in the space of configs. If deep enough, a solar system or observed cosmos scope blind search is maximally implausible as a good explanation compared with design. 3 --> Of course, you are the source for that Islands terminology, at least for me. Though it seems WmAD used it waaaaay back. 4 --> Protein groups in AA possibility space is a good example of the issue. 5 --> A version on Hamming distance as a metric of sequence dissimilarity can be used to construct an abstract space. 6 --> The point then is, first get to a functioning cell in a pond. That stretches the space to organic chem structures, leading to functionally co-ordinated clusters. Star wars poly life architecture scenarios don't get us away from the point that with the FSCO/I involved EVERY cluster is deeply isolated. (Do we understand the gamut of the space of chemical possibilities, the energetics and where it points?) 7 --> Onward novel function must fit with core cell life function and must be reproducible. Again, local isolation is a killer. 
8 --> In addition, there is a very good reason why we only actually observe FSCO/I, and especially dFSCI beyond say 500 bits, arising by deliberate action. The seas of non-function are vastly beyond. BTW, just as while there are countably infinite rational nos, the continuum of all numbers utterly dwarfs them. The transcendentals rule! (And, a search of the space of 500 coins by the 10^57 solar system atoms for its lifespan would sample the space as a 1-straw-size sample to a cubical haystack 1,000 LY across. If superposed on our galactic neighbourhood, such would turn up straw with all but certainty, on the standard results for blind, random samples.) 9 --> So, we see analysis on search challenge and empirical observation mutually reinforcing. KF
kairosfocus — May 6, 2014, 05:18 PM PDT
#69, Yes design is there, but so too a materialist explanation of the physical processes. It's not either-or, but both.
rhampton7 — May 6, 2014, 03:05 PM PDT
Piotr, If you flip a coin the odds you will get some pattern of heads and tails is exactly 1.
Thanks for the information.
Piotr — May 6, 2014, 02:45 PM PDT
gpuccio:
I understand you are probably in Poland. I am in Italy. It’s late.
Of course, we are in the same time zone and it's bedtime here as well. See you tomorrow.
Piotr — May 6, 2014, 02:43 PM PDT
By "fully specified" I mean a concrete unique sequence of Hs and/or Ts. A unique sequence, not a class of sequences. I leave aside the question whether the sequence is "designed" or not. Nobody can tell that in the general case, anyway, even if they think they can. If you think you can, please tell me if the sequence in post #66 is "designed", and if it is, how much "functional information" it contains.
Piotr — May 6, 2014, 02:42 PM PDT
Piotr, If you flip a coin the odds you will get some pattern of heads and tails is exactly 1.
Joe — May 6, 2014, 02:38 PM PDT
Piotr: I understand you are probably in Poland. I am in Italy. It's late. Shall we leave it for tomorrow? :)
gpuccio — May 6, 2014, 02:34 PM PDT
Piotr: That "never" is empirically absolute, even if the event remains logically possible. We will never see it.
gpuccio — May 6, 2014, 02:32 PM PDT
Piotr at #71: I am not sure I understand what you are saying here. Could you please explain better what kind of specified sequence we will see flipping the coin? And why?
gpuccio — May 6, 2014, 02:27 PM PDT
Piotr at #66: Just to be clear, I will anticipate some basic concepts about the design inference. a) The design inference by dFSCI is a procedure with absolute specificity (100%, if the threshold is well chosen) and low sensitivity. b) The two main reasons for the low sensitivity are: b1) If an object is designed, but its specification is simple, it will not be possible to infer design for it. False negative type one. b2) Even if the object is specified and complex, we as observers may not be able to recognize the specification (for example, the function). False negative type two. I have no idea if the nucleotide sequence you offered has any function. And at present I have no means (and no desire) to find out. So, I will not infer design for it. Now, two scenarios are possible: 1) The sequence is not designed and my non-inference of design is a true negative. 2) The sequence is designed and my inference of non-design is a false negative. No problem in either case. If you want to falsify the procedure, you have to show that it gives false positives, and therefore that its specificity is not 100%. False negatives are routine.
gpuccio — May 6, 2014, 02:24 PM PDT
But, if we are sure that the coin is fair, and that the system is truly random, then we will never see 500 heads. This "never" is still approximate, not absolute. Otherwise, by the same token, we should never see any of the following: ...HHHHHHHHHHHHHHHHHHHHHHHHHHHT ...HHHHHHHHHHHHHHHHHHHHHHHHHHTH ...HHHHHHHHHHHHHHHHHHHHHHHHHHTT ...HHHHHHHHHHHHHHHHHHHHHHHHHTHH ...HHHHHHHHHHHHHHHHHHHHHHHHHTHT ......... ...HTHTHTHTHTHTHTHTHTHTHHTHTHTH ......... ...TTTTTTTTTTTTTTTTTTTTTTTTTTHT ...TTTTTTTTTTTTTTTTTTTTTTTTTTTH ...TTTTTTTTTTTTTTTTTTTTTTTTTTTT ... because each of them is fully specified and therefore hyperastronomically unlikely (with an a priori probability of 2^(-500) = 3.055*10^(-151)). And yet we shall see one of them if we flip the coin 500 times.
Piotr — May 6, 2014, 02:24 PM PDT
Piotr at #63: "Actually, the fewer formal constraints on the structure of 'old random sequences' (e.g. if they don't have to be periodic, palindromic, etc.), the lower their redundancy, and the larger the amount of information that can be packed into them (in Shannon's terms)." Correct! That's exactly why we need scarcely constrained material sequences to "write" (design) meaningful and functional sequences. We need what Abel calls "configurable switches". Constrained sequences are no good for that. That's why both language and software and proteins are more similar to random sequences than to ordered sequences (although, obviously, they exhibit some forms of regularity, as you certainly know well). But they can never be generated by simple algorithms (which is the distinguishing feature of ordered sequences).
gpuccio — May 6, 2014, 02:16 PM PDT
rhampton7: Realistically, 500 heads warrant one and only one explanation: the system is not random. Probably, it is designed (for fraud). Or it is simply not random because it was designed to be random, but by a bad designer. You see, design is always there in some way! :) Nobody would ever believe that the result is really sheer luck (not even darwinists: they may say so when they have no other argument left, but deep in their heart they know it is not true).
gpuccio — May 6, 2014, 02:10 PM PDT
