Uncommon Descent Serving The Intelligent Design Community

This Site Gives me 150 Utils of Utility; Panda’s Thumb Gives me Only 3


Any effort to give precise gradations of quantification to CSI is doomed to failure.  It reminds me of certain economists’ effort to quantify “utility” through a measurement called a “util.”  See here.

The more I think about it, the more I am convinced that the concepts are very much the same.  We can all agree that the concept of “utility maximization” is very important and represents a real phenomenon.  But while we can say of utility there is a lot, there is a little, or there is none at all, there is no way to measure it precisely.  The “util” is useful as a hypothetical measure of relative utility, but it has no value as an “actual” unit of measurement, such as inches, pounds, meters, or grams.

Similarly, of CSI we can say it is present or it is not present.  That is what the explanatory filter does.  In some cases we can estimate relative CSI if we are able to calculate the bits of information present in the two instances.  But not usually.  Consider a space shuttle and a bicycle.  Both obviously show CSI and a design inference is inescapable with respect to each.  It is also obvious that the space shuttle contains vastly more CSI than the bicycle.  But if one asks me “how much more CSI is there in a space shuttle than in a bicycle?” the only satisfactory answer it seems to me is “a lot more.”  I could posit a measure of CSI – call it an “info” – and say the space shuttle contains 100 infos of CSI and the bicycle contains only 10 infos.  But this is certainly a meaningless game.  Actually, it is more than meaningless.  It is affirmatively harmful, because the game gives an illusion of precise measurement where there can be none.

Why am I going on about this?  Because many materialists commenting on this site frequently say, essentially, that if one cannot quantify CSI then it is a meaningless concept.  This is false.  “Utility” cannot be quantified, but surely no one would suggest it does not exist or that it is not a useful concept in the field of economics.  Similarly, the fact that CSI cannot always be precisely quantified is no reason to suggest that it does not exist or that it is not a useful concept in the study of objects to determine whether design is the most plausible explanation for their features.

Comments
Sorry, looks like my last post got cut off because the blog doesn't like the "less than or equal to" sign. Here is the post in full:

BarryA - what both the site and your example show is that utility is only defined up to monotonic transformations. This is quite different from not being a quantitative measure! If CSI were capable of giving us an ordinal comparison between any two objects, then it would be immensely useful as a scientific concept - the problem is that CSI can't be made quantitative at all, not that it is ordinal rather than cardinal. This is the difference between philosophy and science. If I observe that you prefer A to B and that is the only choice I'm modeling, it doesn't matter if I say you get 150 utils from A and 5 utils from B or 15 utils from A and 5 from B, but it sure does matter if I say you get 3 utils from A and 5 from B! That would contradict our observation. This might seem like a pedantic point, but the entire edifice of economic theory is built on it. Your example just happens to be extremely trivial. If we consider instead the problem of you choosing from a vector of n goods (x_1,...,x_n) subject to the constraint that your total expenditures are less than your wealth (p_1 x_1 + ... + p_n x_n less than or equal to w), we quickly find that utility theory allows us to put a great deal of structure on the problem and make surprisingly general inferences if we make some weak assumptions. For instance, we can use this structure to infer that everyone can be given higher utility if we replace a marginal tax which distorts the price of one of the consumption goods with a lump-sum tax that raises the same revenue, under some weak and testable restrictions on utility.

To obtain more policy-relevant conclusions we'd have to develop more theory, but the key idea is that by observing your choices we can construct a utility function (defined up to monotonic transformations) which allows us to make out-of-sample inferences about what you would choose in counterfactual states of the world, using weak restrictions on the formal structure of utility functions (which can in turn be tested empirically). Jason
Jason1083
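Jason's point that observed choices pin utility down only up to monotonic transformations can be sketched in a few lines of Python. Everything here (the square-root utility, the prices, the wealth) is an invented toy example, not anything from the thread; the point is just that the chosen bundle survives any strictly increasing relabeling of utility.

```python
import math

def choose(utility, prices, wealth):
    """Return the affordable integer bundle (x1, x2) that maximizes utility."""
    best, best_u = None, -math.inf
    max_x1 = int(wealth // prices[0])
    for x1 in range(max_x1 + 1):
        max_x2 = int((wealth - prices[0] * x1) // prices[1])
        for x2 in range(max_x2 + 1):
            u = utility(x1, x2)
            if u > best_u:
                best, best_u = (x1, x2), u
    return best

# A toy utility function (pure assumption for this sketch).
u = lambda x1, x2: math.sqrt((x1 + 1) * (x2 + 1))
# Any strictly increasing transformation of u represents the same preferences.
v = lambda x1, x2: 100 * math.log(u(x1, x2)) + 7

prices, wealth = (2.0, 3.0), 12.0
print(choose(u, prices, wealth) == choose(v, prices, wealth))  # same bundle
```

Because v is a strictly increasing function of u, the two rankings agree on every pairwise comparison, so the search over the budget set selects the same bundle; the particular "util" numbers attached to each bundle carry no information beyond the ordering.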
May 21, 2008, 1:09 PM PDT
bornagain77, for the same reason as I set forth in post [12] I don't know why you think you disagree with me. If information content can be expressed in bits, then it can be quantified exactly. But tell me, using my first example, how many bits of information are in the space shuttle and how many are in a bicycle?
BarryA
May 21, 2008, 1:07 PM PDT
BarryA - what both the site and your example show is that utility is only defined up to monotonic transformations. This is quite different from not being a quantitative measure! If CSI were capable of giving us an ordinal comparison between any two objects, then it would be immensely useful as a scientific concept - the problem is that CSI can't be made quantitative at all, not that it is ordinal rather than cardinal. This is the difference between philosophy and science. If I observe that you prefer A to B and that is the only choice I'm modeling, it doesn't matter if I say you get 150 utils from A and 5 utils from B or 15 utils from A and 5 from B, but it sure does matter if I say you get 3 utils from A and 5 from B! That would contradict our observation. This might seem like a pedantic point, but the entire edifice of economic theory is built on it. Your example just happens to be extremely trivial. If we consider instead the problem of you choosing from a vector of n goods (x_1,...,x_n) subject to the constraint that your total expenditures are less than your wealth (p_1 x_1 + ... + p_n x_n
Jason1083
May 21, 2008, 1:07 PM PDT
Hello Barry, The only reason I partially disagreed with you is that you *seemed* to be overly negative about the prospect of measuring for CSI. However, as far as I can tell, the only theoretical limitations in its use would be in regards to measuring art or things that merely subjectively "look" like something complex and specified, such as "faces in clouds," for reasons I gave above. As I explained, an objective information theoretic measure of CSI for the objective blueprint or instructional information necessary to create a system of functionally integrated units is possible. In fact, the CSI measure of arriving at a hospitable planet may even theoretically be able to be worked out, taking into consideration the work of Dr. Gonzalez (sp?). As far as I can tell, the only significant limitation may be when it comes to "artistic" or subjective specified patterns.
CJYman
May 21, 2008, 12:45 PM PDT
A good measure of the amount of CSI in something is the size of the instruction set needed to produce it. We can easily compare the difference in the amount of CSI in a bicycle and a space shuttle by weighing the manuals necessary to build each. The former likely can be written up in a monograph, while the latter likely requires a small library!
SCheesman
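SCheesman's "weigh the manuals" idea is essentially a description-length measure, and a crude computable proxy for it is compressed size (compression strips out repetition, so padding a manual with boilerplate doesn't inflate the score). The two "manuals" below are invented placeholder strings, not real build instructions; this is only a sketch of the comparison he describes.

```python
import zlib

def description_length(instructions: str) -> int:
    """Crude proxy for the size of an instruction set: its compressed byte count."""
    return len(zlib.compress(instructions.encode("utf-8"), 9))

# Toy "manuals" -- invented placeholders. The bicycle manual is short and
# repetitive; the shuttle manual has many distinct steps.
bicycle_manual = "attach wheel to frame; attach pedals; attach chain; " * 4
shuttle_manual = "".join(
    "install tile %d; route harness %d; torque bolt %d to spec; " % (i, i, i)
    for i in range(300)
)

print(description_length(shuttle_manual) > description_length(bicycle_manual))
```

The comparison is ordinal, not cardinal, which fits the thread's theme: compressed length can rank the shuttle above the bicycle without pretending to be an exact count of "infos."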
May 21, 2008, 12:44 PM PDT
soplo-- I sometimes wonder the same thing about standard evolutionary theory. It purports to be about adaptation and the propagation through populations of genes conferring enhanced fitness. Population genetics tells us precisely how incremental quantities of a parameter known as "fitness" spread themselves via differential reproduction. Zoology and other observational sciences tell us about existing specific adaptations that confer this "fitness". However, given that "fitness" is a central concept of the theory that is purported to be the crown jewel of all biology, it seems odd that there is no method for calculating or measuring its value. "Fitness" as a quantity is crucial to the entire theory, but it remains a strictly metaphysical concept, eluding empirical measurement. Oh, we can always find out who were the "fit" after the fact. They're the ones who survived. Just as theory predicts! Putting a value on CSI seems much more tractable to me.
Matteo
May 21, 2008, 12:38 PM PDT
Todd Berkebile: "Likewise CSI might be a useful concept for, say, philosophy, but that doesn't mean its a useful concept for science."

CSI is extremely useful within the science of information theory. It is a quantifiable measure of the information content (measured probabilistically in bits) of a specified (or pre-specified) pattern measured against all available patterns and probabilistic resources. It also deals with the difficulty (again measured in bits) of finding a small target within a vast search space. CSI also sets the stage for a conservation of information -- a 4th law of thermodynamics -- which deals with the objective flow of information. Therefore, it is useful in science.
CJYman
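The "bits" CJYman describes, the difficulty of finding a small target in a vast search space, come from the standard surprisal formula: -log2 of the probability of hitting the target by blind search. The numbers below (a 100-position sequence over a 20-symbol alphabet, with a million sequences counted as functional) are illustrative assumptions, not figures from the thread.

```python
import math

def target_bits(search_space_size: int, target_size: int) -> float:
    """Bits of information associated with hitting the target by blind search:
    -log2(P) where P = target_size / search_space_size."""
    return math.log2(search_space_size / target_size)

# Assumed toy numbers: 20 symbols, 100 positions, 10^6 "functional" sequences.
space = 20 ** 100
functional = 10 ** 6
bits = target_bits(space, functional)
print(f"{bits:.1f} bits")  # about 412 bits
```

Note that the hard part in practice is not the formula but estimating `functional`, the size of the target set, which is exactly where the thread's disagreement about quantifying CSI lies.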
May 21, 2008, 12:35 PM PDT
I have to slightly disagree with you Barry. While the true information content of many designed objects is way out of our current reach and precise definition, the true information content of a properly defined "simple" sequence of data may well be within our reach.

"It from Bit" excerpt: But Zeilinger and Brukner noticed that it (Shannon Information) doesn't take into account the order in which different choices or measurements are made. This is fine for a classical hand of cards. But in quantum mechanics, information is created in each measurement - and the amount depends on what is measured when - so the order in which different choices or measurements are made does matter, and Shannon's formula doesn't hold. Zeilinger and Brukner have devised an alternative measure that they call total information, which includes the effects of measurement. For an entangled pair, the total information content in the system always comes to two bits.

Me again: Thus it seems that if true information content can indeed be satisfactorily defined for any given "simple" sequence of data down to the very foundation of reality itself (indeed it looks as if "true information" is the ultimate foundation of our reality), then when making an inference to design, the CSI explanatory filter may be able to be accurately quantified and brought into play in greater detail. Indeed it seems reasonable to refine the current CSI probability bound of 10^150 to a more precise and lower number (an actual quantification of CSI) - for example, establishing a more realistic and concrete CSI than 10^150 for small protein molecules using such a precise method. I believe this is a reasonable expectation on our part since, instead of starting with the flawed Shannon starting point to deduce the total information content of a simple sequence, the search for CSI will actually start with the true "reality" information content of a known sequence.

Refining the basic element of information, the bit, to its true definition in reality is of prime necessity when trying to determine the actual threshold of CSI involved in a "simple" designed sequence. An essential element in this process will be separating the simple sequence of threshold CSI from its functional neighbors; i.e., the specific CSI information content of a "required simple protein" will most likely be very different from the information content of an entire functional protein machine, such as the flagellum.
bornagain77
May 21, 2008, 12:25 PM PDT
The argument about the requirement of a precise measure of CSI as a refutation of arriving at a design inference has always baffled me. This is really a pathological case of not seeing the forest for the trees.
I am new around here, so forgive me if I am treading on covered ground. But, without a calculated value of CSI, how does the EF provide anything different than a subjective assessment?
soplo caseosa
May 21, 2008, 12:16 PM PDT
Jason1083, for example, in my title I say this site gives me 150 utils of utility. What does 150 utils mean? Nothing except in relation to my further statement that PT gives me only 3.
BarryA
May 21, 2008, 12:15 PM PDT
Jason1083, I disagree that "util" is an exact measure. See the link in the post for the reasons why.
BarryA
May 21, 2008, 12:14 PM PDT
CJYman, I don't know why you believe you are in partial disagreement with me. I said exactly the same thing you said: "In some cases we can estimate relative CSI if we are able to calculate the bits of information present in the two instances. But not usually."
BarryA
May 21, 2008, 12:11 PM PDT
I've often had doubts about CSI as you know - my main problem is that if we can write a program to detect CSI, then CSI is probably algorithmically reproducible (otherwise, how could a program know when something meets the criteria), which means that a natural law could produce it. However, I can see how this can be handwaved away so I'm not going to argue the point.

I shall have to argue with Gil's Hello World program, however, since this is an area that interests me. :) It is true that mutating a C program is very unlikely to make one that compiles, but in my opinion this is a bad example. DNA is not like C - it's more like a compiled program, that is, machine code. Mutating one letter of AGCTAGCACAACAGT won't necessarily wreck it. In some cases it will make no change, because of the redundancy of the code. Sometimes it puts a stop in early, and the result is an altered protein. This also sometimes has the effect of turning all the stuff after the stop into junk, if the altered protein turns out to be acceptable.

Similarly, when you get down to the low level of code, it looks like 1011010101101011... . The operations are represented by strings of binary, so 10110101 could be a JUMP instruction, for example, and 01101011 could be an address to jump to. A mutation in the address would just send the program to a different place, which won't necessarily wreck it - the idea of genetic algorithms is that the code is flexible enough to allow odd tweaks to be made to it, which sometimes improve it. DNA has this same property of mutational robustness... whether by design, I leave you to decide.

I'd also like to note that junk DNA forms naturally this way - if you have a jump command, or a way of disregarding nonsensical operations, or operations which cancel each other out, then they will accumulate simply by the result of random mutation.
Venus Mousetrap
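Venus Mousetrap's point about redundancy making some mutations silent can be checked directly against a small slice of the standard genetic code. The dictionary below contains only a handful of real codon assignments (the alanine family GC* plus a few neighbors), just enough to count which single-base mutants of one codon leave the protein unchanged.

```python
# A small slice of the real standard genetic code: GC* all encode alanine,
# so third-position mutations there are "silent."
CODE = {"GCT": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
        "GAT": "Asp", "GTT": "Val", "ACT": "Thr", "CCT": "Pro", "TCT": "Ser"}

def point_mutants(codon):
    """Yield all nine single-base mutants of a three-letter codon."""
    for pos in range(3):
        for base in "ACGT":
            if base != codon[pos]:
                yield codon[:pos] + base + codon[pos + 1:]

codon = "GCT"
silent = sum(1 for m in point_mutants(codon) if CODE.get(m) == CODE[codon])
print(silent)  # 3: all three third-position mutants still encode alanine
```

Of the nine possible point mutants of GCT, the three that change only the third base (GCC, GCA, GCG) still encode alanine, which is exactly the mutational robustness the comment describes.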
May 21, 2008, 12:07 PM PDT
"Utility" is a useful concept in economics, but economics is not science. Likewise CSI might be a useful concept for, say, philosophy, but that doesn't mean it's a useful concept for science.
Todd Berkebile
May 21, 2008, 12:03 PM PDT
I both agree and disagree. As far as I understand, when it comes to measuring CSI, all one needs to know is the probabilistic resources available (number and length of trials), the probability (measured in bits) of the independent formulation of the specified pattern, and the number of specified patterns within all possible patterns. Furthermore, one needs to observe that the pattern doesn't flow as a necessity from the properties (attractive or repulsive) of the material, in which case the information content would be extremely low (if there is any at all). Necessity/law = high probability and low information.

The reason why it is hard, if not impossible, to measure CSI in some instances is that some patterns are hard to measure in bits. Take that "chair" example, where the tree is grown in a chair shape. When it comes to artistic shapes that merely "look" like something, I don't think that you can apply an objective measure of CSI, since the search space and the number of potentially specified targets is somewhat subjective and ambiguous. Let's take clouds for instance. What is the number of shapes that *look* specified that are possible, and what is the total shape space?

Now, don't get me wrong, as I do think that a somewhat subjective, "Design-Matrix-esque" filter can be used to gauge the potential necessity of intelligence as a cause when analyzing some patterns. And yes, these shapes that have a high design inference associated with them are also highly specified or pre-specified and are complex as in having a low probability. In fact, specificity and complexity are criteria which an archaeologist would use to determine if a rock was not just a rock. Does its shape look highly improbable (complex), and does it fit within a functional target space (specificity: does it match an independently given functional pattern)? These observations in this case are somewhat subjective but still useful. And in these cases, the finer the resolution, the stronger the inference.

However, that does not mean that all inference to intelligent design has to be subjective. As I briefly explained above, CSI can be measured objectively when the pattern itself permits. Some examples include all codes/cyphers and languages, number sequences and shapes which are regular and can be briefly described mathematically, the probability of arriving at any small target amid a huge search space at consistently better than chance performance, and even such complex things as circuits and potentially even functional systems of integrated units. Those functionally specific objects may be able to be measured objectively in bits (as an information theoretic measure of probability) since they are objectively created from high information external diagrams (known as blueprints or instructions) and must be created component by component. As long as there is an objective information measure associated with these external diagrams, which as far as I understand can be seen as the independently formulated patterns, and if we can estimate the search space and probability of arriving at the said configuration, then there is no hindrance to measuring for CSI.

"'Organized' systems are to be carefully distinguished from 'ordered' systems. Neither kind of system is 'random;' but whereas ordered systems are generated according to simple algorithms and therefore lack complexity, organized systems must be assembled element by element according to an external 'wiring diagram' with a high information content." Jeffrey S. Wicken, "The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion," Journal of Theoretical Biology, 77 (April 1979): 353, 349-65.

"In the face of the universal tendency for order to be lost, the complex organization of the living organism can be maintained only if work – involving the expenditure of energy – is performed to conserve the order. The organism is constantly adjusting, repairing, replacing, and this requires energy. But the preservation of the complex, improbable organization of the living creature needs more than energy for the work. It calls for information or instructions on how the energy should be expended to maintain the improbable organization. The idea of information necessary for the maintenance and, as we shall see, creation of living systems is of great utility in approaching the biological problems of reproduction." George Gaylord Simpson and William S. Beck, Life: An Introduction to Biology, 2nd ed. (London: Routledge and Kegan, 1965), 145.

Even though not everything can be measured objectively as having CSI content, the things that can be measured as such are most probably the effects of previous intelligence.
CJYman
May 21, 2008, 11:39 AM PDT
Gil, I don't think we should completely discount the effort to quantify/specify what goes into a design inference in a way that might help to make it more objective and algorithmic. I do, however, largely agree with your observation that, in practice, with most of what we see around us: "With such skyrocketing organized complexity a design inference becomes essentially a trivial exercise. A great deal of fancy footwork, rationalization, and excuse-making are required to avoid the obvious, which is done for obvious reasons."
Eric Anderson
May 21, 2008, 11:37 AM PDT
Barry, interesting post. I don't know if you were also poking around at Telic Thoughts, but I was just there looking at Mike Gene's latest thread "Artificial or Natural" and then jumped over here and, lo and behold, I see that you are indirectly responding to some of the posts there (they are talking about a computer program being able to identify design through an algorithm, which, presumably, means an ability to quantify in some way the design characteristics). I think this is a very interesting issue. If you are correct, then perhaps some of Dembski's efforts to precisely identify design from a mathematical perspective will not pan out?
Eric Anderson
May 21, 2008, 11:34 AM PDT
Great observation Barry. The argument about the requirement of a precise measure of CSI as a refutation of arriving at a design inference has always baffled me. This is really a pathological case of not seeing the forest for the trees. I tried to demonstrate this with my Hello World program example here at UD. This little 66-character program represents as many possible combinations as there are subatomic particles in 10 trillion universes.

When one considers even a single protein, the numbers become so large so quickly that they cause a cerebral short circuit. Then consider that proteins must interact with other proteins to form machines that must interact with other machines, etc. One soon needs exponents so large that they must be expressed with exponents. It is for such purposes that the googolplex was invented.

If I can infer that an old-fashioned mechanical adding machine is designed, I can infer that a modern microprocessor with its millions of transistors is designed, without supplying a precise number that represents its CSI. With such skyrocketing organized complexity a design inference becomes essentially a trivial exercise. A great deal of fancy footwork, rationalization, and excuse-making are required to avoid the obvious, which is done for obvious reasons.
GilDodgen
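The scale Gil describes is easy to reproduce. Assuming the 66 characters are drawn from the roughly 95 printable ASCII characters (an assumption; the comment doesn't fix an alphabet), the blind-search space is 95^66, and its order of magnitude takes one line to compute:

```python
import math

# Search space for a 66-character program over an assumed alphabet of
# 95 printable ASCII characters.
space = 95 ** 66
magnitude = math.log10(space)
print(f"about 10^{magnitude:.1f} candidate strings")  # about 10^130.5
```

Even before considering proteins or protein machines, the exponent already dwarfs familiar physical counts, which is the point of the comment.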
May 21, 2008, 11:13 AM PDT
But why would we need to quantify something in order to identify it? I can pick my wife out of a crowd without quantifying her. And as for this constant fretting over whether we're doing science or nonscience---remember that physicists often class biology as nonscience---it's all observation and no theory. Whatever we do---gardening, checkers, theoretical physics---employs varying degrees of observation, reason and authority. The myth of "the scientific method" is a pernicious myth. Anyway, must we be able to exactly quantify CSI for it to be "scientific"? Of course not! It's like prototype semantics. All we need are a sufficient number of identifying features.
Rude
May 21, 2008, 11:11 AM PDT
(I forgot to mention the main point: the fact that CSI cannot be quantified very much does count against it as a scientific theory. CSI is at best as scientific as Benthamite utilitarianism and not in the same league as modern utility theory as used in economics.)
Jason1083
May 21, 2008, 10:46 AM PDT
That’s an interesting observation—how can you quantify CSI? Maybe it’s like quantifying logic and beauty and virtue. These come in greater or lesser quantities but may not be precisely quantifiable—perhaps because of the uniqueness of each instance.

Reminds me of measuring information. We can measure the number of bits in a text, but it’s more difficult for the linguist to measure the quantity of what he calls "new information." Discourse proceeds with something old and something new in every proposition or foregrounded clause. The old information provides coherence but the new information is why we speak. Imagine a machine which could scan texts for new information—information not already in its data banks. All it could do is look for identical strings of symbols, but new information is identifiable only by understanding both the old and the new—which no machine could ever do.

I know a girl from Africa who remembers the people back home trying to describe snow. She remembers what they said but has forgotten the language in which they said it. This happens all the time. If you’re bilingual you will remember in one language what you were told in another. You can always paraphrase, simplify or expand. Information is not precisely measurable—that is, not unless there is some universal language (of which mathematics is a subset) to which everything is reducible to its least common denominator. It would be interesting if we could logically demonstrate that there are things which cannot in principle be precisely measured---not just that they are too complex for us to measure with our limited technology.
Rude
May 21, 2008, 10:45 AM PDT
When you say that utility cannot be quantified, you are conflating two very different notions of utility:

1) Utility as a measure of overall happiness, as used by Jeremy Bentham or (arguably) John Stuart Mill
2) Utility as a tool to represent revealed preferences, as used by modern economists

The second kind of utility absolutely can be quantified (although it is only defined up to monotonic transformations) and that is the whole basis of modern economics. You're absolutely right that comparisons based on 1) are unscientific and properly belong to the realm of philosophy. But this is not the case for comparisons based on 2) - utility of this sort absolutely can be quantified and economists quantify it all the time. Without making additional assumptions we can't make judgments like, "Taking $100 from Jon and giving it to Peter would improve social welfare", but we can make judgments like, "Lowering the gas tax is inefficient in the sense that alternative methods of redistribution would give more consumption to those who would benefit from such a move while also leaving more for everyone else."
Jason1083
May 21, 2008, 10:44 AM PDT
"I could posit a measure of CSI – call it an 'info' – and say the space shuttle contains 100 infos of CSI and the bicycle contains only 10 infos."

Actually, there's already a unit of measurement for it: the bit. But, other than that, you're right. It's impossible to figure out a precise measure of it for just about any complex object.
Deuce
May 21, 2008, 10:18 AM PDT
The problem is that, while you can easily quantify lower levels of information in bits (which is useless for our purposes), higher levels of information (which is what we are interested in) are usually described in terms of rules, not quantities. Semantics are defined by interface rules, state transitions, etc. I see no way that such could usefully be reduced to a number.
johnnyb
May 21, 2008, 10:17 AM PDT