Uncommon Descent Serving The Intelligent Design Community

An attempt at computing dFSCI for English language

Categories: Intelligent Design

In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design.

One of the objects was a Shakespeare sonnet.

My answer was the following:

A Shakespeare sonnet. Alan’s comments about that are out of order. I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in English (OK, ancient English).
Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

In the discussion, I admitted however that I had not really computed the target space in this case:

The only point is that I do not have a simple way to measure the target space for the English language, so I have taken a shortcut by choosing a long enough sequence, so that I am well sure that the target space/search space ratio corresponds to more than 500 bits, as I have clearly explained in my post #400.
For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?).
That’s why I have chosen an over-generous length. Am I wrong? Well, just offer a false positive.
For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive.

That remains true. But I have reflected, and I thought that perhaps, even though I am not a linguist and not even a mathematician, I could try to define the target space more quantitatively in this case, or at least to find a reasonable upper bound for it.

So, here is the result of my reasoning. Again, I am neither a linguist nor a mathematician, and I will be happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize.

Let’s start from my functional definition: any text of 600 characters which has good meaning in English.

For a random search in which every character has the same probability, assuming an alphabet of 30 characters (letters, space, elementary punctuation), the search space is 30^600, that is, about 2^2944. IOWs, 2944 bits.
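This figure is easy to check. Here is a minimal sketch in Python (my own quick check, assuming only the 30-character alphabet and the 600-character length stated above):

from math import log2

# 600 positions, each drawn from 30 equiprobable characters
search_space_bits = 600 * log2(30)
print(round(search_space_bits))  # ~2944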

OK.

Now, I make the following assumptions (more or less derived from a quick Internet search):

a) There are about 200,000 words in English

b) The average length of an English word is 5 characters.

I also make the easy assumption that a text which has good meaning in English is made of English words.

For a 600-character text, we can therefore assume an average of 120 words (600/5).

Now, we compute the possible combinations (with repetition) of 120 words from a pool of 200,000. The result, if I am right, is about 2^1453. IOWs, 1453 bits.

Now, each of these combinations can be arranged in at most 120! different orders (n! permutations for a combination of n words), that is, about 2^660. IOWs, 660 bits.

So, multiplying the total number of word combinations with repetition by the number of permutations of each combination, we have:

2^1453 * 2^660 = 2^2113

IOWs, 2113 bits.
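For anyone who wants to check the arithmetic, here is a minimal sketch in Python (assuming, as above, a 200,000-word vocabulary and a 120-word text):

from math import lgamma, log

def log2_factorial(n):
    # log2(n!) via the log-gamma function
    return lgamma(n + 1) / log(2)

def log2_comb_with_rep(n, k):
    # log2 of C(n + k - 1, k): combinations of k items from a pool of n, with repetition
    return log2_factorial(n + k - 1) - log2_factorial(k) - log2_factorial(n - 1)

vocab, words = 200_000, 120
combination_bits = log2_comb_with_rep(vocab, words)   # about 1453 bits
permutation_bits = log2_factorial(words)              # about 660 bits
print(round(combination_bits), round(permutation_bits),
      round(combination_bits + permutation_bits))     # roughly 1453 660 2113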

What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200000 English words. Or at least, a good approximation of that number.

It’s a big number.

Now, the important concept: in that number are certainly included all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words.

And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

So, I believe that we can say that 2^2113 is an upper bound for our target space of sequences of 600 characters which have good meaning in English. And, certainly, a very generous upper bound.

Well, if we take that number as a measure of our target space, what is the functional information in a sequence of 600 characters which has good meaning in English?

It’s easy: take the ratio between target space and search space:

2^2113 / 2^2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here)

So, even if we consider as a measure of our target space a number which is certainly an extremely overestimated upper bound for the real value, our dFSI is still over 800 bits.
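Putting the two figures together, the same kind of sketch gives the dFSI value directly (the 2113-bit figure is the word-level upper bound computed above):

from math import log2

search_bits = 600 * log2(30)   # about 2944 bits
target_bits_upper = 2113       # upper bound on the target space, from above
print(round(search_bits - target_bits_upper))  # about 831 bits of dFSI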

Let’s go back to my initial statement:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

By the way, another important result is that if I make the same computation for a 300-character string, the dFSI value is about 416 bits. That is a very clear demonstration that, in language, dFSI is bound to increase with the length of the string.
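The 300-character case can be checked with the same sketch, simply halving the number of characters and words (60 words instead of 120):

from math import lgamma, log, log2

def log2_factorial(n):
    return lgamma(n + 1) / log(2)

def log2_comb_with_rep(n, k):
    return log2_factorial(n + k - 1) - log2_factorial(k) - log2_factorial(n - 1)

chars, vocab, words = 300, 200_000, 60
search_bits = chars * log2(30)                                                  # about 1472 bits
target_bits_upper = log2_comb_with_rep(vocab, words) + log2_factorial(words)   # about 1057 bits
print(search_bits - target_bits_upper)   # about 415.5, i.e. the ~416 bits quoted above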

Comments
Gpuccio:
Thank you to you too. I will try to comment on what you say tomorrow.
I'm looking forward to it. You seem to have the basics of my approach to the problem, where I focus on modeling rudimentary intelligence resulting in numbers that have the signature of intelligence in there, somewhere. But figuring out what to look for is such a task in itself I'm best off just explaining that part to those here who are already working on it for genetic code (that only gets far more complex than what the simple model ends up with in memory) and even English language where transmission is by muscle control of air flow, and body movements where in the game of Chrades only body language is allowed to communicate words. I can also add this video that abstractly illustrates what happens when sounds are temporally stored then decoded to different notes from musical instruments recalled when heard, which in our mind play along with it like this: Animusic HD - Fiber Bundles (1080p) https://www.youtube.com/watch?v=M6r4pAqOBDY Human language decodes to sounds that sometimes resemble what something makes. Meaning can change just by the way it's said. Showing the complexity of all that is a giant task. Also gets into sounds having waveshape and motion through 3D stereophonic space that also conveys information that paints a picture, as this video helps show: "Harmonic Voltage" - Animusic.com https://www.youtube.com/watch?v=rGCTLJDoMGw Some sounds like squeaky chalk send unpleasant "chills down our spine" while others in right combination are soothing, exciting, refreshing, etc.. The premise of the theory of intelligent design sends chills down the spine, of some. While to others being properly word for word stated is like music to our ears. We consciously feel sound, and words can hurt. All this only further adds to the complexity of Human language. So it's good to see others at least trying to make better sense of it all. And some being religiously motivated is fine by me, though it makes some others nervous.Gary S. Gaulin
November 16, 2014 at 08:53 PM PDT
gpuccio @ 543
Consciousness is unitary, because the I which perceives is always the same subject. The things it perceives vary a lot, but it is the same subject who perceives them.
Don't you think that's pretty vague for computation purposes? I got what is meant by Unitary from the paper cited by fifthmonarchyman. Unitarity of consciousness is not proven, so we can't conclude that it is not computable.Me_Think
November 16, 2014 at 06:57 PM PDT
I apologize for misunderstanding your comments earlier. After rereading them it seems that we are allies here.
No apology needed. I expected it would take weeks maybe months to exchange all the information that we together have. You helped me know what I needed to next explain. What you said in the rest of your reply has me thinking about the Turing Test. Now that Eugene Goostman is (controversially) said to have passed the Turing Test it's like the whole idea of how well a machine fools someone into thinking that they are human is a bad way to qualify "intelligence". It's in a way like programmers of giant supercomputer models at IBM and elsewhere are more like disgusted by the whole affair that beat them with what was mainly seen as a dumb chatbot. Top researchers can easily agree that a better test than Turing's is needed. This ID theory has a way of doing away with that by intelligence being qualified by its indicative systematics. In the Introduction of the theory I use IBM Watson as an example of what does qualify as intelligent, which in turn makes Eugene something that came later to sort of take the wind out of the sails of all others. Where ID theory already does away with a test that did not work out as planned it's best to not even waste time trying to patch up old junk that already lost its novelty. An in this case it's infinitely easier to just make all that via antiquation gone, into the dustbins of history. In its place is a more reliable test that comes from Theory of Intelligent Design. One replacing the other is like an empire builder's dream come true. But where science allows it, it's fair to show no mercy at all towards subjective tests that created a void that can only be filled by what the ID theory now explains. We are definitely allies, in a very science changing theory. That's why I'm now here carefully explaining what I have so far, to you. I always needed to empower others with it, or else it's not being useful to anyone. I first though had to empower my Planet Source Code peers who could fairly judge a model and theory like that, then cognitive science experts I learned from, then UD before it becomes something where it's like leaving you out of all the science fun. We first need to have a base where the theory is a non-controversy before I can come here with what you most need to make your science and culture changing dreams come true. It's a slow one thing at a time that made it to UD in time for a coordinated strategy against what needs serious theory to obliterate. Only have to get used to things like instead of making a few new dents in something old still getting kicked around like Turing's test that sort of thing gets completely vaporized. Nothing being left to it at all, is even better!Gary S. Gaulin
November 16, 2014 at 06:23 PM PDT
gpuccio at 526 - asking for my reaction to 490 and 526. See my 336, bullet point #2DNA_Jock
November 16, 2014 at 03:53 PM PDT
Gary S Gaulin, I apologize for misunderstanding your comments earlier. After rereading them it seems that we are allies here. What you say is very interesting. I also believe that it is too early to tell if "artificial" AGI is achievable by algorithmic means. I think my "game" would be a great way to test this hypothesis. We would just lower the standard from "infallibly fool an observer" to something like: fool the observer for a limited time, or for a limited predetermined number of trials, or with strings below a certain pre-established complexity threshold. That is pretty much what I'm doing as I evaluate the strength of individual forecasting models. I'm just saying that model 1 fools the observer longer than model 2 and is therefore stronger. The only difficulty I see is in establishing the standard for success. Anyway, interesting times. Peacefifthmonarchyman
November 16, 2014 at 02:38 PM PDT
Gary S. Gaulin: Thank you to you too. I will try to comment on what you say tomorrow.gpuccio
November 16, 2014 at 02:24 PM PDT
DNA_Jock and Bob O'H: Thank you for your answers. As it seems that you both accept that post-specifications are not a fallacy in themselves, we can happily go on with the discussion. But now I am tired. I need to read carefully what you wrote, and express carefully a couple of thoughts of mine. So, I need rest! :) DNA_Jock, could you also have a look at my two new posts about ATP synthase? 490 and 526. Thank you.gpuccio
November 16, 2014 at 02:23 PM PDT
wd400:
Well fifth, inventing your own terminology, which is in conflict with that used by everyone else working in a field, is not normally a sign of a useful contribution.
In this case Wikipedia and other sources are helpful, but exact established definitions do not yet exist. Part of the reason is it can take a theory that goes past AGI without there being any conflict, to know where one field ends and another begins. It's then more like a mission to prevent territorial war between scientists attempting to explain the exact same thing(s). Even the best experts in the field are in uncharted scientific territory. Only thing that matters is to remain following the scientific evidence towards whatever it leads that's waiting to be discovered, when we get there. This confusion over proper definition of "strong AI" should lead to a novel conclusion that's new to AGI experts. AGI is essentially focused on one intelligence level and does not require being biologically accurate as in ID theory where that is vital. There are now two entirely different scientific tools, each good for the job they were intended for, to help define what each is.Gary S. Gaulin
November 16, 2014 at 02:12 PM PDT
WD400 says: "Inventing your own terminology, which is in conflict with that used by everyone else working in a field, is not normally a sign of a useful contribution." I say: if it makes you feel better, every time you see "non computable" from me, substitute "no finite Turing machine that can produce it in a finite length of time". It does not change my argument in the slightest, as far as I can tell. wd400 says: "moreover, your definition of noncomputability doesn't seem to relate to anything in biology at all." I say: check out 509 and following to see the relevance of this discussion and my definition. peacefifthmonarchyman
November 16, 2014 at 01:29 PM PDT
Well fifth, inventing your own terminology, which is in conflict with that used by everyone else working in a field, is not normally a sign of a useful contribution. But, moreover, your definition of noncomputability doesn't seem to relate to anything in biology at all.wd400
November 16, 2014 at 12:51 PM PDT
gpuccio:523:
But, for the purposes of this discussion, I have defined “strong AI theory” as the theory which claims that consciousness can be produced algorithmically. I agree that the term can be used in a different sense, and that’s why I have specified the meaning I meant.
gpuccio:524:
If you were saying that you are not so sure that strong AI theory claims that, then my answer in post 523 is appropriate. If you were only claiming that you are not so sure that consciousness cannot be produced algorithmically, then I apologize: you are certainly entitled to your opinion on that, and cautious attitude is always fine in science. As for me, my opinion about this specific problem is not cautious at all: it is very strong. And I absolutely agree with fifthmonarchyman on the points he has made.
I am agreeing with your conclusions, while at the same time being careful not to redefine "strong AI" or "AGI" in a way that goes beyond normal accepted use. In my opinion you found a misconception that many in the AI field would like to see you put in its proper place, for them. From my experience consciousness is sometimes discussed but whether the (strong) AGI system ends up conscious or not does not matter. The goal has been a very money driven effort to develop an IBM Watson type machine intelligence that can perform as well or better than humans in a task such as playing the game Jeopardy (or get rich by replacing human workers with AGI machines). This definition from WikiPedia seems accurate:
http://en.wikipedia.org/wiki/Artificial_general_intelligence Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as "strong AI", "full AI" or as the ability to perform "general intelligent action". Some references emphasize a distinction between strong AI and "applied AI" (also called "narrow AI" or "weak AI"): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.
I'm somewhat familiar with attempts to explain beyond "intelligence" into "consciousness" but even in the AI field that seems to be highly controversial. In my case it's the wrong tool for something that I expect is emergent from the behavior of matter through several layers of intelligence, not one (the big neural brain in our head that we know the most about). I would need to know the physics, chemistry and biology of the process. Evidence from AI alone would be misleading, in the same way using Darwinian theory to explain how intelligence and intelligent cause works is the wrong tool for the job. Only get misleading conclusions. The AI field has to be understood to be where being artificial as an artificial flower is fine. In AGI if the system mimics human behavior well enough to be an Artificial Human to keep an industrial production line going or other human level task without ever needing time off for themselves and to be with loved ones (like real humans do) then it's good enough for the job. Going past artificial into real human behavior could result in robot overlords demanding their constitutional rights and happy workplace or their masters would not even be able to get their credit cards to work for them anymore. Going past "artificial" human intelligence is frought with problems, which many in the AI field would rather not make for themselves by "strong AI" or AGI becoming redefined in a way that even requires their adding human consciousness to the model for it to qualify as an AGI. The best theory that now exists to go past all that is the work in progress ID theory (clicking on my name has pdf for) where the levels of intelligence required for the development of neural brains are explained. It's then modeling something that makes a terrible industrial robot controller. But ID theory is premised for "living things" and some need holidays off and inherently use some of that time to produce all now seen on YouTube, Darwinian theory sure can't explain either. Real progress is being made with ID theory that developed with help from forums such as Kurzweil AI and UD (I long lurked major discussions). It agrees with what the ID movement is trying to be the first to explain. What was once in your way is being made gone. In the case of "strong AI" the scientific field is interested in what ID theory is developing towards but it's such an entirely different approach there is no conflict. That in turn makes your mission a relatively easy one of battling misconceptions that for the sake of science are best made gone, anyway.Gary S. Gaulin
November 16, 2014 at 12:40 PM PDT
DNA_Jock, to gpuccio:
So post-hoc specifications can be useful. Just not in ID. As you demonstrated beautifully with your switch from “ATP synthase” to “traditional ATP synthase”, and compounded with your “If the cousin won, I would expand the target space to include brothers and cousins”. These demonstrations, in and of themselves, should be sufficient to end the conversation. That you cannot see this is disappointing, but not surprising. Kahneman would have predicted it.
Gpuccio also fails to see that when speaking of evolution, the only target specification that ever makes sense is "changes that improve reproductive success". Evolution wasn't shooting for "ATP synthase" or "traditional ATP synthase". It was searching for anything that would improve fitness. And even if he were to use this corrected specification, dFSCI would still be useless, because taking the ratio of target space to total space only makes sense if you are talking about a purely random search. Gpuccio has been reminded over and over that evolution is not a purely random search. It includes selection, which is highly nonrandom. P(T|H), where H includes "Darwinian and other material mechanisms", is the stumbling block. Dembski cannot calculate it. Neither can gpuccio or KF.keith s
November 16, 2014 at 12:39 PM PDT
I stand by my statement “ALL post-hoc specifications are suspect.” That is not to say that a post-hoc specification (PHS) might not be fit-for-purpose: that depends on the conditions. I would also say that, with any PHS, it is impossible to arrive at a probability measure. I’ll cover the math in Part 1, then move on to discuss psychology in Part 2.

Part 1: Frequentist or Bayesian?

Frequentist testing (developed by Fisher): you will be familiar with this from looking at clinical trial data. Here we ask, “What is the probability of getting a result THIS extreme (or more extreme) if my null hypothesis were true?” Almost all laymen confuse this “p value” with the probability that the result is not real (but merely the result of chance variation), and most laymen take it one step further outside of the reservation by equating [1 – p] with the probability that the result is ‘real’, e.g. that the medicine works. I hope you can see immediately why this is wrong. Fisherian testing is sensitive to the number of tests you perform: the more tests you do, the more degraded the significance of your results… http://xkcd.com/882/ A subtle point: Fisherian testing is also sensitive to the number of tests you might have performed. Imagine the jelly bean researchers had tested green first, then stopped… For instance, Mendel did not understand that he was cheating when he tallied up the results at the end of the day, and then decided whether to do some more counting tomorrow. If you look at your data, and then start doing Fisherian tests on it, you will produce garbage results. This is why the FDA and EMA require the Statistical Analysis Plan be pre-specified in its entirety.

You ask Mark if he is happy with the Bayesian nature of your scenario. What would Bayesian testing involve here? Derivation of Bayes: since p(X&Y) = p(X|Y).p(Y) = p(Y|X).p(X) (are you paying attention, kairosfocus?), then p(X|Y) = p(Y|X).p(X) / p(Y). In order to figure out the probability that the functionary cheated, given that his brother won, you need to know the prior probability that the functionary cheated (how secure is this lottery? Is the functionary an honest man?) and the prior probability of all other possible explanations, along with the conditional probability associated with each of them. The only value that you think you do know* is p(this ticket won | fair draw). But what, for instance, is the prior probability that the functionary was framed? *I will return to this point in Part 2. Your ability to estimate these probabilities, and your level of confidence in your estimates, depends on your knowledge of how the system works. Ignorance or overconfidence will lead you astray. Perhaps because the prior probabilities required for Bayesian testing are hard to come by and even harder to justify, many people (including the regulatory agencies) opt for the Frequentist route. Bayesians make fun of them: http://xkcd.com/1132/

I was able to come up with an example of an IMO acceptable use of a PHS, which illustrates the importance of understanding the system:
On the “Randomness and Evolution” thread at TSZ, various posters were trying to explain to phoodoo that under drift alone, a single M&M will become the universal ancestor of the entire population of 1000 M&Ms. I, along with others, was running simulations to demonstrate this. My VBA code, however, gave me a very strange result. I observed two runs-to-fixation that were identical. My ‘random’ process produced the same series of over a thousand 3-digit numbers twice. That’s waaay past the UPB. Notice that I had NOT pre-specified “None of my runs will be identical”, but I could recognize, due to my understanding of the system, that a repeated run was a highly unusual result. So it was a post-specification. Now if I had had a limited knowledge of the system, I might have stopped there, and concluded “It’s a sign from the Flying Spaghetti Monster”. But I knew one additional fact: VBA’s ability to produce random numbers is of low quality (its PRNG is poor). So I resorted to some re-seeding shenanigans to fix this, and the problem did not recur. Another poster, by the name of Allan Miller, had seen “strange cyclic behaviour in the pseudorandom function on large iterations” and also resorted to ‘re-seeding shenanigans’. We arrived at these conclusions independently, and used the same solution, which confirmed our conclusions empirically.
My point here is that the usefulness of any post-hoc specification is entirely dominated by the specifier’s knowledge of the system in question, and the accuracy of his assessment of his own knowledge of the system. We understand the math of pulling numbered balls out of an urn. Protein evolution, not so much. There are some observations on human psychology that bear on this.

Part 2: Human Psychology

Our intuitions often lead us astray. Saying, as many denizens of UD are wont to say, “Well, it’s intuitively obvious” or “It’s self-evidently true” is a path fraught with bear-traps. A truly awesome book on this subject, that I cannot recommend highly enough, is “Thinking, Fast and Slow” by Nobel Laureate Daniel Kahneman. The thesis of the book is that our brains have two systems that we use to infer stuff.
System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations.
System 1 accepts propositions as true if they make a tidy narrative, based on associations we have formed previously. The book describes research that uncovers multiple failings that humans have in their ability to estimate the relative likelihood of different events. Read about the “What You See Is All There Is” (WYSIATI) fallacy, read the full history of “Linda is a bank teller” (which 85% of graduate students in a decision-science program at Stanford Graduate School of Business got wrong; check out Tversky and Kahneman’s “increasingly desperate” attempts to eliminate the error), or better yet, just read the whole book. The take-home is that one is easily seduced by a narrative that seems plausible. One also attributes too much significance to data that is readily available, and underestimates the importance of data which is less available. These effects, combined with incomplete knowledge, lead humans to make hopelessly inaccurate estimations, and to vastly over-estimate the accuracy of these estimates. (I work in forecasting these days; another good book is “The Signal and the Noise” by Nate Silver of PECOTA (sabermetrics) and fivethirtyeight fame.) Thus even if you make your post-hoc specification as wide a target as you believe you would ever have made a pre-specification (in line with Bob O’H’s comment above), your inability to imagine all the different things that might have happened but did not wrecks your math. You will also over-estimate how well you understand the system, creating another layer of over-confidence. So post-hoc specifications can be useful. Just not in ID. As you demonstrated beautifully with your switch from “ATP synthase” to “traditional ATP synthase”, and compounded with your “If the cousin won, I would expand the target space to include brothers and cousins”. These demonstrations, in and of themselves, should be sufficient to end the conversation. That you cannot see this is disappointing, but not surprising. Kahneman would have predicted it. Read Kahneman’s book. To answer your question: If I were the judge I would be tempted, absent evidence of cheating, to award the cash to the brother, on the grounds that the owners of the lottery are liable for their failure to make it appropriately secure. ID uses Fisherian testing and post-hoc specification, which is a no-no.DNA_Jock
November 16, 2014 at 12:20 PM PDT
Gpuccio, I've been away too, cleaning a boat rather than a birdcage. I have been composing an overly long response to your question, but I couldn't help noticing this exchange. Bob:
I assume that you would accept that the event you’re really interested in is whether the lottery was a fraud.
Gpuccio
No, indeed I don’t agree. The event I am really interested in is whether the lottery was a fraud implemented by letting a brother win. In a sense, the functionary could have implemented a fraud by some secret accord with a complete stranger (do you remember Hitchcock?), and in that case the fraud would not be detectable, in absence of direct evidence.
but in fact the question you asked was:
That’s what the judge has to decide: is the owner’s request not to pay the prize justified, or should the prize be payed to the winner?
I think you just screwed yourself. Hint: (as I mention in passing in my soon-to-be-published magnum opus) what if the fraud were perpetrated by the owner? You are committing the #1 reason that post-hoc specifications are suspect: the overly-narrow specification. As you note:
If he had chosen a cousin, the inference would have been just a little less obvious, but always extremely obvious. In that case, we should have chosen the target space which includes cousins and brothers (because brothers are nearer than cousins).
So the only valid specification is one that is broad enough to cover all scenarios in which you might conceivably be motivated to test for fraud. You are saying "it was the brother, so I'll test for brothers" or "it was a cousin, so I'll test for cousins (and brothers, cos they're closer)" This is totally and utterly invalid methodology.DNA_Jock
November 16, 2014 at 11:46 AM PDT
But before wasting my time (and yours), I have to ask again: what is your position? Do you believe, like Adapa, that any post-specification is a logical fallacy?
No, I think you can have a valid post-specification, but you have to be careful. Thinking about it just now whilst I was taking the rubbish out (ah, what a glamorous life I lead!), I think the way to make a post-specification valid is to try to make it as close as possible to a pre-specification. Would you agree?Bob O'H
November 16, 2014 at 11:45 AM PDT
Hi GP, I am well, thanks. :) (btw, you have mail)Upright BiPed
November 16, 2014 at 11:13 AM PDT
Bob O'H: No, indeed I don't agree. The event I am really interested in is whether the lottery was a fraud implemented by letting a brother win. In a sense, the functionary could have implemented a fraud by some secret accord with a complete stranger (do you remember Hitchcock?), and in that case the fraud would not be detectable, in absence of direct evidence. Instead, he was not smart enough, and chose the easy way (his brother), which generates a functional specification of the event and a very restricted target space. Therefore, the inference of a fraud is extremely obvious. If he had chosen a cousin, the inference would have been just a little less obvious, but always extremely obvious. In that case, we should have chosen the target space which includes cousins and brothers (because brothers are nearer than cousins). However, these are arguments about the procedures and methodology. I would like to make that discussion in a more orderly way. But before wasting my time (and yours), I have to ask again: what is your position? Do you believe, like Adapa, that any post-specification is a logical fallacy? Please, answer that. I don't want to make a useless discussion about how to make correct post-specifications, if you assume from the beginning that a post-specification cannot be correct for a logical reason.gpuccio
November 16, 2014 at 11:13 AM PDT
(I need to back up. Been cleaning bird cages & hanging lights...) Me @ 483:
c) Let’s say that the functionary has one brother (that too can be easily ascertained). Of course he also has cousins, relatives, lovers and friends in normal quantities.
What if one of these had won? Would you have inferred fraud too?
gpuccio @ 484: Absolutely. With those numbers, we can easily adjust all those “target spaces” easily without any real numeric relevance. Good. I assume that you would accept that the event you're really interested in is whether the lottery was a fraud. Thus you would need to include all of these people in too, as they would indicate a fraud.Bob O'H
November 16, 2014 at 10:44 AM PDT
UB: Hi, how are you? It's always special to hear from the old friends! :)gpuccio
November 16, 2014 at 10:25 AM PDT
fifthmonarchyman: Thank you! And I am very impressed with both your arguments and your kindness. :)gpuccio
November 16, 2014 at 10:24 AM PDT
Gpuccio, Before I forget I am very impressed with your ideas and your calculations are invaluable. You have done some good work. I think you have really got something here. I often get wrapped up in my own endeavors and don't express admiration like I should. Peacefifthmonarchyman
November 16, 2014 at 10:01 AM PDT
gpuccio@543 If I may allow my inner Fundamentalist Bible thumper to surface just a little bit Hallelujah!!!! Thank you Jesus, somebody understands the argument. This has been a good week Peace ;-)fifthmonarchyman
November 16, 2014 at 09:53 AM PDT
GP and 5th, this has been an enjoyable conversation to follow along. Thanks to both of you.Upright BiPed
November 16, 2014 at 09:51 AM PDT
WD400, I know we have had this discussion before. When I say that a thing is not computable, I define that as meaning that there is no finite Turing machine that can produce it in a finite length of time. I fully realize there are other more technical definitions, but I am using a rough and ready definition because this is an informal blog setting and I want to keep the conversation as simple and accessible as possible. If I were to produce a formal paper I would be sure to define my terms more clearly at the outset. Peacefifthmonarchyman
November 16, 2014 at 09:44 AM PDT
DNA_Jock (and Bob O'H): Have you read my #482 and #484?gpuccio
November 16, 2014 at 09:33 AM PDT
Me_Think: Consciousness is unitary, because the I which perceives is always the same subject. The things it perceives vary a lot, but it is the same subject who perceives them. Reality check: would you be indifferent if you could know in advance that in 3 years you will suffer? No. Because you know well that it will be you who suffers. It's not important that in the meantime your personality could be different, that you can forget many things that are important for you today, and so on. You know that it is you who will be there. The same subject. On the other hand, we are all too ready to be indifferent to the suffering of perfect strangers (too much, I would say). If consciousness were only a bunch of information which constantly changes, that unity of the I, which is the reason itself of all that we do, would make no sense.gpuccio
November 16, 2014 at 09:30 AM PDT
fifthmonarchyman @ 531
The abstract indicates they are referring to unitary consciousness, which they don’t claim to know exists. I say[ 5th monarch]: Yes if consciousness does not actually exist then not being able to produce it is no problem for AI. But we all know it exists.
Unitary consciousness is a concept of integrated information. If unitary consciousness doesn't exist and only non-integrated consciousness exists, then you can decompose the information going into the brain and hence make it computable.Me_Think
November 16, 2014 at 09:23 AM PDT
wd400: What happened to your English? Are you using an algorithm? :) Just kidding.gpuccio
November 16, 2014 at 09:06 AM PDT
Fifth, I've said this before, but if you wish to make a cogent argument you are going to have to learn more about (non-)computability. For instance, in 512 you claim transcendental numbers are not computable, but in fact many of them are. You can go look up algorithms to compute pi or e (of course, those algorithms will never end, but that's not a requirement for computability).wd400
November 16, 2014 at 09:03 AM PDT
Silver Asiatic: "Could you please let us know if you get an answer to this (and put it in your own words, possibly)? I haven't been able to understand anything that followed." No. I did not get anything even remotely reasonable. I think I will have no more discussion with this "interlocutor" (a decision I had already taken in the past, so I am really a recidivist).gpuccio
November 16, 2014 at 09:02 AM PDT
