Uncommon Descent Serving The Intelligent Design Community

An attempt at computing dFSCI for English language

Categories: Intelligent Design

In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design.

One of the objects was a Shakespeare sonnet.

My answer was the following:

A Shakespeare sonnet. Alan's comments about that are out of order. I don't infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in English (OK, Early Modern English).
Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski's UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

In the discussion, I admitted however that I had not really computed the target space in this case:

The only point is that I have no simple way to measure the target space for the English language, so I have taken a shortcut by choosing a long enough sequence, so that I am well sure that the functional complexity (-log2 of the target space / search space ratio) is above 500 bits, as I have clearly explained in my post #400.
For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?).
That's why I have chosen an over-generous length. Am I wrong? Well, just offer a false positive.
For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive.

That remains true. But I have reflected, and I thought that perhaps, even if I am not a linguist and not even a mathematician, I could try to define the target space more quantitatively in this case, or at least to find a reasonable higher threshold for it.

So, here is the result of my reasoning. Again, I am neither a linguist nor a mathematician, and I will be happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize.

Let’s start from my functional definition: any text of 600 characters which has good meaning in English.

The search space for a random search, where every character has the same probability and the alphabet has 30 characters (letters, space, elementary punctuation), is easily computed: 30^600, that is about 2^2944. IOWs, 2944 bits.

OK.
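This first figure is easy to verify with a couple of lines of Python. The alphabet size of 30 and the length of 600 are the post's assumptions, not established facts about English:

```python
import math

ALPHABET = 30   # letters, space, elementary punctuation (assumed)
LENGTH = 600    # characters in a sonnet-length text (assumed)

# Size of the search space in bits: log2(30^600) = 600 * log2(30)
search_space_bits = LENGTH * math.log2(ALPHABET)
print(round(search_space_bits))  # -> 2944
```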

Now, I make the following assumptions (more or less derived from a quick Internet search):

a) There are about 200,000 words in English

b) The average length of an English word is 5 characters.

I also make the easy assumption that a text which has good meaning in English is made of English words.

For a 600 character text, we can therefore assume an average number of words of 120 (600/5).

Now, we compute the possible combinations (with repetition) of 120 words from a pool of 200,000, that is C(200,000 + 120 - 1, 120). The result, if I am right, is about 2^1453. IOWs, 1453 bits.

Now, obviously each of these combinations of 120 words can be ordered in many different ways, up to 120! permutations, and 120! is about 2^660. IOWs, 660 bits.

So, multiplying the total number of word combinations with repetitions by the total number of permutations for each combination, we have:

2^1453 * 2^660 = 2^2113

IOWs, 2113 bits.

What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200000 English words. Or at least, a good approximation of that number.

It’s a big number.
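Under the same assumptions (200,000 words, 120 words per text), the three intermediate figures can be checked directly; note that the product of combinations and permutations is, to within a fraction of a bit, simply 200,000^120:

```python
import math

N_WORDS = 200_000   # assumed size of the English vocabulary
K = 120             # words in a 600-character text (600 / 5)

# Combinations with repetition of K words from a pool of N: C(N + K - 1, K)
comb_bits = math.log2(math.comb(N_WORDS + K - 1, K))  # ~1453 bits
perm_bits = math.log2(math.factorial(K))              # ~660 bits
total_bits = comb_bits + perm_bits                    # ~2113 bits

# Sanity check: the number of K-word sequences over N words is just N^K
assert abs(total_bits - K * math.log2(N_WORDS)) < 1
print(round(comb_bits), round(perm_bits), round(total_bits))  # -> 1453 660 2113
```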

Now, the important concept: in that number are certainly included all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words.

And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

So, I believe that we can say that 2^2113 is a higher threshold for our target space of sequences of 600 characters which have good meaning in English. And, certainly, a very generous higher threshold.

Well, if we take that number as a measure of our target space, what is the functional information in a sequence of 600 characters which has good meaning in English?

It's easy: take the ratio between target space and search space:

2^2113 / 2^2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here.)

So, even if we take as a measure of our target space a number which is certainly an extremely overestimated higher threshold for the real value, still our dFSI is over 800 bits.
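In bits, the division above becomes a subtraction, so the whole estimate collapses to two lines:

```python
import math

search_bits = 600 * math.log2(30)             # ~2944 bits (search space)
target_bound_bits = 120 * math.log2(200_000)  # ~2113 bits (upper bound on target space)

# Lower bound on functional information: -log2(target / search)
fi_bits = search_bits - target_bound_bits
print(round(fi_bits))  # -> 831
```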

Let’s go back to my initial statement:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski's UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

By the way, another important result is that if I make the same computation for a 300-character string, the dFSI value is about 416 bits. That is a very clear demonstration that, in language, dFSI is bound to increase with the length of the string.
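The length dependence claimed here is easy to reproduce. The helper below is a sketch under the post's assumptions (the name `dfsi_lower_bound` and the default parameters are mine, not part of the original); the bound grows roughly linearly with text length:

```python
import math

def dfsi_lower_bound(n_chars, alphabet=30, vocab=200_000, avg_word_len=5):
    """Lower bound on dFSI (bits) for an n_chars-long English text,
    under the post's assumptions about alphabet and vocabulary."""
    n_words = n_chars // avg_word_len
    return n_chars * math.log2(alphabet) - n_words * math.log2(vocab)

for n in (300, 600, 1200):
    print(n, round(dfsi_lower_bound(n)))
# 300 characters give roughly 415-416 bits, 600 give ~831
```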

Comments
gpuccio #532:

gpuccio: "I have calculated a lower threshold of complexity. Can you understand what that means? Evidently not. It is not important how many of those sequences have a good meaning. We are computing the set of those which are made of English words. Why? Because it is bigger than the subset of those which have good meaning, and therefore is a lower threshold to the complexity of the meaningful sequences. IOWs there are more sequences made with English words than sequences which have meaning in English. As a small child would easily understand. Not you."

Sure I understand all this. The number of words in the dictionary is always far bigger than what is used in a text with "good meaning", if that means syntax, i.e. English sentences. I understand that this is how language works by design, and no calculation about it changes anything. Nor does any calculation prove anything about it. It's there in the edifice of language. Therefore (and for all the reasons previously stated) your calculation was pointless by design.

gpuccio: "I say that starting with 'Let's go back to my initial statement:' and ending with 'Was I wrong? You decide.', after a post in which I have given the calculations which prove what I had only assumed in the initial statement. So, I am not 'assuming once more' 'after all these calculations'. I am only restating the initial assumption, so that readers may judge if my calculations have confirmed it."

Nice of you to let me judge. I did so.

gpuccio: "Either you are unable to read, or you are simply lying."

By your own admission, you were unable to compute. You admit it every time when you say "I assume" and "I cannot really compute". Therefore whatever you did, you did it just for show, with no meaningful outcome. Case closed.

E.Seigner
November 16, 2014 at 08:43 AM PDT
gpuccio #528: "So, please have the courage to state explicitly the thing that you don't agree with"

A clear, honest and simple request. Could you please let us know if you get an answer to this (and put it in your own words, possibly)? I haven't been able to understand anything that followed.

Silver Asiatic
November 16, 2014 at 08:26 AM PDT
Zachriel: "Not sure if you can show there is an operational difference."

Well, a big difference there is, certainly. A difference as big as the whole of human cognition and the sense of our existence itself. No mathematics, no philosophy, no atheism, no religion would exist without subjective experiences. Maybe that is not "operational", after all. However, Penrose's and Bartlett's arguments are about that point. I will just mention that humans generate tons of original dFSCI, and algorithms don't.

gpuccio
November 16, 2014 at 08:26 AM PDT
fifthmonarchyman: I envy you. You still have a reasonable interlocutor. Me, no more! :)

gpuccio
November 16, 2014 at 08:21 AM PDT
fifthmonarchyman: "Yes if consciousness does not actually exist then not being able to produce it is no problem for AI. But we all know it exists."

:)

gpuccio
November 16, 2014 at 08:20 AM PDT
gpuccio: "Do you think that algorithms can create 'internal representations' which are subjective experiences?"

Not sure if you can show there is an operational difference.

Zachriel
November 16, 2014 at 08:19 AM PDT
E.Seigner: "My position: When you have no idea how many of those sequences have a 'good meaning' in English, then can you say what it is you are calculating? Hardly. Therefore your 'anyone will agree' does not follow."

I have calculated a lower threshold of complexity. Can you understand what that means? Evidently not. It is not important how many of those sequences have a good meaning. We are computing the set of those which are made of English words. Why? Because it is bigger than the subset of those which have good meaning, and therefore is a lower threshold to the complexity of the meaningful sequences. IOWs, there are more sequences made with English words than sequences which have meaning in English. As a small child would easily understand. Not you.

E.Seigner: "Because you explicitly state by the end: 'Now, I cannot really compute the target space for language, but I am assuming here...' So, after all these calculations, you had to assume once more to arrive at your conclusion."

I say that starting with "Let's go back to my initial statement:" and ending with "Was I wrong? You decide.", after a post in which I have given the calculations which prove what I had only assumed in the initial statement. So, I am not "assuming once more" "after all these calculations". I am only restating the initial assumption, so that readers may judge if my calculations have confirmed it.

Either you are unable to read, or you are simply lying.

gpuccio
November 16, 2014 at 08:16 AM PDT
Zac, I think we are finally getting to the point where some real productive discussion can happen. I will do my very best to keep my frustration in check; please do your very best to follow the argument. I know you are an intelligent person; please don't feign obtuseness.

Zac said: "Not sure why you keep referring to the original string, if we aren't replicating the string."

I say: There are only two ways to produce a Shakespearean sonnet: 1) be Shakespeare, or 2) copy Shakespeare. The reason I am careful to rule out borrowing information from the original string is to eliminate the second option.

Zac said: "The first statement says 'reproduce the string'; the second statement says 'no one is asking to recreate the same sequence.'"

I say: The algorithm is simply asked to produce a sonnet that an observer will be unable to distinguish from a work of Shakespeare.

You say: "We're also still confused on why you want to change it to numbers."

I say: Because representing the sonnet numerically removes it from its context. This prevents you from cheating by borrowing information from the string on the sly.

You say: "We may have to wait for your simulation to be completed, but if you can't express what you want in detail, it's quite possible your simulation will be flawed."

I say: Your inability to comprehend the simple rules is perhaps evidence of a problem on your part rather than with the stipulations themselves.

Zac says: "Furthermore, if your own efforts fail at emulating Shakespeare, it doesn't mean that all such efforts are bound to fail."

I say: I could not agree more. The "game" does not prove that emulations are impossible; it simply evaluates their strength. The power of the "game" is the cumulative realization that each step you make toward Shakespeare requires exponential increases in the complexity of the algorithm.

You say: "Natural selection encompasses the environment, which may represent a non-algorithmic component."

I say: I completely agree, but the process to incorporate information from the environment is necessarily algorithmic. There is no getting around this.

You say: "The abstract indicates they are referring to unitary consciousness, which they don't claim to know exists."

I say: Yes, if consciousness does not actually exist then not being able to produce it is no problem for AI. But we all know it exists.

Peace

fifthmonarchyman
November 16, 2014 at 08:13 AM PDT
gpuccio #528:

gpuccio: "What a mess! I don't know if it is even worthwhile to answer."

Ditto.

gpuccio: "b) I can't see what is 'bad' in the concept of 'good meaning' in English."

You were supposed to be calculating something. When you issue undefined terms which are not even terms, then what is it you are calculating? Nothing worthwhile, I can safely assume. Certainly not anything scientific.

gpuccio: "I assumed 200,000 English words. When someone suggested that there are 500,000, I did the computation again with that number. What should I do: count them one by one? I assumed that 'The average length of an English word is 5 characters.' That was the lowest value I found mentioned. Have you a better value?"

Perhaps you could at least leave out things that mean nothing, such as "good meaning". If you are counting words rather than meanings, it should be easy to leave meanings out. Better values for your variables are not my problem. They are completely your problem.

gpuccio: "'And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.' So, please have the courage to state explicitly the thing that you don't agree with: You don't agree that the set of all sequences which have meaning in English is a small subset of the set of all the sequences which are made of English words? Is that your position?"

My position: When you have no idea how many of those sequences have a "good meaning" in English, then can you say what it is you are calculating? Hardly. Therefore your "anyone will agree" does not follow.

gpuccio: "Then why do you say: 'So, in conclusion after all the heavy computation you still just assume to infer design.'?"

Because you explicitly state by the end: "Now, I cannot really compute the target space for language, but I am assuming here..." So, after all these calculations, you had to assume once more to arrive at your conclusion. Your conclusion: "I am certain that this is not a false positive." In the title you say you'd attempt to calculate, but what you really do is assume and acknowledge that you cannot calculate. Yet by the end you declare as if the calculation had been meaningful to any degree. Sorry, but it wasn't. It wasn't even ridiculous. It was painfully silly.

E.Seigner
November 16, 2014 at 07:51 AM PDT
Zachriel: "Not sure that follows. An algorithm can certainly create internal representations, including of itself."

Zachriel: don't dance around the "representation" word! Do you think that algorithms can create "internal representations" which are subjective experiences? Do you think that an algorithm can subjectively understand if a statement is right or wrong? Do you think that an algorithm can recognize that some process can be used to obtain a desirable outcome? Do you think that an algorithm can do all that, beyond the boundaries of the meanings and functions which have already been coded into its configuration, and the computational derivations of that coded information? You are elegant, but don't be too elegant. :)

gpuccio
November 16, 2014 at 07:13 AM PDT
E.Seigner: What a mess! I don't know if it is even worthwhile to answer. In brief:

a) My confutation of the circularity is not in this thread.

b) I can't see what is "bad" in the concept of "good meaning" in English.

c) You say: "This is a whole bunch of assumptions. Plus it looks like the bad concept of 'good meaning' is playing quite a role here in a crucial computation. Most reasonable readers would have stopped by now, but I force myself to continue." I assumed 200,000 English words. When someone suggested that there are 500,000, I did the computation again with that number. What should I do: count them one by one? I assumed that "The average length of an English word is 5 characters." That was the lowest value I found mentioned. Have you a better value? Finally, I assumed "that a text which has good meaning in English is made of English words." Have you a different opinion? Do you usually build your English discourses by random character sequences, or using Greek words? Should I analyze any of your posts here and see what you are using in them? "A whole bunch of assumptions"! Bah!

d) You say: "Again, on a question deemed important by yourself you are boldly saying you have no idea, but you think anyone will agree with you and that in the end you have proved something. Amazing how things are always on your side even when you have no idea about them." This is utter nonsense. What I said was: "And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset." So, please have the courage to state explicitly the thing that you don't agree with: You don't agree that the set of all sequences which have meaning in English is a small subset of the set of all the sequences which are made of English words? Is that your position?

e) You say: "So, in conclusion after all the heavy computation you still just assume to infer design." And in support of that, you quote me in this way:
Let’s go back to my initial statement: Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here…As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.
But that is an explicit and completely unfair misrepresentation. My statement was:
Let's go back to my initial statement: "Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski's UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive. Was I wrong? You decide."

It should be clear to anyone who understands sequences with good meaning in English (apparently, not to you) that I am speaking here of "my initial statement". Can you read? Then why do you say: "So, in conclusion after all the heavy computation you still just assume to infer design."? (Emphasis mine)

f) Finally, to close in glory, you say: "You say, 'I am aware of no simple algorithm which can generate english sonnets from single characters.' Here you talk about single characters, while the basis of your computation was 'a pool of 200000 English words.'" OK, you have understood nothing at all. Please, read again my OP. The search space is defined by characters: the probability of getting a sequence of 600 characters which is made of English words. The target space is defined as the set of sequences of 600 characters which are made of English words. Read again; maybe you will understand. After all, my post has good meaning in English.
gpuccio
November 16, 2014 at 07:08 AM PDT
gpuccio: "If Penrose and others are right, and human cognition cannot be explained algorithmically"

That's something that's not been shown.

gpuccio: "that is bad news for strong AI theory."

There's nothing to say that AI has to be algorithmic.

gpuccio: "If consciousness is not only an aside of objective computations, and if the subjective reaction to conscious representations is an integral part of cognition (which is exactly what I believe), then a designer can do things that no algorithm, however complex, will ever be able to do: IOWs, generating new specifications, new functional definitions, and building original specified complexity linked to them."

Not sure that follows. An algorithm can certainly create internal representations, including of itself.

fifthmonarchyman: "Again the algorithm can have access to anything it wants in the entire universe it just can't borrow information from the original string."

Not sure why you keep referring to the original string, if we aren't replicating the string. Shakespeare had knowledge of many other artists, and certainly integrated this knowledge into his own work. A Shakespeare emulator should certainly be able to do this.

fifthmonarchyman: "For all the programmer knows the string of numbers could represent a protein string or the temperature fluctuation in a heat source."

If we were to make a Shakespeare emulator, we would certainly work in English, just like Shakespeare, and would try different rhymes in English, just like Shakespeare.

fifthmonarchyman: "The algorithm's job is to reproduce the string sufficiently enough to fool an observer without borrowing information from the original string."

fifthmonarchyman (from above): "No one is asking to recreate the same sequence. In fact an exact recreation would be strong evidence of cheating. All I'm looking for is a string that is sufficiently 'Shakespearean' to fool an observer."

This is why we are confused. The first statement says "reproduce the string"; the second statement says "no one is asking to recreate the same sequence." We're also still confused on why you want to change it to numbers. We may have to wait for your simulation to be completed, but if you can't express what you want in detail, it's quite possible your simulation will be flawed. Furthermore, if your own efforts fail at emulating Shakespeare, it doesn't mean that all such efforts are bound to fail.

fifthmonarchyman: "I feel the frustration rising again"

Relax. It's just a discussion about ideas.

fifthmonarchyman: "Once you understand that strong AI is a fools errand Darwinian evolution is shown to be impossible by definition."

Wouldn't that understanding come from evidence? As of this point, there is no proof for your position, while artificial intelligence seems to be progressing long past where people once only dreamed. Consider chess, once considered the pinnaculum æstimationis of human intelligence.

fifthmonarchyman: "Darwinism claims that an algorithm (RM/NS + whatever) can explain everything related to biology including human consciousness."

Natural selection encompasses the environment, which may represent a non-algorithmic component.

fifthmonarchyman: http://arxiv.org/abs/1405.0126

The abstract indicates they are referring to unitary consciousness, which they don't claim to know exists.

Zachriel
November 16, 2014 at 06:46 AM PDT
REC and DNA_Jock: I have refined and checked the analysis on the Clustal alignment of the ATP synthase sequences. My numbers now are as follows:

Positions analyzed: 447 (out of about 500).
Mean conservation at the analyzed positions: 72%.
Median conservation: 77%. That means that 50% of the positions have at least 77% conservation.
FSI according to the Durston method: 1480 bits.
Original approximation made by me by the three-sequences shortcut: 761 bits.
Difference: 719 bits.

Just for the record.

gpuccio
November 16, 2014 at 06:44 AM PDT
gpuccio #518
I am interested in what is true, and I have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular. And again, CSI and dFSCI are only two different subsets of the same thing.
You mean you showed it in OP here? Let's see. From OP:
In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design. One of the objects was a Shakespeare sonnet. [...] In the discussion, I admitted however that I had not really computed the target space in this case...
"Not really computed"? Not a good start. But let's see further.
So, here is the result of my reasonings. Again, I am neither a linguist nor a mathematician, and I will happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize. Let’s start from my functional definition: any text of 600 characters which has good meaning in English. The search space for a random search where every character has the same probability, assuming an alphabet of 30 characters (letters, space, elementary punctuation) gives easily a search space of 30^600, that is 2^2944. IOWs 2944 bits. OK.
Well, if you are not a linguist, then I understand why you use the unscientific term "good meaning" as if it meant something. But if you are also not a mathematician, and neither am I, then what are we talking about? I remember, we are talking about that you "have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular." The problem here is that you start already with at least one bad concept: "good meaning". Anyway, I hope your definition of target space is correct, so let's move on.
Now, I make the following assumptions (more or less derived from a quick Internet search: a) There are about 200,000 words in English b) The average length of an English word is 5 characters. I also make the easy assumption that a text which has good meaning in English is made of English words. For a 600 character text, we can therefore assume an average number of words of 120 (600/5).
This is a whole bunch of assumptions. Plus it looks like the bad concept of "good meaning" is playing quite a role here in a crucial computation. Most reasonable readers would have stopped by now, but I force myself to continue.
IOWs, 2113 bits. What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200000 English words. Or at least, a good approximation of that number. It’s a big number.
Astonishingly, I find something to agree with: "What is this number?... It's a big number." :) Feeling generous, I think I can also agree that "we can derive from a pool of 200000 English words". But it will soon be clear that I don't agree with you on what we derive from the pool of English words and for what purpose.
And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.
Again, on a question deemed important by yourself you are boldly saying you have no idea, but you think anyone will agree with you and that in the end you have proved something. Amazing how things are always on your side even when you have no idea about them.
It’s easy: the ratio between target space and search space: 2^2113 / 2^ 2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here)
An easy thing that required a correction. Noted. In conclusion:
Let’s go back to my initial statement: Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here...As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.
So, in conclusion after all the heavy computation you still just assume to infer design. You made a bunch of assumptions all along, so what's one more, right? But why the computation then? I know, you were supposed to show something. And you did - you showed off. It's been quite a show. Thanks.

Unfortunately none of this proves anything. You didn't compute your brand of FIASCO. You assumed it. You assumed it right off the bat with "good meaning in English". In conclusion after all the computation you declare that you are certain that this is not a false positive, while all you did was make assumptions every step of the way.

You say, "I am aware of no simple algorithm which can generate english sonnets from single characters." Here you talk about single characters, while the basis of your computation was "a pool of 200000 English words". Now, I am not a mathematician, but I am a linguist and I notice a glaring difference like this. Characters are not words, and I am sure they make all the difference in computation. Well, not in your case, because you were not really computing anyway.

As a final note, let's recall you said, "I am interested in what is true, and I have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular." Actually, you clearly showed that you are unable to define anything correctly:

- You brought in undefined "good meaning", therefore missing an opportunity to define something crucial in your attempt at computation.
- In the end, you mixed up "words" and "characters".
- Every step of your demonstration - including the conclusion - involved assumptions.
- In the OP you were computing something called dFSCI for a Shakespeare sonnet, not showing whether CSI was circular or not.
- Therefore you didn't show, clearly or otherwise, the non-circularity of CSI.
- Therefore it's obvious that you don't know what "clearly shown" means.
- You most likely are not interested in what's true.
Have a lovely rest of the weekend.

E.Seigner
November 16, 2014 at 12:41 AM PDT
Gary S. Gaulin: I don't know if I have misinterpreted what you were saying. If you were saying that you are not so sure that strong AI theory claims that, then my answer in post 523 is appropriate. If you were only claiming that you are not so sure that consciousness cannot be produced algorithmically, then I apologize: you are certainly entitled to your opinion on that, and a cautious attitude is always fine in science. As for me, my opinion about this specific problem is not cautious at all: it is very strong. And I absolutely agree with fifthmonarchyman on the points he has made.

gpuccio
November 15, 2014 at 10:45 PM PDT
Gary S. Gaulin: "'Strong AI claims that human consciousness can be produced algorithmically.' I'm not so sure. Too early to know either way."

But, for the purposes of this discussion, I have defined "strong AI theory" as the theory which claims that consciousness can be produced algorithmically. I agree that the term can be used in a different sense, and that's why I have specified the meaning I meant.

gpuccio
November 15, 2014 at 10:39 PM PDT
Me_Think: "Every process has an algorithm. If you disprove an algorithm, all it means is there is a better algorithm which you don't know. It doesn't mean the process doesn't exist, and what do you mean by 'Strong' algorithm?"

Let's say that some processes can only be described by using a Turing Oracle. The idea is that consciousness can act as a Turing Oracle in cognitive algorithms, but that the Oracle itself is not an event which can be explained algorithmically and is not computable.

gpuccio
November 15, 2014
10:36 PM
PDT
fifthmonarchyman at #509: Exactly!
gpuccio
November 15, 2014
10:33 PM
PDT
Gary S. Gaulin: I absolutely agree that AI theories and models are important, both for ID and in general. I would say that AI theories have a lot to say about the "easy problem" of consciousness (according to Chalmers). But they can say nothing, and have said nothing, about the "hard problem": what consciousness is, why it exists, why subjective experiences take place in us. That's why I have specified: "let's call it strong AI theory (according to how Penrose uses the term). A theory which very simply assumes that consciousness, one of the main aspects of our reality, can be explained by complex configurations of matter." My reasoning applies only to this definition of "strong AI theory", and not to AI theory in general.
gpuccio
November 15, 2014
10:31 PM
PDT
514 & 515 gpuccio: That was funny! Thank you. :)
Dionisio
November 15, 2014
10:26 PM
PDT
Me_Think: "So if CSI is circular (as per ID proponent himself in another thread), does it mean dFSCI / FSCI/O too are circular?" None is circular. And I don't agree that an "ID proponent himself in another thread" has said anything like that (although he has probably expressed things badly). I have not even read that thread (no time), but frankly I am not interested in threads about the opinions of a person, or about how he says things. I am interested in what is true, and I have already clearly shown that CSI, at least if correctly defined empirically, is certainly not circular. And again, CSI and dFSCI are only two different subsets of the same thing.
gpuccio
November 15, 2014
10:24 PM
PDT
Me_Think: Penrose is playing a difficult game: defending a correct argument while trying, just the same, to find an explanation which does not depend on the simple recognition that consciousness cannot be explained by some configuration of matter. IOWs, the consequences of his Godel argument are deeper than he himself thinks, or is ready to admit. That reminds me of some more "open" scientists (see Shapiro) who are ready to criticize aspects of neo-darwinism, but are not "ready" to accept ID as a possible alternative, and resort to abstruse theories which are even worse than neo-darwinism.
gpuccio
November 15, 2014
10:19 PM
PDT
fifthmonarchyman: "Once you understand that strong AI is a fool's errand, Darwinian evolution is shown to be impossible by definition. It's pretty much that simple." And I wholeheartedly agree! :)
gpuccio
November 15, 2014
10:13 PM
PDT
Dionisio: "scientific? What empirical evidences is it based on? Sci-Fi literature?" OK, again I admit my error! :)
gpuccio
November 15, 2014
10:12 PM
PDT
Dionisio: "Did you mean '…strong AI theory…'?" Ehm... yes! Thank you for correcting me. :) I suppose someone will say that it was a Freudian slip! :)
gpuccio
November 15, 2014
10:11 PM
PDT
Gary Gaulin says, "I'm not so sure. Too early to know either way."

I say: What evidence are you waiting for? What would possibly convince you of the futility of the strong AI endeavor? The paper I just linked to provides a mathematical proof that strong AI is impossible. Would that sort of thing help you to make a decision?

Just curious. Peace
fifthmonarchyman
November 15, 2014
10:09 PM
PDT
Me_Think says, "Every process has an algorithm. If you disprove an algorithm, all it means is there is a better algorithm which you don't know."

I say: What evidence do you have for this? What possible evidence could you ever have for such a claim? This statement is simply metaphysics, and very poor, long-discredited metaphysics at that. Lots of things are demonstrably not the result of algorithms. Transcendental numbers and consciousness, for example. Check this out: http://arxiv.org/abs/1405.0126

Me_Think says, "What do you mean by 'Strong' algorithm?"

I say: I don't think I ever used that term. Strong AI, maybe, but not strong algorithm.

Peace
fifthmonarchyman
November 15, 2014
10:02 PM
PDT
Strong AI claims that human consciousness can be produced algorithmically.
I'm not so sure. Too early to know either way.
Gary S. Gaulin
November 15, 2014
09:45 PM
PDT
fifthmonarchyman @ 509: Every process has an algorithm. If you disprove an algorithm, all it means is there is a better algorithm which you don't know. It doesn't mean the process doesn't exist. And what do you mean by 'Strong' algorithm?
Me_Think
November 15, 2014
09:43 PM
PDT
Me_Think asks, "How is a strong AI related to unguided evolution?"

I say: Let's start with this. The processes producing AI and unguided evolution are each algorithmic. Darwinism claims that an algorithm (RM/NS + whatever) can explain everything related to biology, including human consciousness. Strong AI claims that human consciousness can be produced algorithmically. The two ideas are functionally equivalent. Disprove one and the other fails necessarily.

Suppose you were to acknowledge that there are things, like consciousness, that algorithms like (RM/NS + whatever) are not equipped to produce. I would say: welcome to the ID camp; that is what we've been saying all along ;-)

Peace
fifthmonarchyman
November 15, 2014
09:31 PM
PDT