Uncommon Descent Serving The Intelligent Design Community

Mark Perakh again and again


Mark Perakh has weighed in with yet another screed against my work (go here). He seems out of his element. I’m still awaiting his detailed critique of “Searching Large Spaces” — does he even understand the relevant math?

Comments
"However, we have tons of examples of how natural selection and evolution have occurred in the world." Agreed. I've little if any doubt in descent with modification from a universal common ancestor. The question is whether RM+NS is the primary mechanism. There is no empirical evidence that it is. It's an extrapolation of empirical observation. The undeniable truth of the matter is that no one has observed RM+NS creating a novel cell type, tissue type, organ, or body plan. It has been observed creating only trivial change. That these trivial changes can add up to remarkably complex new structures is pure unadulterated speculation propped up by a big fat argument from ignorance: "if not RM+NS, then what else could be responsible?" Intelligent design could be responsible. We know that intelligent designers exist in nature and we know these designers can cause directed genetic changes to living organisms. We know these designers exist because we ourselves are a known case of designers. Thus the possibility of design is proven. The question is no longer "is intelligent design possible?" because we now know that it is. The question today is "when did intelligent design first appear in nature?"DaveScot
August 18, 2005 07:24 AM PDT
DaveScot, Whoa! I don't think those ideas follow one another at all! The ID argument fundamentally relies (as far as I can tell) on showing that there is almost zero probability that what we see in the world arrived by chance alone. To do so, the probabilities have to be calculated correctly. However, we have tons of examples of how natural selection and evolution have occurred in the world. It is perfectly logical to assume that those processes, over billions of years, have produced what we see. Let us suppose there are two parameters: Natural Selection and Intelligent Design. There is tons of evidence for natural selection, so we can confidently add it to our model. To add another parameter to the equation, the burden is on the new parameter. It's like doing a likelihood ratio test. You add another parameter if the fit is significantly better. ID hasn't proven itself significantly better than just the evolution parameter alone, which we know is an essential parameter. So why add it? So, I think the burden is on you.blockheadster
August 17, 2005 02:29 PM PDT
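Blockheadster's likelihood-ratio suggestion above can be sketched concretely. The data, the fixed sigma, and the nested Gaussian models below are illustrative assumptions, not anything from the thread; the point is only the mechanics of deciding whether an extra parameter earns its keep:

```python
import math

# Hypothetical data: trait measurements in two environments.
group_a = [4.8, 5.1, 5.0, 4.9, 5.2]
group_b = [5.0, 5.3, 4.9, 5.1, 5.2]

def log_likelihood(data, mu, sigma=0.2):
    """Gaussian log-likelihood with a fixed, assumed-known sigma."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in data)

# Null model: one parameter (a single shared mean).
pooled = group_a + group_b
mu0 = sum(pooled) / len(pooled)
ll_null = log_likelihood(pooled, mu0)

# Alternative model: one extra parameter (separate means per group).
mu_a = sum(group_a) / len(group_a)
mu_b = sum(group_b) / len(group_b)
ll_alt = log_likelihood(group_a, mu_a) + log_likelihood(group_b, mu_b)

# Likelihood-ratio statistic; under the null it is ~ chi-square with df=1,
# whose 5% critical value is 3.84.
lr = 2 * (ll_alt - ll_null)
print(f"LR statistic: {lr:.3f}")
print("extra parameter justified" if lr > 3.84 else "extra parameter not justified")
```

The extra parameter is only added when the improvement in fit exceeds what chance alone would typically produce.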
Salvador I'm afraid you lost me re: syntax and semantics. I understand quite well the concepts in computer languages but I don't see the connection to amino acid sequences in proteins. What do you propose are the syntax rules that proteins follow? The syntax of computer languages is specified in exacting detail but I don't know of any syntax rules that have been discovered in regard to protein construction. I'm not saying protein syntax doesn't exist; I'm saying nobody has a clue what it might be or whether it even exists. blockheadster If I understand you correctly you're saying it's up to IDists to characterize the potential number of protein sequences that can do the same job as a given sequence and that until then we should assume that it could be a large number and thus make the odds of accidental occurrence of useful proteins reasonable. By the same logic then it's up to Darwinists to solve the DNA/protein chicken/egg paradox and until that time we should assume that it can't be done by chance and that design is true until proven false. You can't have your cake and eat it too. What's good for the goose is good for the gander, in other words. There are "gaps" in ID theory that IMO hinge around the bounding of probabilistic resources just as there are gaps in standard evolutionary theory. Putting design in the gaps is not satisfactory. Neither is putting chance in the gaps satisfactory. One should acknowledge that chance and/or design are potential gap fillers and hope further discovery will cast more light on the situation. IMO being able to predict how an arbitrary amino acid chain will fold and function is a requisite to further understanding the probabilities that chance must overcome. On the other bit about specification - I think it's irrelevant whether specification happens by problems finding solutions or solutions finding problems. In either case functionality is found and employed. In either case it can be by design or accident.
Design detection seeks to quantify the probabilities; whether it be the probability of a solution finding a problem or of a problem finding a solution is not really salient.DaveScot
August 17, 2005 02:01 PM PDT
I should note, Jason Rosenhouse thinks your calculations are impeccable. "As an exercise in formal mathematics the paper seems unobjectionable. I have never questioned Dembski's ability to manipulate symbols in accordance with the rules of algebra and calculus." http://evolutionblog.blogspot.com/2005/08/dembski-and-perakh.html Jason does point out the unanswered question, "I do not know if Mark has bothered to slog through Dembski's paper". Indeed, Mark could comment on whether he understands the math, and whether Bill's calculations are unobjectionable.scordova
August 17, 2005 11:58 AM PDT
The words "function" and "metabolize" are specifications. They are gleaned from our everyday ideas of "function". Function is a generalized blueprint. There is a coincidence between our concept of function (conceptual information) and the physical evidence of function (physical information). We are able to use human defined ideas and concepts and project them quite naturally onto biotic reality. Is this a fluke of post-diction or is this evidence of design? Though I believe protein function is evidence for design, within biotic reality, a more clear cut case for design is the existence of computers, language processors, digital signal processors, operating systems, decoders, encoders, compilers, memory storage systems, digital error correction, feed-back control architectures, etc. in biology. One is hard pressed to say this is a post-dictive projection as Michael Shermer might argue, as the complexity is just too high.... If it is not post-diction, is it Darwinian mechanisms and chemical evolution? No. Thus, design is a reasonable possibility. It is not "argument from ignorance"; it is "proof by contradiction". We have contradicted the adequacy of Darwinian mechanism as a causal explanation. Does that automatically mean design? Maybe, but the bottom line is Darwinian mechanisms, in and of themselves, are shown insufficient.scordova
August 17, 2005 08:42 AM PDT
Davescot, I think the issue of a target is outside the domain of biochemistry. From a biochemist's perspective, there is a specified function, say, metabolizing lactose. Then, from a biochemist ID perspective, one can say "what is the probability that a protein would form that metabolizes lactose?" But there are two problems with that. First, natural selection does not act to evolve a targeted function that is pre-specified. Natural selection didn't act to "metabolize lactose." Secondly, even if this were true, the evolution of an enzyme that metabolizes lactose, from the perspective of probability, is not a search starting from an arbitrary location within probability space. One would have to consider the evolution of b-galactosidase considering that the machinery that produces lactose, galactose, glucose etc. is already available and may provide important starting points. Finally, the argument that Giff makes seems reasonable. While we don't know, for example, if YZZ or ZZY or, for that matter, DKNY, can perform the same function or not, knowledge of whether this is the case is essential for calculating the probabilities that a certain function would evolve (ignoring the above arguments). You are right, this doesn't falsify ID, but the burden is on ID to come up with reasonable numbers. These calculations are essential and all the probabilities must be conditioned on them. Until then, the probability statements don't have any weight. You could try a Bayesian approach - that way you could include a prior distribution for the likelihoods. Then, the argument will just boil down to an argument about the prior.blockheadster
August 17, 2005 08:19 AM PDT
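The Bayesian approach blockheadster mentions can be sketched in a few lines. The likelihood numbers here are purely hypothetical placeholders; the sketch only shows how, as the comment predicts, the argument "boils down to an argument about the prior":

```python
# Hypothetical likelihoods of the observed data under each hypothesis
# (placeholder values, not derived from any real calculation).
lik_chance = 1e-12
lik_design = 1e-3

def posterior_design(prior_design):
    """Posterior probability of design for a given prior, by Bayes' theorem."""
    num = prior_design * lik_design
    den = num + (1 - prior_design) * lik_chance
    return num / den

# The same data yield opposite conclusions under different priors.
for prior in (0.5, 1e-6, 1e-12):
    print(f"prior={prior:g}  posterior={posterior_design(prior):.6f}")
```

With an even prior the posterior is nearly 1; with a vanishingly small prior it is nearly 0, even though the likelihoods never change.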
(I mis-spelled in my previous post the word "compiler" as "comiler".) A good APPROXIMATE example of a system that can recognize conceptual information without reference to meaning is Microsoft Word's ability to detect spelling and grammar errors as you type whatever meaningful or meaningless sentences you wish. Human language is not a formal language, so the Microsoft Word example is only approximate. A better example would be computer language processors, which, in contrast, are more EXACT in their "front ends" which check spelling and syntax. These "front ends" serve as examples of systems capable of detecting conceptual information without reference to meaning. Front ends of computer language processors are excellent examples of systems being able to detect conceptual information (language constructs) in physical information (electronic bit stream of characters). The coincidence of the conceptual and physical information is evidence that the physical bit stream of characters in a computer program evidences CSI. Salvador Cordovascordova
August 17, 2005 08:12 AM PDT
The order does not matter, and neither can there be repeats. I'm afraid Mark Perakh showed a rather distorted understanding of specifications (conceptual information). He may be a physicist, but that does not exempt him from making a basic mistake in understanding ID literature. Perakh, Shallit, and Elsberry might do well to revise their understanding of CSI, as it is grossly distorted. Until they can even show they can represent the rudimentary basics of CSI accurately, they're not in a position to critique the larger issues in Dembski's latest papers. They do not have to agree CSI exists, but they should be able to accurately represent what the claims of IDists are before trying to refute them. As it stands they're only tearing down straw-man definitions of CSI, not the definitions which IDists have given. Again, I emphasize, Perakh was the only one among the three that gave Bill the courtesy of using Dembski's wording for Dembski's key concepts, namely CSI. That would seem to be a self-evident approach if one wishes to critique Dembski's work, but Shallit and Elsberry could not even bring themselves to quote a few sentences from Dembski's book that are crucial in the definition of CSI. Now why is that? To Perakh's credit, he at least quoted Dembski's definition of CSI verbatim; however, Perakh inaccurately represented a crucial component of CSI, namely, conceptual information. Perakh said, "Therefore, insofar as we deal with text, the term conceptual information seems to coincide with the meaning of that text." Perakh has it wrong, I'm afraid. Conceptual information may or may not have meaning, and meaning is not what qualifies a specification as having conceptual information. For examples, look at the strings in sets T1 and T2 in my ARN thread. Does one have to infer MEANING to be able to classify those strings as conceptual information? Absolutely not. This point seems to have been missed by Perakh.
Anyone with a background in compiler theory and language processors understands that grammatically correct constructs can be recognized independent of meaning. In computer languages we have what are known as lexical analyzers and parsers, which deal with syntax, not semantics. They are able to recognize conceptual constructs devoid of meaning. This is standard operating practice in computer science in the construction of the front end of computer language processors. Basic stuff... Conceptual information deals with syntax. Dembski is following, in that regard, what are commonplace understandings in computer and information sciences regarding pattern matching. I'm afraid Dr. Perakh's physics are irrelevant to the discussion of formal language theory, and his error in describing Dembski's work suggests a degree of unfamiliarity with ideas that are commonplace in computer science and information theory. Regarding Dembski's latest papers, they may dispute certain assumptions being offered, but they have yet to say whether the mathematical derivations offered by Bill are logically correct deductions from the starting assumptions. You can see, Perakh's thread has just become another thread of vitriol and derision; with the happy exception of Elsberry's participation, little technical substance is being offered. Though I do disagree with Elsberry and Perakh, I do credit them with not descending to the low level of derogatory meanness exemplified by the other participants at Panda's Thumb. I applaud their good faith attempts at civil discussion. Perakh has shown that he is bright and capable, and I have commended his work where I feel appropriate. But regarding his understanding and ability to represent Dembski's work accurately, his ideas need some revision. Salvador Cordovascordova
August 17, 2005 08:01 AM PDT
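Salvador's point about front ends checking syntax without semantics can be illustrated with a toy grammar. This is a minimal sketch, not a real compiler front end; the grammar and the function name are made up for illustration:

```python
import re

# Toy grammar: expr := operand (op operand)* ; operand := number | identifier
EXPR = re.compile(r"^\s*\w+(\s*[+\-*/]\s*\w+)*\s*$")

def is_syntactically_valid(source: str) -> bool:
    """A 'front end' check: accept or reject on syntax alone.
    Whether the identifiers refer to anything (semantics) never enters."""
    return bool(EXPR.match(source))

print(is_syntactically_valid("flim + flam * 42"))   # True: well-formed, meaning unknown
print(is_syntactically_valid("+ + oops"))           # False: violates the grammar
```

The checker happily accepts nonsense identifiers, which is exactly the claim: a lexer/parser recognizes conceptual constructs (grammatical form) with no reference to meaning.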
Giff What you describe is a problem in probabilistic resources and it's a valid criticism, but the criticism is an argument from ignorance right from the word go. Correct me if I'm wrong, but you're basically saying that if the probabilistic target is a protein with sequence XYZZX, how do we know that (for example) proteins XYZZXX, XXYZZX, and so on ad infinitum might not fulfill the specification (function) for that particular protein. In the archery scenario this would be equivalent to the archer having many (perhaps a nearly infinite) number of targets his arrow can hit. It's a valid criticism and it's an area where more research is required. The criticism is an argument from ignorance as it posits that an unknown number might be significant, i.e. that a significant number of other protein sequences could work as well or well enough. The research that is required is pretty much the holy grail of biochemistry - being able to predict how any arbitrary amino acid sequence will fold. Once we can do that we should be able to, in principle, bound the number of alternative protein shapes that would work as well or well enough as any given protein. However, due to the nature of arguments from ignorance, that criticism can never go completely away because one can always make the claim that not every other possible protein shape has been tested for efficacy. But at some point we have to accept limitations on knowledge and reject arguments from ignorance lest all progress stall. Is that point now on this argument? Not in my mind. I at least want to see the protein folding problem solved so we can model with a computer the properties of arbitrary amino acid sequences and get a good idea of how many other targets would get the job done. Once that number is known (instead of guessed at out of ignorance) then it can be cranked into the design detection formulae to see what the result is.
What I want to make clear is that the criticism in no way falsifies design detection in protein based structures. What it does is point out an area where the data that design detection relies on for reliable output needs to be more robust. No algorithm can produce reliable output data without reliable input data. Fortunately for all, the quest to solve the protein folding problem isn't driven by design detection's need for better data nor evolution's need for better data. Its solution will foster a great leap forward for medicine and genetic engineering and that's why I called it the holy grail of biochemistry.DaveScot
August 17, 2005 07:48 AM PDT
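The effect DaveScot describes, where the number of acceptable targets changes the odds, can be put in rough numbers. The sequence length and the target counts below are assumptions chosen only to show the scaling, not measured values for any real protein:

```python
import math

# Illustrative numbers only: a chain of length L over 20 amino acids,
# with n_functional sequences ASSUMED (not measured) to do the job.
L = 100
total_sequences = 20 ** L  # size of the whole sequence space

for n_functional in (1, 10**20, 10**50):
    p_hit = n_functional / total_sequences   # chance a random draw works
    bits = -math.log2(p_hit)                 # the same number on a log scale
    print(f"targets={n_functional:.0e}  P(random hit)={p_hit:.3e}  ({bits:.0f} bits)")
```

Even granting an enormous 10^50 functional sequences, the probability of a random hit stays around 10^-80; the unknown target count shifts the answer by many orders of magnitude, which is why bounding it matters to both sides.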
Does the order of the 8 bit subsets in T2 matter? Can there be repeats of an 8 bit string within T2? Can a set of T2 be like this: { 0000,0000,0000,0000 }blockheadster
August 17, 2005 07:03 AM PDT
It's a set, not a solid number. ;-)Giff
August 17, 2005 06:47 AM PDT
Perhaps not the place for this discussion, but regarding those calculations for T1 and T2 on that web page: You say: "T2 occupies 4 of the 256 possibilities in Omega Space" I don't understand. For each T1 string, there are 256 possibilities, yes. But for T2, there are 2^32 possibilities. T2 doesn't represent 4/256, unless I'm misunderstanding something. Maybe I don't understand this Omega function.blockheadster
August 17, 2005 06:46 AM PDT
Giff, His definition of CSI accounts for multiple strings in that a specification can have multiple target strings. His book No Free Lunch describes it. I've gone through sample calculations to illustrate how to count the number of bits and improbability in a multi-target specification. And that will also show how one describes the situation where there is more than one acceptable string: http://www.arn.org/cgi-bin/ubb/ultimatebb.cgi/ubb/get_topic/f/12/t/001549/p/4.html?#000141 Blockheadster, The requirement that CSI is specified by non-postdictive specifications prevents the problem you are concerned about, but that is better addressed by reading Bill's books rather than trying to learn through exchanges in a blog. The details are too involved to answer in the short space we have here. Salvador Cordovascordova
August 17, 2005 12:40 AM PDT
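The multi-target counting Salvador points to reduces, on the -log2 measure used in this thread, to a one-line formula. This is a minimal sketch assuming the target set T sits inside a space Omega of equally likely strings:

```python
import math

def specified_information_bits(target_count: int, omega_size: int) -> float:
    """Bits of specified information for a multi-target specification:
    -log2(|T| / |Omega|), where T is the set of acceptable strings."""
    return -math.log2(target_count / omega_size)

# Example: 32-bit strings (|Omega| = 2**32) with 4 acceptable targets.
omega = 2 ** 32
print(specified_information_bits(1, omega))  # 32.0 bits for a single target
print(specified_information_bits(4, omega))  # 30.0 bits: each doubling of T costs one bit
```

Allowing more acceptable strings lowers the bit count logarithmically, which is how a specification with multiple targets is scored on this measure.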
Hey - this is a bit off topic, but I had a question for someone familiar with Dr. Dembski's work on a deeper level than myself. A criticism I've heard is that he works from a specific instance and bases his probabilities on that, rather than all workable instances. For instance, it is very unlikely that a random 18-character string will come up with "to be or not to be" - but it is far less unlikely that a random 18-character string will come up with some intelligible English phrase. The universal probability bound for CSI seems based on the former rather than the latter. Can someone point me to where Dr. Dembski deals with this objection?Giff
August 16, 2005 07:52 PM PDT
I wonder if Perakh would debate with Dembski regarding the math and only the math? That would be *most* interesting to see.jzs
August 16, 2005 06:18 PM PDT
Is he supposed to gain the moral upper hand with that last paragraph about scientific papers and languages? Has he ever seen any of Richard Dawkins's replies to David Berlinski? Anyone find it ironic that he won't address Dembski's mathematical arguments because he declares offhand that they have nothing to do with evolution, and then says Dembski was something akin to immoral for ignoring some of his opponents' critiques?Ben Z
August 16, 2005 04:44 PM PDT
Mark Perakh: still hubristic and still spreading t Mark Perakh, the "Boris Yeltsin of Higher Education," posted this diatribe to Panda's Thumb...Huperborea
August 16, 2005 02:03 PM PDT
You guys are not correct. Google Scholar isn't the authority. He has 21 articles in Web of Science, from 1975 to 1988. Also, can someone explain to me how the search for a small target in a large probability space is not the same problem as, say, flipping a coin a billion times, and then determining a strategy to determine the exact sequence of coin flips? Thanks.blockheadster
August 16, 2005 11:12 AM PDT
I must correct you, Dave. It seems that if you filter out all of Perakh's anti-ID articles, a Google Scholar search DOES turn up a small but significant body of work in his true area of specialization. http://tinyurl.com/7uyln dave
August 16, 2005 10:33 AM PDT
Correction: http://scholar.google.com/scholar?num=100&hl=en&lr=&q=%22m+perakh%22 Mark did publish a few things back in the 1970's in his area of expertise. My initial search missed it. His useful output in material science came to a screeching halt 30 years ago with one lone exception in 1985. Mark's doctoral degree dates back to 1949 and his only significant outburst of useful work was in the 1970's. It appears he was out to lunch the rest of his life and now in his dotage he's diddling with NeoDarwinism. Maybe he's operating under the theory "if at first you don't succeed, try try again".DaveScot
August 16, 2005 07:25 AM PDT
http://scholar.google.com/scholar?as_q=&num=100&btnG=Search+Scholar&as_epq=Mark+Perakh&as_oq=&as_eq=&as_occt=any&as_sauthors=&as_publication=&as_ylo=&as_yhi=&hl=en&lr= It appears that Perakh has published nothing in his area of expertise (physics and materials science). A rather dismal record for a professor emeritus of physics. http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=/netahtml/search-bool.html&r=19&f=G&l=50&co1=AND&d=ptxt&s1=perakh&OS=perakh&RS=perakh He is one of five named inventors on a single patent granted in 1980. Woo-woo! Mark appears to be your usual outspoken Darwinist - a nobody who's done nothing notable in his field of expertise vainly trying to make a mark outside his field of expertise. Fits right in with poseurs Wesley Elsberry, Ed Darrel, Nick Matzke, Ed Brayton, Eugenie Scott, et al. But hey, he got a special doctoral degree in the Soviet Union that's greater than any doctoral degree you can get in the United States. With Russian physics PhD's like Mark it's no wonder the Soviet Union lost the cold war.DaveScot
August 16, 2005 07:06 AM PDT