Uncommon Descent Serving The Intelligent Design Community

Oh, you mean, there really is a bias in academe against common sense and rational thought?


Jonathan Haidt recently decided, for some reason, to point out the obvious to a group of American academics: that they are overwhelmingly modern materialist statists (liberals).

He polled his audience at the San Antonio Convention Center, starting by asking how many considered themselves politically liberal. A sea of hands appeared, and Dr. Haidt estimated that liberals made up 80 percent of the 1,000 psychologists in the ballroom. When he asked for centrists and libertarians, he spotted fewer than three dozen hands. And then, when he asked for conservatives, he counted a grand total of three.

“This is a statistically impossible lack of diversity,” Dr. Haidt concluded, noting polls showing that 40 percent of Americans are conservative and 20 percent are liberal. In his speech and in an interview, Dr. Haidt argued that social psychologists are a “tribal-moral community” united by “sacred values” that hinder research and damage their credibility — and blind them to the hostile climate they’ve created for non-liberals.
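For the record, the arithmetic behind "statistically impossible" is easy to check. Here is a minimal sketch in Python, assuming the 1,000 psychologists were a random draw from a population that is 40 percent conservative (conference audiences are not random samples, of course, so this is only an illustration):

```python
from math import comb

n, p = 1000, 0.40   # audience size; assumed base rate of conservatives
k = 3               # conservatives Haidt counted

# Binomial tail: the probability of seeing 3 or fewer conservatives in
# 1,000 independent draws from a 40%-conservative population.
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
print(f"P(X <= {k}) = {p_value:.3e}")  # on the order of 1e-214
```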

Why anyone would bother pointing that out, I don’t know. It’s not a bias against conservatives, anyway; it’s a bias against rationality, which they don’t believe in. Our brains, remember, are shaped for fitness, not for truth. Indeed, these are the very people who channel Barney Rubble and Fred Flintstone for insights into human psychology, and anyone who doubts the validity of such “research” should just shut up and pay their taxes, right?

Well, his talk attracted John Tierney’s attention at the New York Times (February 7, 2011), and Tierney drew exactly the right conclusion (for modern statists and Darwinists):

“If a group circles around sacred values, they will evolve into a tribal-moral community,” he said. “They’ll embrace science whenever it supports their sacred values, but they’ll ditch it or distort it as soon as it threatens a sacred value.” It’s easy for social scientists to observe this process in other communities, like the fundamentalist Christians who embrace “intelligent design” while rejecting Darwinism.

[ … ]

For a tribal-moral community, the social psychologists in Dr. Haidt’s audience seemed refreshingly receptive to his argument. Some said he overstated how liberal the field is, but many agreed it should welcome more ideological diversity. A few even endorsed his call for a new affirmative-action goal: a membership that’s 10 percent conservative by 2020. The society’s executive committee didn’t endorse Dr. Haidt’s numerical goal, but it did vote to put a statement on the group’s home page welcoming psychologists with “diverse perspectives.” It also made a change on the “Diversity Initiatives” page — a two-letter correction of what it called a grammatical glitch, although others might see it as more of a Freudian slip.

I have friends here in Canada who make bets on when the Times will finally, mercifully shut down.

Meanwhile, Megan McArdle weighs in at Atlantic Monthly, driving home the shame:

It is just my impression, but I think what conservatives want most of all is simply recognition that they are being shut out. It is a double indignity to be discriminated against, and then be told unctuously that your group’s underrepresentation is proof that almost none of you are as good as “us”. Haidt notes that his correspondence with conservative students, conducted anonymously, “reminded him of closeted gay students in the 1980s”:

He quoted — anonymously — from their e-mails describing how they hid their feelings when colleagues made political small talk and jokes predicated on the assumption that everyone was a liberal. “I consider myself very middle-of-the-road politically: a social liberal but fiscal conservative. Nonetheless, I avoid the topic of politics around work,” one student wrote. “Given what I’ve read of the literature, I am certain any research I conducted in political psychology would provide contrary findings and, therefore, go unpublished. Although I think I could make a substantial contribution to the knowledge base, and would be excited to do so, I will not.”
Beyond that, mostly they would like academics to be conscious of the bias, and try to counter it where possible. As the quote above suggests, this isn’t just for the benefit of conservatives, either.

All together now, class, spell W-I-M-P.

Someone else writes:

I have a good friend -- I won't out him here, though -- who is a tenured faculty member in a premier humanities department at a leading east coast university, and he's . . . a conservative! How did he slip by the PC police? Simple: he kept his head down in graduate school and as a junior faculty member, practicing self-censorship and publishing boring journal articles that said little or nothing. When he finally came up for tenure review, he told his closest friend on the faculty, sotto voce, that "Actually, I'm a Republican." His faculty friend, similarly sotto voce, said, "Really? I'm a Republican, too!"

That’s the scandalous state of things in American universities today. Here and there–Hillsdale College, George Mason Law School, Ashland University come to mind–the administration is able to hire first rate conservative scholars at below market rates because they are actively discriminated against at probably 90 percent of American colleges and universities. Other universities will tolerate a token conservative, but having a second conservative in a department is beyond the pale.

All together now, class, spell the plural, W-I-M-P-S.

Oh, heck, let me be honest, not snarky: nothing stops the Yanks from freeing themselves from this garbage, unless my British mentor is right, and I hope he isn't: Americans are happy to be serfs, but they don't like being portrayed in the media as hillbillies.

So whenever the zeroes they all gladly pay taxes for threaten to do just that, they promptly cave.

If I die tonight, I want this on the record: if I couldn't be a Canuck, and managed to bear the unbearable sorrow, I'd be a true Yankee hillbilly and proud of it. Do you think we Canucks have so far stood off the Sharia lawfare crowd, with all their money and threats, by worrying much about what smarmy (and sometimes vicious) tax burdens think?

Comments
The thermodynamics- and probability-backed assessment accepted by design thought is that we will not pass stage one: 143 ASCII characters' worth of coherent text in English.

I'll make the point again: all you get out of random number generators is random numbers. You don't get FSCI, and you also don't get any other patterns (non-functional, non-specified information) that match anything the observed cosmos naturally generates, apart from complete randomness. If your example proves that FSCI can't be the result of natural forces, then it also proves that any other non-random pattern seen in nature can't be the result of random forces, and this to me seems obviously wrong!
DrBot
February 10, 2011 at 05:29 AM PDT
PPPS: The Wiki Infinite Monkeys article has some enlightening remarks: _____________________ >> The theorem concerns a thought experiment which cannot be fully carried out in practice, since it is predicted to require prohibitive amounts of time and resources. Nonetheless, it has inspired efforts in finite random text generation. One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: after the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[19]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d... Due to processing power limitations, the program uses a probabilistic model (by using a random number generator or RNG) instead of actually generating random text and comparing it to Shakespeare. When the simulator "detects a match" (that is, the RNG generates a certain value or a value within a certain range), the simulator simulates the match by generating matched text.

Questions about the statistics describing how often an ideal monkey should type certain strings can motivate practical tests for random number generators as well; these range from the simple to the "quite sophisticated". Computer science professors George Marsaglia and Arif Zaman report that they used to call such tests "overlapping m-tuple tests" in lecture, since they concern overlapping m-tuples of successive elements in a random sequence. But they found that calling them "monkey tests" helped to motivate the idea with students. They published a report on the class of tests and their results for various RNGs in 1993.[20] . . . .

Primate behaviorists Cheney and Seyfarth remark that real monkeys would indeed have to rely on chance to have any hope of producing Romeo and Juliet. Monkeys lack a theory of mind and are unable to differentiate between their own and others' knowledge, emotions, and beliefs. Even if a monkey could learn to write a play and describe the characters' behavior, it could not reveal the characters' minds and so build an ironic tragedy.[21]

In 2003, lecturers and students from the University of Plymouth MediaLab Arts course used a £2,000 grant from the Arts Council to study the literary output of real monkeys. They left a computer keyboard in the enclosure of six Celebes Crested Macaques in Paignton Zoo in Devon in England for a month, with a radio link to broadcast the results on a website. One researcher, Mike Phillips, defended the expenditure as being cheaper than reality TV and still "very stimulating and fascinating viewing".[22] Not only did the monkeys produce nothing but five pages[23] consisting largely of the letter S, the lead male began by bashing the keyboard with a stone, and the monkeys continued by urinating and defecating on it. Phillips said that the artist-funded project was primarily performance art, and they had learned "an awful lot" from it. He concluded that monkeys "are not random generators. They're more complex than that. … They were quite interested in the screen, and they saw that when they typed a letter, something happened. There was a level of intention there."[22][24] >> _____________________

We can easily see that the result, after feasible lengths of time and computing effort, is chains an order of magnitude below the cutoff level we have in mind. Let us observe:
128^25 = 4.79*10^52
128^143 = 2.14*10^301
Let us not forget, a typical protein is 300 or so AA long, corresponding to 900 bases, or a config space of:
4^900 = 7.15*10^541
kairosfocus
February 10, 2011 at 05:28 AM PDT
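The magnitudes quoted in the comment above are easy to verify. A minimal sketch in Python, using logarithms; the loop entries are just the figures kairosfocus cites:

```python
from math import log10

# Configuration-space sizes quoted above: 128 ASCII symbols per character,
# 4 bases per DNA position.
for base, exponent in [(128, 25), (128, 143), (4, 900)]:
    e = exponent * log10(base)      # decimal exponent of base**exponent
    mantissa = 10 ** (e - int(e))   # leading digits
    print(f"{base}^{exponent} = {mantissa:.2f}*10^{int(e)}")
# Prints 4.79*10^52, 2.14*10^301 and 7.14*10^541 (the comment rounds
# the last figure to 7.15).
```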
KF, you don't seem to have understood my point.
Sedimentary layers may be complex but they are not specified, and certainly not functionally specified on a digital code.

They are patterns that we observe in nature, and so is FSCI. I am not claiming that sedimentary layers have any function or complexity, just that they are observed and, most importantly, THEY ARE NOT RANDOM. If your random noise generator doesn't produce patterns that we know ARE the result of natural processes, then WHY is it any good as an example of how natural processes can't account for other patterns (FSCI)? If we take your proposition seriously and deal with the implications on their merits, then the inability of a random noise generator to produce ANY structured output (sediments, crystals), regardless of FSCI content, implies that ANY NON-RANDOM PATTERN cannot be the result of natural processes, all because a random noise generator won't, in the lifetime of the cosmos, produce those types of patterns.
DrBot
February 10, 2011 at 05:23 AM PDT
PPS: If you want, we could feed the infinite monkeys result into a test for simple text, and see if we can get a phrase or sentence of relevant length against, say, the database of 1-2 million books in the Gutenberg free ebook record. Sentences that pass the 1,000-bit [~143 ASCII character] test could then go into a pool, and we could see if we can get a paragraph, then a book, by shuffling them or their elements. The thermodynamics- and probability-backed assessment accepted by design thought is that we will not pass stage one: 143 ASCII characters' worth of coherent text in English. If we do so in a credible case -- there will have to be an audit, given cases such as Weasel etc. -- then the inference from FSCI to design is dead, and with it both CSI and IC, thus design theory.
kairosfocus
February 10, 2011 at 05:16 AM PDT
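To make the proposed stage-one test concrete, here is a minimal sketch of the idea in Python. The tiny stand-in lexicon and the 100,000-trial budget are illustrative assumptions; the comment above envisages matching against the full Gutenberg database instead:

```python
import random
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation + " "
# Tiny stand-in lexicon; a real run would check against a large corpus.
LEXICON = {"the", "of", "and", "to", "a", "in", "that", "it", "is", "was"}

rng = random.Random(0)

def looks_like_english(s):
    """Crude stage-one check: every whitespace-separated token is a known word."""
    words = s.lower().split()
    return bool(words) and all(w in LEXICON for w in words)

hits = 0
for _ in range(100_000):
    s = "".join(rng.choice(ALPHABET) for _ in range(143))  # ~1,000 bits of ASCII
    hits += looks_like_english(s)
print(hits)  # expected 0: random 143-character strings essentially never parse
```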
PS: The test I proposed is a test as to whether [d]FSCI -- which is what we see in DNA -- is credibly a product of chance plus necessity without intelligent direction. If that can be shown on a computer, the design inference on dFSCI, and wider FSCI, as a sign of intelligent design is dead. So would be the 2nd law of thermodynamics. A random number circuit driving a computer is a convenient way to implement the test, and is a form of the infinite monkeys type test -- a type of test commonly raised by evolutionary materialism advocates to give the impression that such chance can create modest variations that will be picked up by natural selection and will then give rise to evolutionary development; cf. Dawkins' metaphor of the easy back slope up Mt Improbable, when he should be addressing getting to the shores of Isle Improbable in a sea of non-functional configs. But in all these cases the issue of isolated islands of function is not properly addressed, e.g. notoriously in Dawkins' Weasel, where non-functional codes are tested for proximity to target and are rewarded for increments in proximity. Avida and ev etc. beg the questions of getting to islands of function, and the issue of injecting active information. The incidence of function among configs is far too high, for instance.
kairosfocus
February 10, 2011 at 05:03 AM PDT
Dr Bot: Pardon, but your slip is showing again. Sedimentary layers may be complex, but they are not specified, and certainly not functionally specified on a digital code. (If you dispute this, kindly provide the code, and the function the code leads sedimentary layers to produce, algorithmically or linguistically. WHAT YOU HAVE PROVIDED IS A CASE WHERE WE COME ALONG AND LABEL SEDIMENTARY LAYERS, AND TRANSLATE THE AS-IS FACTS ON THE GROUND SYMBOLICALLY. THE RESULTING CODE IS INDEED FUNCTIONAL AND SPECIFIC, BUT IT IS DUE TO OUR HIGHLY INTELLIGENT INTERVENTION.) DNA is dFSCI, especially as regards protein coding. Accurate description, and associated objective distinction, is the first step in proper scientific reasoning. GEM of TKI
kairosfocus
February 10, 2011 at 04:49 AM PDT
KF, I'm addressing your specific claim that a random noise generator can be used as evidence that natural processes cannot produce FSCI. Your last post didn't address the argument on its merits, so I'll try and re-state the issue to make it clearer.

Your hypothesis appears to be: natural processes cannot produce FSCI - intelligence is required. The experiment proposed is: measure the output of a random noise generator for a (long) period of time; if it doesn't produce FSCI, then we have proven that FSCI can't be generated by natural forces.

If you want to test your random noise idea, and the specific claim that because it won't produce FSCI, natural processes therefore can't produce FSCI, then you ought to start by testing it against natural processes that produce specific patterns. Let's say we can describe sedimentary layers thus:

aaabbbbaaabbaaabbbaabbbbbaaaaaabbbbb
aaabbbaabbbaabababbbaabbbaaabbabbbab

(Under 1,000 bits if we are using ASCII chars.) Here a and b can be any letter in the alphabet, and the number of consecutive a's and b's can vary to a degree. We then re-state your hypothesis: sedimentary layers are not the result of natural processes - intelligence is required. And the experiment: if a random noise generator doesn't produce a pattern that matches that shown above, then we have demonstrated that sedimentary layers are not the result of natural processes.

The probability of a true random noise generator producing this type of output is vanishingly low, so have we proven the hypothesis - sedimentary layers require intelligence - or is our experiment flawed? I'm not actually disagreeing with the idea that FSCI requires intelligence; I'm arguing that your proposed experimental proof is flawed and you shouldn't use it in its current form.
DrBot
February 10, 2011 at 04:35 AM PDT
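DrBot's "vanishingly low" can be put in numbers under the simplest reading of his toy pattern: a short sketch, assuming a uniform random source over the 26 lowercase letters and requiring only that all 72 positions come up 'a' or 'b':

```python
# Probability that 72 uniformly random lowercase letters consist solely
# of 'a' and 'b'; the run-length structure in the pattern would cut this further.
p = (2 / 26) ** 72
print(f"{p:.3e}")  # ~6.3e-81: a pure noise source essentially never emits it
```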
F/N: Onlookers, above Dr Bot inadvertently fell afoul of a distinction made as long ago as 1973 by Orgel, in the very first usage of "specified complexity" in the context of the origin of cell-based life: __________________ >> In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added. Crystals, of course, would by extension include snow crystals, and order enfolds cases such as vortexes, up to and including hurricanes etc. Cf. here.] >> __________________

Just like "[l]umps of granite or random mixtures of polymers", stratigraphical layers due to sedimentation or volcanic deposition etc. are complex but not specified. In particular, the layers that form at a given time and place are a state of affairs tracing to chance plus necessity, but they are an as-is proposition; i.e. there is nothing beyond "this is just what happened here", and some other pattern would do just as well. In the living cell we have very specific, tightly coupled functional organisation and associated information, some of it in digital codes that feed into algorithmic processes such as protein translation. The failure to properly mark such observationally based distinctions is a source of much confusion in this matter. I suggest Dr Bot should read and respond to the ID Foundations 4 post, here. GEM of TKI
kairosfocus
February 10, 2011 at 04:35 AM PDT
Dr Bot: You know that the issue is chance plus necessity, and that in the case of DNA, the issue is the generator of information. You know that natural selection, the second half of the usual expression, culls; it does not create information. So, kindly address the source of digitally coded, functionally specific complex information. GEM of TKI
kairosfocus
February 10, 2011 at 04:08 AM PDT
Dr Bot: Natural processes embrace both mechanical necessity and chance, as I have long since discussed in the first post in the ID foundations series; you may find the flow chart helpful. Information is a highly contingent phenomenon, and the two credible sources of high contingency are chance and intelligence. The point of the FSCI principle is that we routinely observe intelligence giving rise to FSCI, so it is an empirically credible sign of design. We do not observe FSCI originating from chance plus necessity without intelligent direction. And, we have good analytical reason why that is so. Can you show a counter-instance where, on observation, digitally coded, functionally specific complex information of at least 1,000 bits [dFSCI, the relevant subset] originated by chance plus necessity without intelligent design as a material factor? In particular, have you seen at least 170 AA worth of protein-coding DNA [~500 base pairs] originate by chance + necessity without intelligent direction? If not, then only if you can rule out design as a possibility can you claim that, however remote the possibility, the dFSCI in the living cell, original and for the various body plans, came about by chance and necessity. In short, we are back at the question of the a priori imposition of Lewontinian materialism on origins science as a censoring constraint on inference to best explanation:

To Sagan, as to all but a few other scientists, it is self-evident that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [From: “Billions and Billions of Demons,” NYRB, January 9, 1997.]

What is really self-evident is that materialist censorship imposed as an a priori on science cripples it from being able to address the truth on origins in light of the evidence, since everything must fit the materialistic straitjacket. Science in a materialistic straitjacket is an ideology, politics, not what science should be:

. . . an unfettered (but ethically and intellectually responsible) progressive pursuit of the truth about our world, based on observation, experiment, analysis, theoretical modelling and informed, reasoned discussion.

GEM of TKI
kairosfocus
February 10, 2011 at 03:54 AM PDT
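The thresholds kairosfocus works with line up arithmetically; a minimal sketch, assuming the usual 2 bits per DNA base and 7 bits per ASCII character:

```python
from math import log2

print(500 * log2(4))  # 1000.0 -- 500 base pairs at 2 bits/base is the 1,000-bit threshold
print(170 * 3)        # 510   -- 170 amino acids require ~510 bases, i.e. roughly 500
print(143 * 7)        # 1001  -- 143 ASCII characters at 7 bits each, just over 1,000 bits
```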
MathGrrl & DrBot state: 'That isn't at all analogous to the mechanisms identified by modern evolutionary theory, so it wouldn't prove much of anything. MathGrrl is right.' Are you guys finally admitting that computers, using evolutionary algorithms, have failed to violate Dembski and Marks' Law of Conservation of Information (LCI)? But isn't Dawkins' 'Methinks It Is Like A Weasel', or similar goal-directed programs, supposed to be slam-dunk proof for you guys that computers can do what you now say they can't do?

As to this: 'The model doesn't reflect reality so it's not a good basis for an experiment.' So please show, ANYWHERE IN REALITY, especially in life, where functional prescriptive information has been generated.

Three subsets of sequence complexity and their relevance to biopolymeric information - Abel, Trevors. Excerpt: Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC). http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1208958/

The GS (genetic selection) Principle – David L. Abel – 2009. Excerpt: Stunningly, information has been shown not to increase in the coding regions of DNA with evolution. Mutations do not produce increased information. Mira et al (65) showed that the amount of coding in DNA actually decreases with evolution of bacterial genomes, not increases. This paper parallels Petrov’s papers starting with (66) showing a net DNA loss with Drosophila evolution (67). Konopka (68) found strong evidence against the contention of Subba Rao et al (69, 70) that information increases with mutations. The information content of the coding regions in DNA does not tend to increase with evolution as hypothesized. Konopka also found Shannon complexity not to be a suitable indicator of evolutionary progress over a wide range of evolving genes. Konopka’s work applies Shannon theory to known functional text. Kok et al. (71) also found that information does not increase in DNA with evolution. As with Konopka, this finding is in the context of the change in mere Shannon uncertainty. The latter is a far more forgiving definition of information than that required for prescriptive information (PI) (21, 22, 33, 72). It is all the more significant that mutations do not program increased PI. Prescriptive information either instructs or directly produces formal function. No increase in Shannon or Prescriptive information occurs in duplication. What the above papers show is that not even variation of the duplication produces new information, not even Shannon “information.” http://www.scitopics.com/The_GS_Principle_The_Genetic_Selection_Principle.html http://www.us.net/life/index.htm

Dr. Don Johnson explains the difference between Shannon Information and Prescriptive Information, as well as 'the cybernetic cut', in the following podcast: Programming of Life - Dr. Donald Johnson interviewed by Casey Luskin - audio podcast http://www.idthefuture.com/2010/11/programming_of_life.html

And since 'reality' has never been observed generating any information above what was already present in life: Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248

“The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010. Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. (That is a net 'fitness gain' within a 'stressed' environment; i.e., remove the stress from the environment and the parent strain is always more 'fit'.) http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/

And since we know for a fact that information is generated by conscious human intelligence: Stephen C. Meyer - The Scientific Basis For Intelligent Design - video http://www.metacafe.com/watch/4104651/ In fact, DrBot and MathGrrl, both of you individually, in your short posts, have just greatly exceeded the information capacity of the entire universe from what we can expect over the entire history of the universe! Then why in blue blazes is intelligence excluded as a rational explanation? Shoot, you guys as 'naturalists/materialists' cannot even prove that the universe itself, at its most foundational level, is 'natural':

Alain Aspect and Anton Zeilinger by Richard Conn Henry - Physics Professor - Johns Hopkins University. Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the "illusion" of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one's own mind is sure to exist). (Dr. Henry's referenced experiment and paper - “An experimental test of non-local realism” by S. Gröblacher et al., Nature 446, 871, April 2007 - “To be or not to be local” by Alain Aspect, Nature 446, 866, April 2007. Personally I feel the word "illusion" was a bit too strong from Dr. Henry to describe material reality, and would myself have opted for something a little more subtle, like "material reality is a 'secondary reality' that is dependent on the primary reality of God's mind" to exist. Then again, I'm not a professor of physics at a major university, as Dr. Henry is.) http://henry.pha.jhu.edu/aspect.html

Professor Henry's bluntness on the implications of quantum mechanics continues here: Quantum Enigma: Physics Encounters Consciousness - Richard Conn Henry - Professor of Physics - Johns Hopkins University. Excerpt: It is more than 80 years since the discovery of quantum mechanics gave us the most fundamental insight ever into our nature: the overturning of the Copernican Revolution, and the restoration of us human beings to centrality in the Universe. And yet, have you ever before read a sentence having meaning similar to that of my preceding sentence? Likely you have not, and the reason you have not is, in my opinion, that physicists are in a state of denial… https://uncommondescent.com/intelligent-design/the-quantum-enigma-of-consciousness-and-the-identity-of-the-designer/

As Professor Henry pointed out, it has been known since the discovery of quantum mechanics itself, early last century, that the universe is indeed 'Mental', as is illustrated by this quote from Max Planck: "As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter." Max Planck - The Father Of Quantum Mechanics - Das Wesen der Materie [The Nature of Matter], speech at Florence, Italy (1944). (Of note: Max Planck was a devout Christian, which is not surprising when you realize practically every, if not every, founder of each major branch of modern science also 'just so happened' to have a deep Christian connection.) http://en.wikiquote.org/wiki/Max_Planck
bornagain77
February 10, 2011 at 03:54 AM PDT
The Zener diode noise writing a functional text test would show, in a very convenient way, that undirected forces of chance and necessity, on the gamut of the cosmos, are capable of originating FSCI.

Just to make the point again: if your random number generator can't produce specific patterns that we actually observe being generated by natural forces, then why is it a good test of a hypothesis about natural forces generating other specific patterns that we observe? By the same argument you use, we could say that sedimentary rocks aren't the result of natural forces, because a random number generator doesn't produce that kind of ordered output.
DrBot
February 10, 2011 at 03:35 AM PDT
You need to look at it from a more realistic chemical perspective. We are not talking about individual bits coming together in the right order, but rather groups of bits (molecules) coming together to form a functional structure. If the 1,000-bit description is proper and complete, then it describes how each atom is configured and connected. Natural processes produce complex molecules, so rather than having a random number generator spit out random bits, it would, at the very least, be more realistic for it to spit out structured groups of bits (corresponding to molecules that occur naturally). But even that wouldn't capture chemistry in action: spitting out random molecular bit descriptions wouldn't lead to larger structures, such as molecular chains, being generated. Natural chemical interactions are far from random, so testing a hypothesis with a pure random number generator has no relevance to real chemical systems; it is a red herring.

The Zener diode noise writing a functional text test would show, in a very convenient way, that undirected forces of chance and necessity, on the gamut of the cosmos, are capable of originating FSCI.

Incorrect: all it would show is that a pure random noise generator generates random noise.
DrBot
February 10, 2011 at 03:28 AM PDT
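DrBot's suggestion is easy to mock up. Here is a sketch contrasting bit-level noise with noise over structured tokens; the four "molecule" tokens are invented purely for illustration:

```python
import random

rng = random.Random(1)

# Stand-in "molecules": multi-bit tokens representing naturally occurring
# subunits, per the suggestion above. The tokens themselves are made up.
MOLECULES = ["0110", "1001", "0011", "1100"]

raw = "".join(rng.choice("01") for _ in range(32))             # noise, bit by bit
structured = "".join(rng.choice(MOLECULES) for _ in range(8))  # noise over tokens
print(raw)         # unstructured at every scale
print(structured)  # still random at the token level, but locally structured
```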
MG: The Zener diode noise writing a functional text test would show, in a very convenient way, that undirected forces of chance and necessity, on the gamut of the cosmos, are capable of originating FSCI. That would be a direct refutation of the heart of the design inference, on both the origin of complex specified information and the origin of irreducibly complex systems [as the implication of such is that the quantum of functional complexity needed to meet the criteria C1 - 5 discussed here will require passing the design inference threshold].

Y'see, the natural selection part of the chance variation plus natural selection expression only culls out less functional or non-functional variants. The proposed source of the variation in the end is chance. So, the decisive issue is the implied claim on the power of chance to create novel digital information in genes and associated regulatory elements in the living cell, and before that, to explain the origin of the living cell. Thence the significance of the 500 - 1,000 bit limit as a threshold for the capacity of chance on the gamut of our observable cosmos. As my current post here discusses, that threshold is passed by the time we get to a single new protein of 200 AAs.

And, of course, given the frequency with which GA or evolutionary algorithm type programs are advanced to show the capacity of evolutionary mechanisms, it is a little rich to object to making sure the random information to be added to such an algorithm is genuinely random. That is what a Zener driving a PRBS type shift register would do. (Such Zener circuits are now routinely used as genuine random number generators.) The red herring --> strawman subject-changing rebuttal (NB: probably inadvertent) fails. GEM of TKI
kairosfocus
February 10, 2011 at 03:22 AM PDT
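For concreteness, this is what a PRBS shift register of the kind kairosfocus mentions looks like: a minimal sketch of a 16-bit Fibonacci LFSR with a fixed seed. In the hardware arrangement he describes, the Zener noise source is what supplies the genuine randomness; the fixed seed here is only for illustration:

```python
def lfsr16(state):
    """16-bit Fibonacci LFSR with taps at bits 16, 14, 13 and 11 (a
    maximal-length configuration); yields one pseudo-random bit per step."""
    while True:
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state & 1

gen = lfsr16(0xACE1)  # seed; hardware would derive this from diode noise
print([next(gen) for _ in range(16)])
```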
MathGrrl is right, but the point applies even more generally. Natural processes (which we actually observe) produce structure; take, for example, the way sediments deposit in layers to produce sedimentary rock. A random generator like the one KF described would not produce this kind of organised pattern. If the random noise generator can't even produce patterns that we know occur as a result of the forces of nature, then it is unreasonable to use it as an experimental apparatus to test the idea that FSCI can arise from natural forces and laws. The model doesn't reflect reality, so it's not a good basis for an experiment.
DrBot
February 10, 2011 at 03:02 AM PDT
kairosfocus writes:
A simple way would be to set up a million PCs with floppy drives and use zener noise sources to spew noise across them. Then, every 30 s or so, check for a readable file that has 1,000 or more bits of functional information.
That isn't at all analogous to the mechanisms identified by modern evolutionary theory, so it wouldn't prove much of anything. This is an excellent example of why it is essential to clearly state one's hypothesis and make testable predictions entailed by that hypothesis that would serve to refute it if they fail. Doing this for ID, and documenting it in a FAQ, would eliminate the "ID is not science" claim immediately. I would think that a number of people here would be interested in doing that work.
MathGrrl
February 10, 2011 at 12:57 AM PDT
MF, MG et al (and onlookers): At 7 above, I showed a specific and direct way in which the GENERAL claim of design theory could in principle be falsified, quite similarly to how the 2nd law of thermodynamics could also be falsified. (Indeed, such a falsification of the design inference would be a big step to falsifying the 2nd law of thermodynamics.) BA 77 has also provided several more specific cases of potential falsification. Inability to be falsified is not a problem for the design inference, save insofar as it is a convenient strawman tactic used to try to discredit design theory. The insistent putting up of such strawman-based talking points while ignoring correction does not say much about the strength of the evolutionary materialist case on its merits. GEM of TKI
kairosfocus
February 10, 2011 at 12:48 AM PDT
...what is falsifiable (and has been falsified in some cases) is **specific** claims about how biological diversity arose.

Is the claim that complex life arose by random mutation (and natural selection) falsifiable? Is the claim that life started by a "random event" falsifiable? In theory, "randomness" can produce anything. Therefore, a "randomness theory" will never be falsifiable. But will it still be scientific?
Dala
February 10, 2011 at 12:36 AM PDT
#11 #12 Meleagar: "the claim that non-intelligent, non-teleological forces/processes are sufficient to explain biological diversity is necessarily unfalsifiable as well" Absolutely. However, what is falsifiable (and has been falsified in some cases) is **specific** claims about how biological diversity arose.
markf
February 9, 2011 at 11:02 PM PDT
Mathgrrl: It would be hard to find a more valid dichotomy than X and not-X, as I explained in #3.
Meleagar
February 9, 2011 at 05:05 PM PDT
F/N: On the explanatory filter, cf. here.
kairosfocus
February 9, 2011 at 04:47 PM PDT
MathGrrl, in addition to kf's post, please tell me exactly where Abel's null hypothesis fails to provide an exact point of falsification/verification for both ID and neo-Darwinism:

The Law of Physicodynamic Insufficiency - Dr David L. Abel - November 2010. Excerpt: "If decision-node programming selections are made randomly or by law rather than with purposeful intent, no non-trivial (sophisticated) function will spontaneously arise." After ten years of continual republication of the null hypothesis with appeals for falsification, no falsification has been provided. The time has come to extend this null hypothesis into a formal scientific prediction: "No non-trivial algorithmic/computational utility will ever arise from chance and/or necessity alone." http://www.scitopics.com/The_Law_of_Physicodynamic_Insufficiency.html

The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009. To focus the scientific community's attention on its own tendencies toward overzealous metaphysical imagination bordering on "wish-fulfillment," we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non-trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf

Can we falsify any of the following null hypotheses (for information generation)? 1) Mathematical Logic 2) Algorithmic Optimization 3) Cybernetic Programming 4) Computational Halting 5) Integrated Circuits 6) Organization (e.g. homeostatic optimization far from equilibrium) 7) Material Symbol Systems (e.g. genetics) 8) Any goal-oriented bona fide system 9) Language 10) Formal function of any kind 11) Utilitarian work http://mdpi.com/1422-0067/10/1/247/ag

MathGrrl, I would expect the demonstrated generation of ANY functional prescriptive information whatsoever to be an exceedingly modest threshold for neo-Darwinism to prove to be true, as well as an extremely modest threshold to prove ID false, especially seeing as the integrated prescriptive information found in life far surpasses what our best programmers can do in computers! Not to mention the fact that most neo-Darwinian college professors mercilessly ridicule anyone who dares question the almighty power of random mutations filtered by natural selection to generate such unmatched levels of integrated complexity found within all life on earth. Perhaps, MathGrrl, you would like to be the first to falsify Abel's null hypothesis?
bornagain77
February 9, 2011 at 04:47 PM PDT
Longer version: So much confusion over such a simple concept: determining design in a natural world. This is all about answering one of science's three basic questions: "How did it come to be this way?" Intelligent Design is based on three premises and the inference that follows (DeWolf et al., Darwinism, Design and Public Education, pg. 92):
1) High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
2) Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
3) Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
4) Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems.
The criterion for inferring design in biology is, as Michael J. Behe, Professor of Biochemistry at Lehigh University, puts it in his book Darwin's Black Box:
"Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.”
He goes on to say:
” Might there be some as-yet-undiscovered natural process that would explain biochemical complexity? No one would be foolish enough to categorically deny the possibility. Nonetheless, we can say that if there is such a process, no one has a clue how it would work. Further, it would go against all human experience, like postulating that a natural process might explain computers.”
That said, we have the explanatory filter to help us determine the cause of the effect we are investigating. On to the explanatory filter: the (design) explanatory filter is a standard operating procedure used for detecting basic origins of cause. It, or some reasonable facsimile, is used when a dead body turns up or a fire is reported. With the dead body we want to determine if it was a natural death, an accident, a suicide or a homicide (what caused the death?), and with the fire, the investigator wants to know how it started: arson, negligence, accident or natural causes, i.e. lightning, lava, meteorite, etc. Only through investigation can those not present hope to know about it.

When investigating/researching/studying an object/event/structure, we need to know one of three things in order to determine how it happened: 1. Did it have to happen? 2. Did it happen by accident? 3. Did an intelligent agent cause it to happen? A fire is investigated before an arson is. First we must make this clarification by Wm. Dembski:
”When the Explanatory Filter fails to detect design in a thing, can we be sure no intelligent cause underlies it? The answer to this question is No. For determining that something is not designed, the Explanatory Filter is not a reliable criterion. False negatives are a problem for the Explanatory Filter. This problem of false negatives, however, is endemic to detecting intelligent causes. One difficulty is that intelligent causes can mimic law and chance, thereby rendering their actions indistinguishable from these unintelligent causes. It takes an intelligent cause to know an intelligent cause, but if we don't know enough, we'll miss it.”
This is why further investigation is always a good thing. Initial inferences can either be confirmed or falsified by further research. Intelligent causes always entail intent. Natural causes never do. (Page 13 of No Free Lunch shows the EF flowchart. It can also be found on page 37 of The Design Inference, page 182 of Signs of Intelligence: Understanding Intelligent Design, and page 88 of The Design Revolution.) The flowchart for the EF is set up so that there are 3 decision nodes, each node capable only of a Yes or No decision. As are all filters, it is eliminative. It eliminates via consideration/examination:

START
CONTINGENCY? -- No --> Necessity (regularity/law)
| Yes
COMPLEXITY? -- No --> Chance
| Yes
SPECIFICATION? -- No --> Chance
| Yes
DESIGN

The event/object/phenomenon in question is what we start with. Then we ask, in sequence, those 3 questions from above. First: did this event/phenomenon/object have to happen? IOW, is this the result of the laws of nature, regularity, or some other pre-determining (natural) factors? If it is, then we don't infer design with what we have. If it isn't, then we ask about the likelihood of it coming about by some chance/coincidence. Chance events do happen all the time, and absent some blatant design marker, we must take into account the number of factors required to bring it about. The more factors, the more complex it is. The more parts involved, the more complex it is. Getting to the final decision node, where we separate that which is merely complex from intentional design (an event/object that has a small probability of occurring by chance and fits a specified pattern), means we have looked into the possibility of X having occurred by other means.

May we have dismissed/eliminated some too soon? In the realm of "anything is possible", possibly. However, not only is it impractical to attempt every possible cause, but by doing so we would no longer have a design inference. By eliminating every possible other cause, design would be a given. What we are looking for is a reasonable inference, not proof. IOW, we only have to eliminate every possible scenario if we want absolute proof. We already understand that people who ask that of the EF are not interested in science. It took our current understanding in order to make it to that, the final decision node, and it takes our current understanding to make the inference. Future knowledge will either confirm or falsify the inference. The research does not, and was never meant to, stop at the last node. Just knowing something was the result of intentional design offers no more about it. IOW, design detection is the first step in the two-step process: detection and understanding of the design. Just because the answer is 42*, that doesn't tell us what was on the left-hand side of the equals sign.
"Thus, Behe concludes on the basis of our knowledge of present cause-and-effect relationships (in accord with the standard uniformitarian method employed in the historical sciences) that the molecular machines and complex systems we observe in cells can be best explained as the result of an intelligent cause. In brief, molecular motors appear designed because they were designed” Pg. 72 of Darwinism, Design and Public Education
IOW, the design inference is all about our knowledge of cause-and-effect relationships. We do not infer that every death is a homicide, nor every rock an artifact. Parsimony: no need to add entities, and the design inference is all about requirements, as in, is agency involvement required or not? Therefore, to refute any given design inference all one has to do is demonstrate that nature, operating freely, can produce it. (*Hitchhiker's Guide to the Galaxy reference)
Joseph
February 9, 2011 at 04:41 PM PDT
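Joseph's three-node flowchart translates directly into a decision procedure. A minimal sketch, in which the three boolean inputs stand in for the investigator's judgments (the filter itself does not formalize how those judgments are made):

```python
def explanatory_filter(contingent, complex_, specified):
    """Sequential elimination as in the flowchart above: necessity first,
    then chance, with design inferred only if all three questions pass."""
    if not contingent:
        return "necessity (regularity/law)"
    if not complex_:
        return "chance"
    if not specified:
        return "chance"
    return "design"

# False negatives are possible by construction, as the Dembski quote above
# notes: a designed object judged non-complex or non-specified is classified
# as chance or necessity.
print(explanatory_filter(True, True, True))   # design
print(explanatory_filter(True, True, False))  # chance
```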
How to falsify Intelligent Design: Intelligent Design is the claim that some designing intelligence is REQUIRED to explain some effects observed in the universe (and the universe itself). Therefore, to falsify any given design inference, all one has to do is demonstrate that nature, operating freely, can produce the effect in question. This is proven by the explanatory filter, which mandates that chance and necessity be given first shot at demonstrating their powers. If chance and/or necessity can explain it, then the design node is never even considered.
Joseph
February 9, 2011 at 04:39 PM PDT
MG: Please, look at 7 above. GEM of TKI
kairosfocus
February 9, 2011 at 04:28 PM PDT
Meleagar writes:
If ID is unfalsifiable, then the claim that non-intelligent, non-teleological forces/processes are sufficient to explain biological diversity is necessarily unfalsifiable as well, because both claims would necessarily be determined by the same metric.

That is a false dichotomy. Falsifiability has a very precise meaning and mechanism. If you want to demonstrate that ID is falsifiable, you need to clearly state the ID hypothesis and make one or more testable predictions based on entailments of that hypothesis. Falsifying the modern synthesis, while it might get you a trip to Stockholm, would do nothing to support ID. Can you state the ID hypothesis and one or more testable predictions it entails?
MathGrrl
February 9, 2011 at 04:22 PM PDT
It should be noted that the same holds true for any claim of ID: it would require a falsifying metric - which it does. It is interesting that mainstream evolutionary biologists have claimed that RM & NS are sufficient, and that this is a valid scientific assertion of fact, while also claiming that there is no valid scientific ID metric, because the former can only be validated by the latter.
Meleagar
February 9, 2011 at 04:10 PM PDT
MarkF, if ID is unfalsifiable, then the claim that non-intelligent, non-teleological forces/processes are sufficient to explain biological diversity is necessarily unfalsifiable as well, because both claims would necessarily be determined by the same metric.
Meleagar
February 9, 2011 at 04:00 PM PDT
OT: kairos, this 'short' video may interest you very much: Quantum Information In DNA & Protein Folding - video http://www.metacafe.com/watch/5936605/
bornagain77
February 9, 2011 at 02:09 PM PDT