
A Practical Medical Application of ID Theory (or, Darwinism as a Science-Stopper)


In a previous UD thread, a dude named Poachy (where do these guys get these screen names?), with much sarcasm about a comment I made, proposed:

We need to start voting with our feet and eschew all but the medical advances that come from application of the ID paradigm.

Here’s a prediction and a potential medical application from ID theory: Design a chemical or protein which would require a triple CCC to defeat its toxic effects on a bacterium, and it will exhaust the probabilistic resources of blind-watchmaker mechanisms to counteract the toxic effects.
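For readers who want to see why a triple CCC would "exhaust the probabilistic resources," here is a minimal back-of-the-envelope sketch. It assumes Behe's figure of roughly 1 in 10^20 organisms for a single CCC and a commonly cited order-of-magnitude estimate of about 10^30 bacterial cells alive on Earth; both inputs are rough, and the generation count is deliberately generous, so treat this as an illustration of the arithmetic rather than a measurement.

```python
# Rough sketch of the "triple CCC" arithmetic (illustrative assumptions only).

single_ccc = 1e-20            # assumed odds of one CCC arising, per organism (Behe's estimate)
triple_ccc = single_ccc ** 3  # three independent CCCs required together: 1e-60

bacteria_alive = 1e30         # assumed global bacterial census, order of magnitude
generations_per_year = 1e3    # generously fast turnover
years_available = 1e10        # longer than the age of the universe

trials = bacteria_alive * generations_per_year * years_available  # ~1e43 organism-generations
expected_events = trials * triple_ccc                             # ~1e-17

print(f"Expected triple-CCC events under these assumptions: {expected_events:.1e}")
```

On these assumptions the expected number of successes works out to about 10^-17, which is the sense in which the post claims blind-watchmaker resources would be exhausted; change the assumed inputs and the conclusion scales accordingly.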

Such a success can and will come only from engineering and reverse-engineering efforts, not from Darwinian theory.

In the meantime, medical doctors should prescribe multiple antibiotics for all infections, since this will decrease the likelihood that infectious agents can develop resistance through stochastic processes. Had the nature of the limits of Darwinian processes been understood at the outset, the medical community would not have replaced one antibiotic with another in a serial fashion, but would have prescribed them in parallel.
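The reasoning behind prescribing antibiotics in parallel can be made concrete with a simple calculation. The sketch below assumes a spontaneous resistance mutation rate on the order of one in a billion cell divisions per drug and statistical independence between resistance mechanisms for different drugs; these are textbook simplifications (the same logic underlies multi-drug tuberculosis therapy), not dosing guidance.

```python
# Why simultaneous resistance to several independent antibiotics is rare.
# Numbers are illustrative assumptions, not clinical data.

mutation_rate = 1e-9   # assumed resistance mutation rate per cell division, per drug
population = 1e12      # assumed bacterial load in a serious infection

for n_drugs in (1, 2, 3):
    p_multi_resistant = mutation_rate ** n_drugs        # independence assumption
    expected_mutants = population * p_multi_resistant
    print(f"{n_drugs} drug(s): expected pre-existing resistant cells ~ {expected_mutants:.0e}")
```

With these inputs a single drug leaves on the order of a thousand pre-existing resistant cells in the infection, two independent drugs leave essentially none, and three leave a vanishingly small expectation, which is why combinations are harder for stochastic processes to defeat.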

This represents yet another catastrophic failure of Darwinian presumption, which is based on hopelessly out-of-date 19th century scientific naïveté.

Comments
dacook #87: "That's interesting stuff you've found. I'll have to look at it in more detail, especially about the Cephalosporins. It will be a challenge to test those against MRSA because they are notorious for looking like they're killing the bacteria in vitro but then not working in vivo. My patient with the presumed MRSA is improving on Vanco and Levaquin." Please let me know if you find anything interesting; I would really like to know if this line of research is panning out. As well, I am glad your patient is responding well to treatment. bornagain77
J, As you know, the evidence for cichlids is just starting to come in, so we have much left to learn. Yet we already know several things are likely. The older lakes will have several sub-species of cichlids that have lost the functionality of their unused cones as they became more "reproductively isolated" through time. This will be due to the principle of Genetic Entropy, much as cave-dwelling spiders, scorpions, etc. have lost their ability to see. We already know that the older lakes' cichlids have more genetic diversity than the younger lakes' cichlids. We also know the younger lakes' cichlids radiated from a more "ancient" lineage of cichlids. This all conforms to ID/Genetic Entropy. Thus we can predict that the greater genetic diversity is due to the deleterious mutational load having built up in the older lakes' sub-species at a substantial (and constant?) rate. We can do repeatable experiments that come directly from the ID/Genetic Entropy model: using consistent environmental cues, the parent species will have a greater propensity for consistent adaptive radiation, thus ruling out any "random" variation producing the consistent adaptive radiation and validating the Front-Loaded portion of ID. Intense genetic study of cichlids may even identify useful genetic markers that tell us when a genome is vibrant and when it has been subject to Genetic Entropy. bornagain77
The cichlid "speciation" data clearly implies a few things: - As admitted in the Sci Am article, it's obviously not due to the typical Darwinian random variation + natural selection scheme. - It doesn't qualify as true speciation as everyone understands it -- it's simply variation. (The genetic variability in hundreds of cichlid "species" is less than that of the (single) human species.) - The variability appears to be due to activation of genetic "programs" that are already present in the genomes of the cichlids. (Apparently, the cichlids have a branching genome, with heritable switches that can be set to steer development along certain genetic pathways, depending upon the environment.) I agree that cichlids epitomize the concept of front loading. The question (beyond the scope of ID theory, per se) is how the front loaded information got into them. Were the first front-loaded cichlid gametes fabricated by an intelligent agent and set loose into the lakes of Gondwanaland? Or did the information in their genomes evolve over time? There is, as you know, no evidence that Darwinian evolution is feasible. (There has never been a demonstration that nonteleological evolution results in any kind of functioning, so it's silly for anyone to propose that as an explanation of how the cichlids came to "feed on other fish or on eggs and larvae, to nip off fins, scrape algae, tear off scales, crush mollusks or any of myriad other functions.") But teleological evolution -- in a system designed for evolution, and with a driving purpose built in -- has been shown viable by (intelligently designed) evolutionary computation. If the information did evolve (teleologically), then the next question is: How did numerous such "programs" wind up in a single genome? Logically, it would seem that there are only the following possibilities: - The programs evolved independently, in different lineages, and were brought together through recombination(?), without obliterating the (temporarily) unused programs -- somehow. - The pathways evolved serially in one lineage, without obliterating the previously evolved programs -- somehow. - A combination of these two occurred. j
I'm really beginning to think these cichlids should become the poster child for the front-loaded portion of the ID theory: Multiple Genes Permit Closely Related Fish Species To Mix And Match Their Color Vision http://www.sciencedaily.com/releases/2005/10/051011072648.htm of special note: In the new work, the researchers performed physiological and molecular genetic analyses of color vision in cichlid fish from Lake Malawi and demonstrated that differences in color vision between closely related species arise from individual species' using different subsets of distinct visual pigments. The scientists showed that although an unexpectedly large group of these visual pigments are available to all the species, each expresses the pigments selectively, and in an individual way, resulting in differences in how the visual world is sensed. The researchers identified a total of seven "cone" (color-sensing) visual pigments underlying color vision in these cichlids. They have measured the sensitivities of the cones to different wavelengths of light and isolated the seven genes that give rise to the pigment proteins. The seven cone types have maximum sensitivities ranging from the red end of the spectrum right through to the ultraviolet--light outside the range of human sensitivity. The researchers showed that in order to tune its color vision, each cichlid species primarily expresses three of the seven cone pigment genes encoded by their genomes. It is not clear why such closely related cichlid species have evolved such different visual sensitivities. I don't know what they think, but this example is screaming front loading to me. bornagain77
bornagain77: That's interesting stuff you've found. I'll have to look at it in more detail, especially about the Cephalosporins. It will be a challenge to test those against MRSA because they are notorious for looking like they're killing the bacteria in vitro but then not working in vivo. My patient with the presumed MRSA is improving on Vanco and Levaquin. dacook
This is interesting, Janice and dacook: Princeton scientists break cholera's lines of communication http://www.eurekalert.org/pub_releases/2007-11/pu-psb111407.php Breakthrough research suggests a way to fight bacteria without Antibiotics http://isuman.blogspot.com/2005/11/breakthrough-research-suggests-way-to.html I wonder if this technique could be used as a "stand alone" method for fighting all types of bacterial infections, or if it could at least be used in conjunction with current treatments to make them more effective in fighting infection. bornagain77
Janice, you stated: "The clavulanic acid binds irreversibly to the beta-lactamase enzymes that inactivate other penicillin-derived antibiotics and this allows the amoxycillin to do its job." I have to say I am really impressed with this. This is really taking the fight to the bug! Do you know of any sites that give a little background on how the antibiotic was developed and who was responsible for developing such a wonderful and specific response to bacterial resistance? bornagain77
George DW @50, re sickle cell anaemia, you wrote:
It seems to me rather more indicative of evolution, just like the adaptations in bacteria that are the focus of Gil’s post: an adaptation beneficial in limited situations that comes at an expense of function in a wider environment.
I would say indicative of devolution, or loss of information, rather than adaptation. People with sickle cell anaemia have abnormal haemoglobin (HbS) that sticks together under conditions of low oxygen concentration or dehydration. This deforms the red cell and makes its surface rigid. Apart from making the cell useless for oxygen transport, it also makes it hard for cells to pass through fine capillaries, so they get blocked and the tissue downstream dies. The disease is a problem for malaria parasites because they can't colonise sickled red cells. It is a worse problem for homozygous carriers of the mutation, who are sick, off and on, for all of their short lives. See the Wikipedia article. Others, re design of antibiotics, I'm rather impressed with the design of the drug known as "Augmentin". It's a combination of clavulanic acid and amoxycillin. The clavulanic acid binds irreversibly to the beta-lactamase enzymes that inactivate other penicillin-derived antibiotics and this allows the amoxycillin to do its job. Also I don't think we should be surprised that something as fundamental to bacterial survival as their cell walls would be protected by (I believe, designed) mechanisms of some sort. How we use antibiotics comes down to stewardship, with the problem being that we, being greedy, are quite bad at that. Janice
http://www.pandasthumb.org/archives/2005/12/kudos-to-the-nc.html I still don't get why evolutionists and/or committed naturalists insist on using such dishonest tactics to persuade others to believe their message. Why? Why is their first instinct to resort to dishonesty? Why can't they just be honest? You see, the honest thing to do would be to allow students to be exposed to opposing sides of the issue. Instead, they resort to dishonestly taking people out of context. This is supposed to be scientific? Does science endorse this kind of dishonesty? It sure seems like many naturalists do. Bettawrekonize
ellazimm
Nochange and Gil: Participants of this blog are quite right to ask critics to support their statements in the face of criticism so I think Gil should defend himself against the Panda’s Thumb critics. Let’s keep it about the science acknowledging that sometimes we can get it wrong if need be.
I think that Intelligent Design proponents and Creationists should have an equal opportunity to defend themselves in tax-funded public schools. ID proponents and creationists are quite right to ask evolutionists and committed naturalists to support their statements in public schools by exposing students to opposing sides of the issue, instead of using lies, unscientifically brainwashing them with one side, and censoring all others using stolen tax dollars. ID proponents and creationists are quite right when they demand that students should be allowed to see the pros and cons of opposing sides and choose for themselves what to believe.
As for Behe, I think that maybe what forced him to say that ID is like astrology was that he probably tried to present ID as a complete explanation for the origin of species rather than as just a criticism of evolution theory (this time it is OK for me to say “I think,” Lenny, because I don’t know). I don’t know — I have not read a transcript of his testimony.
http://www.pandasthumb.org/archives/2005/12/kudos-to-the-nc.html When the people at the Panda's Thumb stop resorting to straw men and dishonestly taking people out of context, then we'll talk. Bettawrekonize
I asked a physician and pathologist about this. Multiple "shotgun doses" of antibiotics are not economically sound or chemically efficient. They are not efficient because the interactions of using several at once can be dangerous to some people, and they are not likely to do what you want in the first place: even without dangerous mixes, one antibiotic can take over the task at hand and leave the others impotent. Drug interaction is a nasty risk. Just ask your local pharmacist too. S Wakefield Tolbert
J, In this following study they (the evolutionists) puzzle over several lines of evidence. Evidence which happens to fit the ID/Genetic Entropy model very well. As well, the environmental cues of the ID model provide the missing primary driving mechanism at the genetic level for the rapid speciation that is grossly and insufficiently explained by the natural selection of neo-Darwinism. http://golab.unl.edu/publications/Farias1999/Farias_JME.html Of special note: Neotropical (South American) cichlids might have experienced lower rates of extinction and speciation than their African counterparts, preserving primitive characteristics and thereby accumulating higher levels of genetic divergence in some of these lineages. Also of note: However, this rapid speciation rate is not correlated with high genetic divergence, since African cichlids are known to exhibit an overall low amount of genetic variation (e.g., Meyer et al. 1990). Recently, Zardoya et al. (1996) found twice as much variation at a nuclear marker (TmoM27) among South American than African cichlids. Our path length analysis and relative rate tests confirm this finding based on nuclear DNA that, although considerably less speciose than their African counterparts, the Neotropical cichlids are extremely more variable at the molecular level. The rate of nucleotide divergence at the 16S mitochondrial fragment in Neotropical cichlids was significantly higher than in the African lineages (Fig. 4). However, the rate acceleration was only significantly higher among geophagines. Hmm, yet again no explanation from neo-Darwinism, indeed not even a viable mechanism, but it fits easily into our nascent model. bornagain77
"If my UD posts were as scientifically and logically vacuous as the Pandas claim, no response would be required, but they seem to have a passion for devoting both tomes and insults to refuting my arguments." While I agree with the substance of your original post (I think we should put antibiotics in the water -- let's defeat the germs *before* they infect us), I don't agree with the statement you wrote above. By that logic, we could just ignore everything the Darwinists say. If logic is wrong, we should point it out to people. That said, I'm pretty sure the Panda people (where are they? are they science teachers?) would be wrong to criticize you. As you say, you're a scientist. What the hell do they know? They're probably just high school science teachers (and we all know how much an education degree is worth). Nochange
Correction dacook, the last sentence should read: ", a set of events that leads to effective inhibition of PBP 2a and the attendant killing of the MRSA strains." bornagain77
dacook, you ask, "What would really be nice is if less toxic antibiotics like the Cephalosporins could also be modified to kill MRSA." Looks like someone has already thought the same thing, dacook: Action of New Cephalosporin Antibiotics Effective Against MRSA & VRSA http://tahilla.typepad.com/mrsawatch/vrsa_the_coming_threat/index.html of special note: Three cephalosporins (compounds 1–3) have been studied herein, which show antibacterial activities against MRSA, including the clinically important vancomycin-resistant strains....These cephalosporins exhibit substantially smaller dissociation constants for the preacylation complex compared with the case of typical cephalosporins, but their pseudo-second-order rate constants for encounter with PBP 2a (k2/Ks) are not very large (~200 M^-1 s^-1). It is documented herein that these cephalosporins facilitate a conformational change in PBP 2a, a process that is enhanced in the presence of a synthetic surrogate for cell wall, resulting in increases in the k2/Ks parameter and in more facile enzyme inhibition. These findings argue that the novel cephalosporins are able to co-opt interactions between PBP 2a and the cell wall in gaining access to the active site in the inhibition process, a set of events that leads to effective inhibition of PBP 2a and the attendant killing of the MRSA strains. Do you think this may be of help for you, dacook? bornagain77
Just for the entertainment value, I checked out the Panda response to my post. I seem to have a penchant for arousing their ire and vitriol, and I can explain why. If my UD posts were as scientifically and logically vacuous as the Pandas claim, no response would be required, but they seem to have a passion for devoting both tomes and insults to refuting my arguments. I am perpetually labeled as being completely ignorant about science (even though empirically verifiable science is what I do every day for a living). Yet, whenever I post an incisive comment about the empirical, observational, mathematical, or computational problems with blind-watchmaker Darwinism, the Pandas go into a feeding frenzy. I've struck a nerve. The personal insults and claims that I know nothing about science are very telling. GilDodgen
Thanks a lot, J; That study goes well with this study: 2006 The Royal Society African cichlid fish: a system in adaptive radiation research http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1635482 It seems that when these two studies are stripped of their evolutionary garbage, they will be a key piece of proof that will verify the ID/Genetic Entropy model, maybe even enough proof to show that environmental cues are providing the primary mechanism for the rapid adaptive radiation from a parent species. As well, the rate of accumulated mutations (Genetic Entropy), if nailed down, could explain the much older lakes' much larger divergence of genetic diversity, as well as explaining the lack of ability of the "older" species in the older lakes to radiate. (Note: the genetic diversity of the younger lakes' cichlids is shown to be much less than that of the older lakes' cichlids.) It may now even be possible to prove when the ID was implemented, whether at one time or at separate times for the cichlids, and at what approximate time/times. It looks like it will be a bit of a project to get all the data straight from all the studies listed. bornagain77
bornagain77 (62): I will have to look deeper into this, and make sure I have all my bases covered... From "Cichlids of the Rift Lakes" by Melanie L. J. Stiassny and Axel Meyer, Scientific American, February 1999:
The exceptional diversity of the family Cichlidae has elevated it to the status of an icon in textbooks of evolutionary biology. Cichlids are spiny-rayed freshwater fishes that come in a vast assortment of colors, forms and habits [around the world]... ... Until very recently, biologists did not know how these hundreds of cichlid species were related. Modern molecular techniques have now answered some of these questions and raised many others. Although the genetic research confirms several early hypotheses based on anatomy, it sometimes conflicts spectacularly with entrenched ideas. As initially suggested by Mutsumi Nishida of Fukui Prefectural University, early lineages of cichlids from West Africa first colonized Lake Tanganyika. The cichlids of this ancient lake are genetically diverse, corresponding to 11 lineages (that is, deriving from 11 ancestral species). Much later some of these fishes left the lake's confines and invaded East African river systems, through which they reached Lakes Victoria and Malawi. Studies of the genetic material called mitochondrial DNA conducted by one of us (Meyer) and our colleagues show that the cichlids in Lake Victoria are genetically very close to one another -- far closer than to morphologically similar cichlids in the other two lakes... This scenario implies that almost identical evolutionary adaptations can and did evolve many times independently of one another. Cichlids with singular anatomical features -- designed to feed on other fish or on eggs and larvae, to nip off fins, scrape algae, tear off scales, crush mollusks or any of myriad other functions -- occur in all three lakes. To some of us biologists, such features had seemed so unique and so unlikely to evolve more than once that we had held that fishes with the same specializations should be closely related. If that were so, the predilection to scrape algae (for instance) would have evolved only once, its practitioners having later dispersed. But algae scrapers in Lake Victoria and Lake Malawi have evolved independently of those in Lake Tanganyika, from an ancestor with more generalized capabilities. The genetic studies thus show that evolution repeatedly discovers the same solutions to the same ecological challenges. It also appears that morphological characteristics can evolve at an incredibly uneven pace, sometimes completely out of step with genetic changes. ...[T]he cichlids of Lake Victoria -- with their diversity in size, pattern, and shape -- evolved in an extremely short time span. Amazingly, the lake's more than 400 species contain less genetic variation than the single species Homo sapiens. Molecular clocks...suggest that the entire assemblage of Lake Victoria cichlids arose within the past 200,000 years. Recent paleoclimatological data from Thomas C. Johnson of the University of Minnesota and his colleagues point to an even more restricted window for the origin of the Victoria cichlid flock: the lake seems to have dried out almost completely less than 14,000 years ago. No more than a small fraction of the individual cichlids, let alone species, could have survived such an ordeal.
j
Thanks dacook, I looked up Vancomycin and it appears scientists have been tweaking it, trying to make it resistance-proof. http://pubs.acs.org/cen/news/84/i07/8407notw8.html Vancomycin Redesigned of special note: Vancomycin works as an antibiotic by binding to a peptidoglycan that is essential to the biosynthesis of bacterial cell walls. The most common form of resistance is due to a modification that changes a peptidoglycan amino acid from d-alanine to d-lactate. This mutation greatly reduces the affinity of vancomycin for the peptidoglycan, rendering it ineffective. Crowley and Boger compensated with a comparably simple change to vancomycin. They synthesized an analog in which a carbonyl group positioned deep inside the vancomycin molecule has been converted to a methylene. This replacement enhances the compound's binding affinity for the modified peptidoglycan in vancomycin-resistant bacteria yet preserves a substantial amount of the antibiotic's affinity for the normal peptidoglycan in vancomycin-sensitive bacteria. The analog is thus 100-fold more active than vancomycin against vancomycin-resistant bacteria but only 3% as active against vancomycin-sensitive bacteria. This is really exciting, dacook, in that they targeted the actual molecule that had mutated and compensated the binding for the mutation that prevented interaction with vancomycin. This is really taking the battle to the bug I would say! bornagain77
MRSA is obnoxious because it is resistant not only to Methicillin and its cousins (Nafcillin, Oxacillin, etc.) but to the whole class of Cephalosporins (Keflex, Ceclor et al.). The standard antibiotic used against MRSA is Vancomycin, which has been around a long time, is potentially quite toxic, has to be given IV (except for GI Clostridia, which is another issue), has to be monitored, and is just generally risky and a pain to use if you don't have to. Even back when I was in medical school and MRSA was rare there was concern that it would acquire Vancomycin resistance from enterococci, which are common in the gut, and commonly resistant to Vanco. There now exist some strains of Staph resistant to Vancomycin. A Staph that learned to be resistant to both Vanco and Methicillin would really be a problem. The antibiotic that's usually used in addition to Vanco is actually a combination of Trimethoprim and Sulfamethoxazole (brand names Bactrim or Septra). This is also an old drug, which has been used for UTIs (urinary tract infections) for decades. One of the reasons to use two Abx is a double attack to make sure the dang bug dies before it learns to be resistant to anything else, especially Vanco. Another is to get it under control quicker than Vanco may be able to do alone, before its toxicity kicks in. dacook
Dear Touchstone (or whoever you were here), I apologize for any lapse of memory I had about the Fat Albert episode titled "The Hospital". I was six years old when the episode in question first aired. I also remember the Brady Bunch show airing an episode around that time on the exact same subject. It was probably a rerun when I first remember seeing it. I remember these things even though I was fairly young at the time because I had chronic throat problems as a child, and my parents resisted contemporary medical advice to have me undergo a tonsillectomy. Being unable to afford the operation was a large contributing factor, I might add. As for your accusation that I blame "the Devil" for this strangely recurring theme in family TV shows from 1971-1972, you can surely do better than stereotyping your opponents as Bible-thumping ignoramuses (though I shouldn't expect you to do so). I don't need to go to supernatural explanations when good old greed and Bernaysian brainwashing will suffice. Also, in the little time I had to research it, I found at least one recent source (anecdotal, sure, but from an M.D.) about the frequency of and reasoning behind tonsillectomies in the 1970s. Another much older medical source (from 1917, when Darwinists were a little less shy and politically correct) states in very bold terms the exact attitude I was alluding to. Before you launch into another stereotypical attack, yes, I support surgery when it is necessary. Even things like arms and legs should be removed if their existence is killing the person attached to them. What I don't support is writing off a body part as worthless and having it removed because of a little discomfort. Especially if it's for profit's sake. angryoldfatman
That is pretty elegant. I hadn't heard of that. I suspect the new Vanco will still be bad for ears and kidneys, however. What would really be nice is if less toxic antibiotics like the Cephalosporins could also be modified to kill MRSA. dacook
Thanks for taking the time to answer, dacook. I thought MRSA was resistant to multiple antibiotics. Is there something special (new) about the two antibiotics he uses to treat MRSA with? bornagain77
ari-freedom (37): "until the organism was identified" - and after it's identified, only one antibiotic was used, right?
Depends on the organism and the clinical situation: if the organism is very susceptible to a low-toxicity antibiotic and the infection is not life- or limb-threatening, one antibiotic is usually enough. If the infection is more serious, often two will still be used even after culture and sensitivities are back. A good example is MRSA, which my ID consultant always treats with two antibiotics.
So dacook, in diagnosing a patient, you are saying it is not always possible to know the exact pathogen you're dealing with?
This is correct. It usually takes at least 2-3 days for a culture to grow something identifiable, often longer. Sometimes you know there's an infection but you can't get a culture. An example is a patient I'm consulting on right now who has pain in the hip and groin, an elevated white cell count and C-Reactive protein, and an MRI showing probable infection in the iliopsoas muscle, but with no abscess fluid to aspirate for culture. Slicing the patient open just to get a piece of muscle to culture is not really practical or justifiable at this point so we're just going to have to go with best guess. There is a history of a urinary tract infection with MRSA a few months ago so that is the most likely organism and so antibiotics will be aimed at that. If the person gets better we will never know what it really was. If not we will repeat the MRI and if there's fluid we'll try to get a needle into it for a culture specimen. dacook
Corey, Here is another quote from the scientists who conducted the study: "The deletion of these elements likely has relatively mild effects on fitness that are gradually selected against over time — several or more generations from when they arise — but not on observable time scales." Thus even the very scientists who conducted the study admit their tests were not extensive. We can speculate all we want, but the fact is that without extensive tests, conducted over several generations, testing for offspring variability as well as for molecular robustness on the same time scale, and establishing with 100% certainty that no fitness at all was lost in the deleted-DNA mice, the test is not substantial enough to establish a solid case for the radical front-loading model and defeat the genetic entropy model. As with bacteria, which show their true colors for genetic entropy when they are forced to change, I believe these mice will show their loss of information (Genetic Entropy) when forced to vary. bornagain77
Thanks for showing that part of the sentence twice, but we can see that he was saying that in this setting they were basically normal mice as far as their tests could tell. If you're interested in more of the implications of this study there was a good piece by Erika Check in Nature in September. Corey
Corey, I'm so sorry I missed your question in post #46: Could you clarify what you mean when you say "bacteria. . . are apparently designed optimally." And then you give this following example to say that bacteria are not designed optimally: For example, let's take the chemistry of the β-lactamase that gives bacteria resistance to penicillins. It isn't a damaged piece of the cellular machinery that stops penicillin from working, but rather it breaks the penicillin molecule down so that it can't function. That is, the bacteria doesn't just throw out (or break) the piece of itself that penicillin is attacking. Using this enzyme it actually detoxifies the antibiotic. This following site has your specific example listed: http://www.answersingenesis.org/docs2002/0408lab_evolution.asp Biologists mimic evolution in the lab? of special note: "This is evolution? The gene that produces 'penicillin killer' enzymes (TEM-1 ß-lactamase) easily mutates into different forms that break down a variety of antibiotic molecules. Hall was able to get eight bacterial cultures in his laboratory to 'develop' resistance to various antibiotics. In seven cases, bacteria in nature already had this resistance." Thus, Corey, it is not a novel molecular ability; it is only the refining of a preexisting molecular ability that was present in the bacteria. And to make matters worse for this "evolved" ability, once the antibiotic is removed from influencing the bacteria population, the "evolved" bacteria population will soon be out-competed by its "parent" bacteria relatively quickly and return to a "normal" optimal population state. As the esteemed French scientist Pierre P. Grasse has stated, "What is the use of their unceasing mutations, if they do not change? In sum, the mutations of bacteria are merely hereditary fluctuations around a median position; a swing to the right, a swing to the left, but no final evolutionary effect." Needless to say, this limit to the variability of bacteria is extremely bad news for the naturalists. This following site lists some of the loss-of-function resistance phenotypes for other bacteria/antibiotics: http://www.trueorigin.org/bacteria01.asp of special note:
Antibiotic -- Phenotype Providing Resistance
Actinonin -- Loss of enzyme activity
Ampicillin -- SOS response halting cell division
Azithromycin -- Loss of a regulatory protein
Chloramphenicol -- Reduced formation of a porin or a regulatory protein
Ciprofloxacin -- Loss of a porin or loss of a regulatory protein
Erythromycin -- Reduced affinity to 23S rRNA or loss of a regulatory protein
Fluoroquinolones -- Loss of affinity to gyrase
Imipenem -- Reduced formation of a porin
Kanamycin -- Reduced formation of a transport protein
Nalidixic Acid -- Loss or inactivation of a regulatory protein
Rifampin -- Loss of affinity to RNA polymerase
Streptomycin -- Reduced affinity to 16S rRNA or reduction of transport activity
Tetracycline -- Reduced formation of a porin or a regulatory protein
Zwittermicin A -- Loss of proton motive force
Hope I helped, Corey. bornagain77
Corey busted me on not looking up a study: "When you say 'and I would be willing to bet that severe flaws could be found in their study when thoroughly hashed out' it seems malicious to me. Rather than assuming flaws, you could actually check I'm sure. If there aren't flaws, then wouldn't you feel silly saying what you did?" So, Corey, I looked into it and found: http://www.sciencedaily.com/releases/2007/09/070904151351.htm Of special note: "While I don't think we can conclude that the mice we created with the ultraconserved elements deleted are normal, we can confidently conclude that the presence of the ultraconserved elements are not required for the viability of the organism." Edward Rubin, Director of the Joint Genome Institute. Note this sentence, Corey: "While I don't think we can conclude that the mice we created with the ultraconserved elements deleted are normal..." Thus, Corey, even the scientist who conducted the study conceded that he could not conclude that they were "normal". Thus, for the purpose of trying to establish solid evidence on which to base the radical front-loading scenario, the evidence, according to the scientist's very own words, is clearly not substantial enough. bornagain77
Thanks ari-freedom, that was very interesting. Actually, I think this "nonrandom variation" evidence of Dr. Spetner's, that you pointed out, fits in very well with the ID/Genetic Entropy model. Whereas environmental cues contradict the neo-Darwinian dogma (and are actually ignored by them, according to Spetner), environmental cues acting on already present information in the "parent" species genome readily explain the "rapid" burst of radiative adaptation following the novel "parent" species' abrupt appearance that we consistently see in the fossil record. i.e. Evolution has no solid answers for this enigma, whereas we do. In fact I have seen several papers pointing out the insufficiency of natural selection to produce rapid changes in and of itself (Sanford; Genetic Entropy: natural selection "noise"). Thus "nonrandom variation" fills a gap in the ID/Genetic Entropy model by supplying the proper mechanism for the rapid sub-speciation witnessed. But as a starting position I will hold that all "nonrandom variation" will also conform to the overriding principle of genetic entropy, since this position is more in harmony with the Law of Conservation of Information and the Second Law of Thermodynamics, as well as with the evidence in the fossil record, than the non-random front-loaded model is. I just keep thinking about the enormous amount of time on earth that "unused" genetic information had to survive without any damage in "primitive organisms", and the whole thing just doesn't "feel" right to me at all. As well, the fossil record shows a very consistent pattern. "As Niles Eldredge and Stephen Jay Gould pointed out almost three decades ago, the general pattern for the evolution of diversity (as shown by the fossil record) follows precisely this pattern: a burst of rapid diversity following a major ecological change, and then a gradual decline in diversity over relatively long periods of time." Allen MacNeill PhD. Another trouble with the radical front-loading model is that all adaptive radiations of sub-species can be persuasively argued to lose genetic information, for there is a detectable loss in variability from the parent species, as with this cichlid study: African cichlid fish: a system in adaptive radiation research http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1635482 of special note: Interestingly, ecological opportunity (the availability of an unoccupied adaptive zone), though explaining rates of diversification in radiating lineages, is alone not sufficient to predict whether a radiation occurs. The available data suggest that the propensity to undergo adaptive radiation in lakes evolved sequentially along one branch in the phylogenetic tree of African cichlids, but is completely absent in other lineages. Instead of attributing the propensity for intralacustrine speciation to morphological or behavioural innovations, it is tempting to speculate that the propensity is explained by genomic properties that reflect a history of repeated episodes of lacustrine radiation: the propensity to radiate was significantly higher in lineages whose precursors emerged from more ancient adaptive radiations than in other lineages. Thus, as you can see, the evolutionists are mystified that the radiations are not happening for the "sub-species" of cichlids but are always radiating from the "more ancient" parent lineage.
This fact is totally contrary to what we would expect to find if the variation found in the sub-species were truly wrought by random mutations in the DNA generating novel information for variability! And this result is to be totally expected if the parent species were indeed created with a certain amount of flexibility for adaptation to differing environments already programmed into its genetic code! I will have to look deeper into this, and make sure I have all my bases covered, but I feel Spetner's work will fit in very well with the ID/Genetic Entropy model. bornagain77
Bettawrekonize I am sure that garlic plays a part in a well balanced diet but the key phrase is "well balanced." I do not buy for one minute the idea of a "super" food or herb, just as I don't believe that pharmaceutical drugs will solve all our problems. ari-freedom
When you say "and I would be willing to bet that severe flaws could be found in their study when thoroughly hashed out" it seems malicious to me. Rather than assuming flaws, you could actually check I'm sure. If there aren't flaws, then wouldn't you feel silly saying what you did? There probably would be some sort of "flaw" but that's kind of how science works, eh? You do an experiment and see what it tells you. Then you realize that isn't the final answer and you go do another experiment and see what that tells you. Corey
Yes, you. Thanks. I'm thinking about the experiments in chapter 7, pages 187-191, or, for example, the cichlid/guppy experiment on page 206. ari-freedom
ari-freedom you ask, I would like to know how you would explain the examples shown in Spetner’s book. Is the question for DaveScot or me? If me, I happen to have Spetner's book, so if you could give me a page number I would be happy to look it up and see how it fits into Sanford's work. bornagain77
For the environmental cue to have something to work on in the radical front-loading scenario, should not the scenario be required to demonstrate a tangible amount of radically novel and beneficial morphology through these mutational studies? i.e. Should not radically new structures appear at least some time in any one of these extensive studies?
The trick would be to find a species that has not yet self-terminated. Now if that is not immediately observed, the other part of the hypothesis, the conservation method, could be found. Patrick
We know that there are at least several cases of pleiotropy, where one gene codes for more than one trait, and polygeny, where one trait is coded by more than one gene. I would like to know how you would explain the examples shown in Spetner's book. ari-freedom
Thanks DaveScot, you gave me quite a bit to chew on and clearly pointed out that there is quite a lot I/we have left to learn. I guess you already know what I'm going to point out as to the weakness of your scenario. Extensive mutational studies of what I would consider more primitive life-forms (bacteria, worms, fruit flies, etc.) have never shown even a slight minor beneficial effect to support the tangibility of the front-loading hypothesis. This includes the infamous HOX gene studies. For the environmental cue to have something to work on in the radical front-loading scenario, should not the scenario be required to demonstrate a tangible amount of radically novel and beneficial morphology through these mutational studies? i.e. Should not radically new structures appear at least some time in any one of these extensive studies? To me this (the radical front-loading hypothesis) is a nascent hypothesis "still" searching for actual validation, since the hypothesis has not yet been physically demonstrated to actually produce the radically novel structures that it is absolutely required to produce in order to be considered compellingly valid scientifically. This is something akin to the evolutionists trying to find that elusive set of "beneficial" mutations that would validate their hypothesis also. To me, the following is your strongest piece of circumstantial evidence: "There is also compelling experimental evidence of highly conserved DNA (between mouse and man, conserved for over 100 my of divergent evolution) where its function could not be identified in the mouse when thousands of conserved sequences were knocked out all at once from the mouse and the GM mice were healthy and indistinguishable from normal mice." As you have already pointed out, there is a huge amount that we do not understand as far as genomes are concerned. I could raise several questions, like: how extensively were the altered mice tested for robustness? How much adaptive radiation was lost to the offspring of the mice? And I would be willing to bet that severe flaws could be found in their study when thoroughly hashed out. As far as the larger genomes you pointed out are concerned, I could ask: is there a certain "encoding" reason why the larger genomes could not be "compressed" as radically as man's genome is? (Remember, it is 12 codes thick in some places.) So yes, there is a huge amount for me/us left to learn, and to tell you the truth, I really feel quite naked sometimes, as far as my knowledge is concerned, when asking these cutting-edge questions at places where man has just barely started to unravel the mystery of life. bornagain77
bornagain77 The front loading hypothesis doesn't require that there be any "code in waiting" today. Consider the possibility that creative evolution has come to an end and there's nothing left in the preprogrammed sequence to produce. In other words, the program has self-terminated with rational man as its final product. In the front loading hypothesis phylogeny parallels ontogeny. Both are self-terminating when a final product (an adult form) is produced. That said, only a diminishingly small fraction of the genomes on this planet have been sequenced and only a tiny fraction of what's contained in those that have been sequenced is understood. It's premature, to say the least, to claim that there is no evidence any of that undiscovered country is "code in waiting". The c-value enigma is alive and well. There are organisms ranging from bacteria to frogs to water lilies with grossly large amounts of DNA that are quite obviously not needed to sustain their present form, since there are closely and distantly related forms phenotypically as complex or more complex with far less DNA. There is also compelling experimental evidence of highly conserved DNA (between mouse and man, conserved for over 100 my of divergent evolution) where its function could not be identified in the mouse when thousands of conserved sequences were knocked out all at once from the mouse and the GM mice were healthy and indistinguishable from normal mice. Natural selection is the only known conservation mechanism and it is incapable of conserving unexpressed genomic content. The best explanation at this time for the evidently unexpressed but highly conserved mouse DNA is that some mechanism is conserving it for contingency. This is exactly what the front loading hypothesis requires - contingency conservation. I'd call that a good bit of evidence for front loading. Identifying the contingency function (if any) of that conserved DNA would be much more illuminating, but at the present time it is beyond our experimental and/or analytic capability to determine what function(s) unexpressed DNA performs. If it doesn't closely resemble DNA with a known function and nothing happens when it is deleted, that's essentially a brick wall we can't see past with the current state of the art. DaveScot
ari-freedom comments: "I'm confused. I thought the whole lesson from disproving junk DNA was that we can't underestimate the levels of complexity in the genome." The genome is turning out to be extremely complex, far more complex than some evolutionists, even now after ENCODE, are willing to admit (the finding, when thoroughly fleshed out, will be absolutely crushing to ANY evolutionary scenario). Yet this extreme complexity in the genome, far from enabling more information to be generated in the genome, is, from all preliminary evidence and scientific principles we have, severely poly-constraining to the CSI that is originally put into the "parent" species genome (Sanford; Genetic Entropy). As far as we currently know, there is no known natural law or sequence of events that will ever be able to naturally generate information (CSI) in a genome by purely natural methods (William Dembski, Werner Gitt). So from what we currently know in science, we can expect that there is no "information generating code" in the genome awaiting some hypothetical environmental cue. From this evidence, and several other lines of evidence, we can now build a case for Genetic Entropy, which states that all favorable radiations/adaptations from the parent species will always occur at a loss of information from the CSI that was originally present in the parent species genome. Can an infinitely powerful Designer mess with our heads and gradually insert more complex information on top of the poly-constrained information that is present? Yes, He could, but the fossil record, and all the available long-term genetic evidence I've been able to find (which is not much right now), indicates that radical CSI was inserted at the level of parent species or parent order, thus also conforming to Dr. Behe's tentative "Edge" estimate. bornagain77
Dog_of_War: Surely you don't see sickle cell as an example of good design? It seems to me rather more indicative of evolution, just like the adaptations in bacteria that are the focus of Gil's post: an adaptation beneficial in limited situations that comes at an expense of function in a wider environment. George DW
wow, I just read my comment, can you tell I was burnt out from studying for finals? gore
My thoughts are with you, ari-freedom; it almost seems he forgot which side of the opinion he is sharing. Is he saying the scenario is for disproving junk DNA, as junk has no empirical basis? Wow, I guess I have been reading nothing but lies when it comes to that subject. gore
I think the difference between computer viruses and these bacteria is that computer viruses were designed maliciously, while if bacteria were designed (and I'm not so sure yet that they were) then I think that the idea would be that the designer was not malicious. Corey
I'm confused. I thought the whole lesson from disproving junk DNA was that we can't underestimate the levels of complexity in the genome. ari-freedom
bornagain, could you clarify what you mean when you say "bacteria. . . are apparently designed optimally." Also, I'm not so sure that antibiotic resistance is always as deleterious as some of these posts make it out to be. For example, let's take the chemistry of the β-lactamase that gives bacteria resistance to penicillins. It isn't a damaged piece of the cellular machinery that stops penicillin from working, but rather it breaks the penicillin molecule down so that it can't function. That is, the bacteria doesn't just throw out (or break) the piece of itself that penicillin is attacking. Using this enzyme it actually detoxifies the antibiotic. Corey
I think that the sickle cell/malaria example is a perfect proof of man's place in the world. That the design of malaria cannot better the design of sickle cell is telling. Dog_of_War
Joseph, you stated: "Bornagain77- again you miss my point. RV/NS is IRRELEVANT if malaria was designed to evolve via Dr Spetner's 'non-random evolutionary hypothesis'." The "non-random evolutionary hypothesis", as you term it, is what I call the "radical front-loading scenario". Many IDists hold/held your position; I believe DaveScot holds your view, though I don't know if he currently does, since he has just finished reading Dr. Sanford's book "Genetic Entropy". The problem with this "non-random evolutionary hypothesis" is that there is absolutely no evidence that there is some "unused code in waiting" waiting to be expressed in a genome from some environmental cue. In fact, the best evidence we currently have indicates that this scenario is not plausible. The evidence I'm referring to is the recent ENCODE findings that point to a virtually 100% poly-functional, and thus virtually 100% poly-constrained, genome for humans. This finding, if it holds for microorganisms as the preliminary evidence is indicating it will, will all but rule out the "radical front-loading scenario". i.e. There will be no unused code in the genome to "turn on" and produce novel structures with. The scenario you refer to had a fairly large amount of plausibility when the genome was thought to be mostly "junk". But now that "junk DNA" has been disproved, this scenario is severely deprived of any empirical basis. bornagain77
GilDodgen in No. 40 states:
Malaria was designed not to be totally cured? No, malaria was not designed to be capable of evolving to become resistant to anything.
How do we know this, categorically, without knowing the intentions of the designer? Seems a bit, well, presumptuous. "Trust in the LORD with all thine heart; and lean not unto thine own understanding. In all thy ways acknowledge him, and he shall direct thy paths." (Proverbs 3:5,6) JWarner
Malaria was designed not to be totally cured?-Leo S
As far as we know, what malaria was designed for and what it does today are two totally different things. Again, it's the effects of random processes on a once very good design. Another point: we don't know whether or not malaria has conquered sickle-cells. For all we know there could be a population of such malaria somewhere that just hasn't shown up in sickle-celled humans. Bornagain77- again you miss my point. RV/NS is IRRELEVANT if malaria was designed to evolve via Dr Spetner's "non-random evolutionary hypothesis". Joseph
I wonder: what if malaria had an original benign and symbiotic purpose? New Clues to Malaria's Origin http://bric.postech.ac.kr/science/97now/01_7now/010720c.html of special note: The researchers sequenced 25 introns from eight strains of P. falciparum. They found that just three base pairs differed between the strains. From that, they calculated that modern P. falciparum all trace their ancestry to a single small population that began to flourish between 9500 and 23,000 years ago--about the time when early hunters set up farming communities. That agricultural revolution increased population densities and probably led to more stagnant pools in which mosquitoes can breed--two factors favoring the spread of malaria, Wirth says. The results match that of another study, published in Science last month (ScienceNOW, 25 June). Hmmm, bet the YECs could do something with that, LOL. bornagain77
Malaria was designed not to be totally cured? No, malaria was not designed to be capable of evolving to become resistant to anything. GilDodgen
So dacook, in diagnosing a patient, you are saying it is not always possible to know the exact pathogen you're dealing with? For Dr. Behe's method to be effective, the specific pathogen would have to be identified and hit with a drug that surpasses the triple CCC limit for bacteria. This sounds like a major obstacle, since the specific pathogen would have to be known beforehand. bornagain77
"until the organism was identified" - and after it's identified, only one antibiotic was used, right? It usually takes quite some time to work... I'm not a doctor, so you'd know better, but I had pneumonia (ended up in the hospital for 4-5 days) and took antibiotics for quite some time, but it was only one antibiotic. ari-freedom
I guess by "always" I meant "ever since before I went into medicine." I entered medical school in 1983. It was (and is) SOP to put a patient with a serious infection on 2 or 3 antibiotics until the organism was identified. I don't know how long before that. For non-serious infections only one antibiotic is still commonly used, with follow-up. Just 15 minutes ago I put a patient on a single antibiotic for a minor-looking skin infection following a mole removal done by someone else. If it doesn't improve in a few days I'll broaden coverage. For minor, common infections it's usually not worth the cost, inconvenience, and possible side effects to start with multiples. But for serious ones (the kind I usually see) it always is. dacook
Joseph, you stated: "It's a population thing. The goal is for the population to survive, not each individual. It makes more sense that way: with multiple individuals trying to resolve an issue in different ways, one is bound to find a solution." That is the whole point of Dr. Behe's book. Dr. Behe looked for what is reasonable (the Edge) for the RV/NS scenario to accomplish in the ENTIRE world-wide population of microorganisms. He estimated, from several different lines of evidence, that a triple CCC is unreasonable for the RV/NS scenario to accomplish in that entire world-wide population. That limit is the limit that ID currently stands by, and it is far below what evolution would be required to produce for living organisms. In other words, it is a barrier established by empirical evidence, and we have no reason to suspect it will ever be violated by natural means. And unlike evolution, this is very falsifiable: just clearly demonstrate a violation of the triple CCC limit and Dr. Behe's (ID's) estimate is falsified immediately. bornagain77
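To make the arithmetic behind the triple CCC limit concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are assumptions, not data: Behe's rough figure of about 1 in 10^20 organisms for a single CCC, and a commonly cited ballpark of about 10^40 bacterial cells over Earth's history. It simply multiplies probabilities as if each CCC had to arise independently in the same lineage.

```python
# Back-of-the-envelope sketch of the "triple CCC" argument (illustrative only).
# Assumed figures, not measurements: roughly 1 in 10^20 organisms for a single
# CCC (Behe's ballpark), and roughly 10^40 bacterial cells over Earth's history.

P_CCC = 1e-20       # assumed chance of a single CCC arising in one cell lineage
CELLS_EVER = 1e40   # assumed total bacterial cells in the planet's history

for n in (1, 2, 3):                 # single, double, triple CCC
    p = P_CCC ** n                  # joint probability if all steps must co-occur
    expected = CELLS_EVER * p       # expected number of origins, ever
    print(f"{n}x CCC: p = {p:.0e}, expected origins over Earth's history = {expected:.0e}")
```

With these assumed figures, a single CCC is expected to arise many times, a double CCC roughly once, and a triple CCC effectively never, which is the sense in which a triple CCC target would "exhaust the probabilistic resources" of undirected mutation.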
dacook -- "The ID (Infectious Disease) consultant I use routinely recommends multiple antibiotics for infections. We have always done this if the organism is unknown." Do you know how long this has been SOP? tribune7
ari-freedom (post 28), first comment: I agree. In fact, some doctors used to prescribe or recommend probiotics to supplement antibiotics, so that the antibiotics would kill the bad bacteria and the probiotics would replace the good bacteria. If bacteria were designed for a useful purpose in humans, then it stands to reason that the bad bacteria may be degenerates of the original useful bacteria. Second comment: I have read studies indicating that raw garlic kills all kinds of bacteria and that it is virtually impossible for them to become resistant to it. It even destroys many viruses. Of course, pharmaceutical corporations have an incentive to skew such information with biased studies and to tell people otherwise. See, when garlic is in its natural, intact form, it doesn't contain allicin. Allicin is toxic to living cells, including some of your own. However, when the clove is crushed (i.e., if an insect bites into it), allicin is produced (if I recall correctly, by the enzyme alliinase acting on the compound alliin), and the allicin kills the insect (or wards it off) and then breaks down quickly, before it can do much damage to the plant itself. Garlic was designed to resist resistance. Bettawrekonize
If bacteria had “built-in responses to environmental cues” ala Dr Spetner’s “non-random evolutionary hypothesis”- ie they were designed- then wouldn’t it be possible for them to develop a triple CCC?
Perhaps, but the evidence is that malaria, for example, was not designed to have this capability, since it has not been able to evolve resistance to sickle cell.-Gil
1- Not every problem has a solution. 2- The random effects that do exist could have crippled its capabilities. The same or similar random effects may be what made it a killer. All I am saying is that if we ever did see a triple CCC arise in, say, one or two generations (some small number), it would support Dr Spetner, not Darwin.
Thus it stands to reason that the odds against any one bacterium having all the mutations necessary to combat several different antibiotics, deployed at the same time against a specific pathogenic bacterium, are far, far greater than the odds against it developing just one deleterious mutation for resistance.-bornagain
It's a population thing. The goal is for the population to survive, not each individual. It makes more sense that way: with multiple individuals trying to resolve an issue in different ways, one is bound to find a solution.
1) if bacteria were designed then maybe it is not such a good idea to try to kill them in the first place.- ari-freedom
Computer viruses are designed also. And it is definitely a good idea to kill them. Just because things are designed does not make them "good". Joseph
If bacteria had “built-in responses to environmental cues” ala Dr Spetner’s “non-random evolutionary hypothesis”- ie they were designed- then wouldn’t it be possible for them to develop a triple CCC?
Perhaps, but the evidence is that malaria, for example, was not designed to have this capability, since it has not been able to evolve resistance to sickle cell. By the way, I propose that my suggestion represents a falsifiable prediction. What falsifiable prediction does Darwinian theory make in this arena? GilDodgen
If natural selection favors crippling mutations, such as those of the South Park turkey :) , blind cave fish, wingless beetles and resistant bacteria, then evolution would be even less likely than if only random mutations were involved. Evolutionists should be advised to stay away from using natural selection as a mechanism. ari-freedom
AOFM: I'm not a South Park watcher, but I think I get what you're talking about. The bird has a defect that protects him from death, but hardly makes him more advanced, sophisticated or complex than his healthy turkey brothers. Is that right? If so, then I think Dr. Behe should put a picture of the South Park Turkey in his next edition of Edge of Evolution. ;) russ
Hi. This is my first post. 2 points: 1) if bacteria were designed then maybe it is not such a good idea to try to kill them in the first place. I smiled at the suggestion to roll in the dirt. There does seem to be a lot of support for the hygiene hypothesis. Perhaps what we need to do is expose people to other bacteria instead of trying to kill everything. 2) it is probably not healthy for the human body to take so many drugs at one time. It's harder to control for complications. Ari ari-freedom
Joseph, here is a general outline of how multiple antibiotic resistance has built up in bacteria: A pathogen is afflicting man. Antibiotic A is developed, and for a time Antibiotic A is very effective in relieving man's suffering; but somewhere in the world, in an "unlucky" individual, a "lucky" bacterium acquires a "lucky" (though otherwise deleterious) mutation such that the antibiotic no longer affects it. The mutated bacteria spread and afflict man once again. Antibiotic B is now developed, and the same cycle is repeated, as with Antibiotics C, D and so forth. Thus it stands to reason that the odds against any one bacterium having all the mutations necessary to combat several different antibiotics, deployed at the same time against a specific pathogenic bacterium, are far, far greater than the odds against it developing just one deleterious mutation for resistance. This was only a rough outline, but I hope I made clear what is being proposed when Gil talks about a triple CCC. bornagain77
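To put illustrative numbers on the serial-versus-parallel point above, here is a small sketch. Both figures are assumed round numbers, not clinical measurements: a per-cell chance of about 1 in 10^9 of already carrying resistance to any one drug, and about 10^12 bacterial cells in a serious infection; the resistance mutations are treated as independent.

```python
# Illustrative sketch of serial vs. parallel antibiotic deployment.
# Assumed round numbers only; not clinical data.

P_RESIST_ONE_DRUG = 1e-9   # assumed chance a given cell is resistant to one drug
POPULATION = 1e12          # assumed bacterial cells in a serious infection

def expected_resistant_cells(n_drugs: int) -> float:
    """Expected number of cells already resistant to all n drugs at once,
    treating the resistance mutations as independent."""
    return POPULATION * P_RESIST_ONE_DRUG ** n_drugs

for n in (1, 2, 3):
    print(f"{n} drug(s) at once: ~{expected_resistant_cells(n):.0e} pre-existing resistant cells")
```

Under these assumptions a single drug leaves on the order of a thousand already resistant cells to take over, while three drugs at once leave essentially none, which is the sense in which parallel deployment denies the population its easy single steps.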
In the meantime, medical doctors should prescribe multiple antibiotics for all infections, since this will decrease the likelihood that infectious agents can develop resistance through stochastic processes.
The ID (Infectious Disease) consultant I use routinely recommends multiple antibiotics for infections. We have always done this if the organism is unknown. dacook
angryoldfatman, I think you are completely correct about the surgery line of thinking. It is very reminiscent of something that Sal Cordova does a good job of reporting: a mild form of eugenics that is inherent in the Darwinian model. Dog_of_War
And the methods that bacteria use to resist antibiotics never fail to remind me of how Gobbles the Crippled Turkey on South Park avoided a giant decapitating saw blade that killed scores of his fellow healthy turkeys. angryoldfatman
Another example of a little known science-stopping incident brought about by Darwinism: unnecessary harmful surgery. I remember a big push back in the 1970s for children to have their tonsils removed even if they weren't sick because tonsils were considered vestigial organs. They went so far as to push this propaganda on Saturday morning cartoons like Fat Albert, if I remember correctly. "We don't know what these body parts do, and according to Darwinian evolution there are going to be body parts that are useless, so let's just carve these things out." To me that's a lot more dangerous than the results of a "God did it" attitude. angryoldfatman
Davescot (14) said: "getawitness is no longer with us." He's not? Darnit. I was looking forward to hearing his response to what russ (12) said: "Since the consensus THEORY doesn't comport with already available DATA, it would seem that NDE's goalposts are more in need of relocation than Michael Behe's, don't you think?" He answered with ad hominem, which was removed with the author. -ds Clumsy Brute
While I understand the test of Darwinian chance and its obvious failure here, I would be interested in an explanation of how the medical benefit comes from this prediction. It seems to me that the idea for this benefit comes straight from the failure of Darwinian evolution and not necessarily from the ID prediction itself. That's okay, I guess, but a justification for this line of medical treatment that stems entirely from the design paradigm is probably going to be asked for. I'm sorry, you might have explained it and I just don't get it. If that's the case, I apologize. Dog_of_War
Umm, where design is concerned, probabilities go out the window. However it is also obvious, from experience, that one design can trump another. The trick is to first understand the design you are trying to trump. BTW, it doesn't take any "time" for bacteria to develop resistance. As a matter of fact, the resistance was already present when the antibiotic was introduced. All the antibiotic does is wipe out those which do not already have the resistance in place, and that allows those which have the resistance to proliferate. And that is why, in order to do the following: "Design a chemical or protein which would require a triple CCC to defeat its toxic effects on a bacterium, and it will exhaust the probabilistic resources of blind-watchmaker mechanisms to counteract the toxic effects." we would need to know the genome of every bacterium in the population we want to eradicate. And that isn't practical. Joseph
Joseph asks: "if..they (bacteria) were designed-then wouldn't it be possible for them to develop a triple CCC?" In a word, NO. The odds against it are far too great. (And being designed is actually a limiting factor for bacteria, since they are apparently designed optimally, with no scientifically observed violation of Genetic Entropy.) It takes time and chance for bacteria to develop antibiotic resistance; thus, if you make the required change improbable enough (a triple CCC), the antibiotic treatment will always stay effective, for the odds against the bacteria ever developing a triple CCC in any reasonable amount of time are far too great. bornagain77
I’d place my bet with nature.
Was that supposed to be some type of refutation?
But then the bacteria kept developing resistance to even the toughest antibiotics.
Which should be a clue that antibiotics aren't the way to go about fighting infectious bacteria. But anyway, Gil, something to think about: If bacteria had "built-in responses to environmental cues" ala Dr Spetner's "non-random evolutionary hypothesis"- ie they were designed- then wouldn't it be possible for them to develop a triple CCC? Joseph
Gil, the present line of "multiple antibiotics" used in parallel is ineffective against the "supergerms" presently in our hospitals. But, contrary to GAW's unrealistic and very antagonistic view, I resolutely hold that Gil is correct: the most effective way to combat "supergerms" is to deny them the stepwise path they need to build resistance, by prescribing brand-new antibiotics against specific pathogens in parallel. As well, the failure to see this clear line of reasoning much earlier in medicine is indeed "yet another catastrophic failure of Darwinian presumption". As stated earlier, today's multiple antibiotics, used in parallel, are ineffective against supergerms. In fact the most effective treatment against today's "supergerms" is really the opposite of continued use of multiple antibiotics: http://www.answersingenesis.org/creation/v20/i1/superbugs.asp Of special note: "It is precisely because the mutations which give rise to resistance are in some form or another defects, that so-called supergerms are not really ‘super’ at all—they are actually rather ‘wimpy’ compared to their close cousins. When I was finally discharged from hospital, I still had a strain of supergerm colonizing my body. Nothing had been able to get rid of it, after months in hospital. However, I was told that all I had to do on going home was to ‘get outdoors a lot, occasionally even roll in the dirt, and wait.’ In less than two weeks of this advice, the supergerms were gone. Why? The reason is that supergerms are actually defective in other ways, as explained. Therefore, when they are forced to compete with the ordinary bacteria which normally thrive on our skin, they do not have a chance. They thrive in hospital because all the antibiotics and antiseptics being used there keep wiping out the ordinary bacteria which would normally outcompete, wipe out and otherwise keep in check these ‘superwimps’. If they are ‘weaker’, then why do they cause so much [harm] and misery in hospitals? These bacteria are not more aggressive than their colleagues, it is only that doctors have less power to stop them. Also, those environments which will tend to ‘select’ such resistant germs, like intensive care units, are precisely the places where there will be critically injured people, physically weakened and often with open wounds. This is why more than one microbiologist concerned about these super-infections has mused (only partly tongue in cheek) that the best thing to happen in major hospitals might be to dump truckloads of germ-laden dirt into the corridors, rather than keep on applying more and more chemicals in a never-ending ‘arms race’ against the bacteria. In other words, stop using the antibiotics (which of course is hardly feasible), and all this ‘evolution’ will reverse itself, as the bacterial populations shift back again to favour the more hardy, less resistant varieties." Yet even if we were to force the supergerms ("superwimps") to compete against their hardier parent lineage, the decay curve, i.e. the time until the supergerms were completely outcompeted into oblivion, would most likely stretch over many years. Thus that line of attack seems untenable at first glance. Man seems to be stuck: we don't have enough time to let this "natural cure" happen, so we are forced to develop brand-new antibiotics and, as Dr. Behe and Gil have pointed out, deploy them in parallel instead of serially, thus enforcing the triple CCC limit clearly pointed out by Dr. Behe. bornagain77
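As a rough illustration of the "decay curve" idea in the comment above, here is a toy two-strain competition model. The 5% fitness cost for the resistant strain and the starting frequencies are assumptions chosen only to show the shape of the dynamics; how long the decay takes in real time depends entirely on the actual fitness cost and on how fast the bacteria divide.

```python
# Toy sketch of a resistant "superwimp" strain being outcompeted by the hardier
# wild type once antibiotics are withdrawn. All numbers are assumed for
# illustration; real outcomes depend on the actual fitness cost, generation
# time, and environment.

RELATIVE_FITNESS = 0.95     # assumed: resistant strain reproduces 5% more slowly
resistant_fraction = 0.99   # assumed: resistant strain starts out dominant

generations = 0
while resistant_fraction > 0.01:
    # Standard one-locus haploid selection recursion: resistant type has
    # relative fitness w, wild type has fitness 1.
    w = RELATIVE_FITNESS
    resistant_fraction = (w * resistant_fraction) / (
        w * resistant_fraction + (1.0 - resistant_fraction)
    )
    generations += 1

print(f"~{generations} generations for the resistant strain to fall below 1%")
```

With these assumed numbers the resistant strain falls below 1% in a couple of hundred generations; whether that corresponds to days or to years in the real world depends on how fast the bacteria in question actually divide and on whether the antibiotic pressure really is removed.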
Getawitness is no longer with us. DaveScot
Excellent and straightforward thread, Gil. The following article is relevant to this topic: The dearth of new antibiotic development: why we should be worried and what we can do about it, http://www.mja.com.au/public/issues/181_10_151104/cha10412_fm.html Of special note: "Inappropriate antibiotic use is a key driver of resistance, but the reasons for such use can be complex. In developed countries, the obsession with 'zero risk' has distorted the decision-making process for many clinicians, with broad-spectrum antibiotics being used even when not indicated." Me again: This is definitely where Dr. Behe's "Edge" could be used to bring much-needed clarity to the problem we are facing with antibiotic resistance. The following page on new antibiotics (although dated 2002) is also relevant, since it discusses how certain new antibiotics accomplish their work: New Antibiotics: When to Use and When Not to Use http://www.acponline.org/ear/vas2002/new_antibiotics.htm I found that many of the new antibiotics being developed are very specific, or what they call very narrow, in their scope. bornagain77
Falsification of that edge, yes. He would be free to move the goalposts and draw another line in the sand.
But is "moving the goalposts" really a fair characterization? Edge of Evolution is an attempt to establish where the limits of evolution lie. NDE THEORY says that you can go from nothing to humans via natural processes with no intelligence. EoE says that the best available DATA indicate that the best you can do is decrease overall function in an attempt to survive. Showing somehow that NDE is even better than we thought at trench warfare does not constitute "moving the goalposts" or "drawing another line in the sand". You seem to be using those expressions merely to gain rhetorical points. Since the consensus THEORY doesn't comport with already available DATA, it would seem that NDE's goalposts are more in need of relocation than Michael Behe's, don't you think? ;) russ
Design a chemical or protein which would require a triple CCC to defeat its toxic effects on a bacterium, and it will exhaust the probabilistic resources of blind-watchmaker mechanisms to counteract the toxic effects.
Isn't that, strictly speaking, an anti-evolution prediction? You seem to be assuming that ID is the only alternative to evolution, and I think that is rather a limiting view for one committed to the process of science. After all, no one had heard of Intelligent Design 20 years ago. Why should we not consider that, 20 years from now, there may be another alternative to Darwinian explanations? specs
Gil, I was not being sarcastic. That is the second time that posters here have assumed I am some kind of Darwinist stooge. You guys really seem to be always angry about something. And my nickname was given to me by my older sister when we were young. It is too long of a story to relate. poachy
This might be a very naive post, but I would really like to improve my insight, so I will present my understanding... to be corrected. I am no biologist, but this discussion makes me think of people believing, beyond reasonable limits, that bacterial genes can be altered without limit. These "limitless possibilities for bacteria" are clearly not observed in any experiment or in reality, since "selective breeding" of dogs seems to have been more successful in creating morphological variation than the "selective breeding" of bacteria. This might be directly related to the difference in the specified complexity of the genes of dogs and those of bacteria. The question remains whether it is possible to increase the original specified complexity in the genes of bacteria. The definition of increased specified complexity is not a function of an organism's morphology and/or metabolic functions alone; a change in information content should be related to the information's previous states as well as its possible future states. My conclusion is that, for a purely mechanistic system (excluding things like art or the creative freedom of capable conscious beings), it should be easy to measure an increase in information content, and it could be done without relying on (localized) "survival benefit" as a measure. (Complex systems can be modeled precisely because their information content (mathematical relationships) can be modeled and understood. This includes chaotic behavior as well as "signal noise".) P.S. Survival is far from the only measure for design optimization. mullerpr
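As one very limited illustration of a purely mechanical information measure, here is a short sketch that computes plain Shannon information for a symbol sequence. This is emphatically not "specified complexity"; it only shows that at least one component of information content can be computed without appealing to survival benefit. The toy sequences are hypothetical.

```python
# A minimal sketch of one crude, purely mechanical information measure:
# the Shannon information of a symbol sequence. This is NOT "specified
# complexity"; it only illustrates that some component of information
# content can be measured without appealing to survival benefit.

from collections import Counter
from math import log2

def shannon_bits(sequence: str) -> float:
    """Total Shannon information (in bits) of a sequence, scored against
    its own observed symbol frequencies."""
    counts = Counter(sequence)
    n = len(sequence)
    per_symbol = -sum((c / n) * log2(c / n) for c in counts.values())
    return per_symbol * n

# Hypothetical toy sequences, for illustration only:
print(shannon_bits("AAAAAAAAAA"))   # 0.0 bits: no variability at all
print(shannon_bits("ACGTACGTAC"))   # ~19.7 bits: more variability, but this
                                    # says nothing about function or specification
```

Any serious measure of the kind mullerpr describes would have to go well beyond this, since Shannon information is blind to whether a sequence specifies anything functional.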
Falsification of that edge, yes. He would be free to move the goalposts and draw another line in the sand. getawitness
By which I mean a falsification of Mike Behe's putative "edge" of evolution. Gil's point, I think, is that a bacterium could never develop such resistance because Behe is right. So that would be an unbeatable antibiotic.
Falsification of his "edge of evolution", or simply adjusting where the edge lies? If bacteria successfully adapt to each and every antibiotic, there's no evidence that that will lead to anything more than an altered bacterium. russ
StuartHarris, it seems to me the medical community was pretty optimistic about antibiotics. That's why they prescribed them all the time. But then the bacteria kept developing resistance to even the toughest antibiotics. It seems to me we're reaching something more like the edge of antibiotic technology. getawitness
Behe addresses this in The Edge of Evolution. The medical community has led us to be too pessimistic about the effectiveness of antibiotics. Way too much credence is given to the idea that bacteria can counter all antibiotics through selective pressures. Attacking bacteria with not a single antibiotic but many at the same time does not let a bacterial strain degrade itself with a single mutation to ward off a single antibiotic. The bacterial genome is "degrading itself" in Behe's trench warfare analogy, as opposed to the untenable arms race analogy of Darwinism. With many parallel antibiotics, many simultaneous mutations are needed, thus exponentially increasing the problem of "fitness cost" -- the degradation of the overall genome that must occur to counter the antibiotic threats. StuartHarris
By which I mean a falsification of Mike Behe's putative "edge" of evolution. Gil's point, I think, is that a bacterium could never develop such resistance because Behe is right. So that would be an unbeatable antibiotic. getawitness
Developing resistance would be a feat of evolution.
By which you mean "microevolution", correct? russ
Gil, Let me make sure I understand you. So if you designed "a chemical or protein which would require a triple CCC to defeat its toxic effects on a bacterium," then resistance could not evolve, right? This would be an unbeatable antibiotic? Or else the development of resistance would bypass Behe's alleged line in the sand. Is that right? Would it matter what bacterium you were fighting? Designing it would be a feat of engineering, indeed. Developing resistance would be a feat of evolution. I'd place my bet with nature. getawitness
