Uncommon Descent Serving The Intelligent Design Community

The School of Nanometry


The finding that the protein components, called “ankyrin repeats,” exhibit such unprecedented elastic properties could lead to a new understanding of how organisms, including humans, sense and respond to physical forces at the cellular level, the researchers said. The nanometer-sized springs are also ideal candidates for building biologically-inspired springy nanostructures and nanomaterials with an inherent ability to self-repair, they reported. A nanometer is one billionth of a meter.

“Whereas other known proteins can act like floppy springs, ankyrin molecules behave more like steel,” said Piotr Marszalek, professor of mechanical engineering and materials science at the Duke Pratt School of Engineering. “After repeated stretching, the molecules immediately refold themselves, retaining their shape and strength.”

“The fully extended molecules not only bounce back to their original shape in real time, but they also generate force in the process of this rapid refolding – something that had never been seen before,” added HHMI investigator Vann Bennett, professor of cell biology at Duke University Medical Center. “It’s the equivalent of un-boiling an egg.”

I never get tired of these kinds of stories. As nanotechnology becomes an increasingly lucrative economic goal, more of these studies will be done, and more of the 'solutions' that nature provides to nanostructure problems will be incorporated into the growing "School of Nanotechnology Architecture." At some point, I think scientists will be forced to admit that nature bears the hallmark of just the kinds of elemental design features they incorporate into their own nano-level machines. And, of course, the question becomes: whence the design?

Comments

physicist: The onus to provide a means for falsification is not borne by critics of the theory; the proponents of the theory must provide it. How is it that RM+NS can be falsified? Keep in mind that the falsification methodology must be scientific. Claiming RM+NS can be falsified by God appearing and proving He did it doesn't count, because that's not science. RM+NS must in principle be able to be falsified. Darwin said that any organ that cannot evolve by incremental changes, where each increment is favored over the former by natural selection, would falsify his theory. So, we can't figure out how flagella can evolve in this manner. A negative cannot be proven, so if you try to say we have to show there's no possible way for it to evolve, then you lose: a negative cannot be proven even in principle, and so there's no way to falsify RM+NS. The onus is on YOU to show that there is a Darwinian pathway for the flagellum to evolve. Good luck. Get back to me when you think you've got something.
DaveScot
January 18, 2006 at 12:28 AM PDT

Check out comments 12, 13, 14 + Charlie
Charliecrs
January 17, 2006 at 11:44 PM PDT

"R*phi(T)*P(T|H)" What's R, phi(T), T, and H?anteater
January 17, 2006
January
01
Jan
17
17
2006
11:17 PM
11
11
17
PM
PDT
physicist says: "thanks for that. perhaps that link in #17 is an older paper—i'd be happy if someone points out a newer one."

I'm reading Dembski's book "No Free Lunch". I'm a historian, but his development of the theory in this book is clear enough for me to understand. This may have already been discussed: what's being discussed here is the Universal Probability Bound, which has to do with the number of particles in the Universe, times the number of units of Planck time (the smallest meaningful measure of time) in 14+ billion years, times the ability of a particle to change from one state to another. Dembski uses a bound of 10^150, which is far more conservative than other mathematicians quoted in the book. One uses 10^80 and another uses 10^50. Check it out.
Red Reader
January 17, 2006 at 03:31 PM PDT

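For concreteness, here is a minimal back-of-the-envelope sketch of how a 10^150-style bound is usually assembled. The three inputs are commonly cited round-number estimates, not figures taken from the article or from this thread:

```python
# Rough sketch of the universal-probability-bound arithmetic described above.
# All three inputs are assumed round-number estimates, not quoted values.
particles = 10**80             # estimated elementary particles in the observable universe
planck_ticks_per_sec = 10**45  # approximate Planck-time intervals per second
seconds = 10**25               # generous upper bound on the age of the universe, in seconds

upb = particles * planck_ticks_per_sec * seconds
print(f"bound on total elementary events ~ 10^{len(str(upb)) - 1}")  # prints 10^150
```
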
physicist: The '20' is the number of different kinds of amino acids found in proteins. The '300' refers to proteins being, on average, 300 a.a. in length. Which other 'thread'? Is it the one about 'adultery'? "Doesn't the hypothesis H have to contain both chance *and* natural selection?" But the 'H' hypothesis IS natural selection, in that NS acts on 'random' (chance) mutations.
PaV
January 17, 2006 at 02:59 PM PDT

PS: PaV, if you can explain to DaveScot what I meant about the classical and quantum mechanics example, please help in the other thread, as I'm not sure if I'm being unclear.

I have no problem at all understanding classical and quantum mechanics. I have a problem with your description of some recent theory of ID being the complement of some older theory of ID. I have no idea what your definition of older and newer theories of ID is, nor quite what you mean by a complement. "Complement" to me is an operation in Boolean algebra, and I'm pretty sure that's not what you mean by it. -ds
physicist
January 17, 2006 at 12:07 PM PDT

Hi PaV, thanks for that. Perhaps that link in #17 is an older paper---I'd be happy if someone points out a newer one. I think I understand what P is supposed to be, but maybe I am missing something. Doesn't the hypothesis H have to contain both chance *and* natural selection? Is it obvious to you why it should be 20^-300 in the case you mention? I too have to dash; I will try to think about this more later.
physicist
January 17, 2006 at 12:05 PM PDT

physicist: "On the other hand, it seems like you have more of a chance to prove something with Dembski’s law. But what are the P(T|H)?" I think Bill Dembski is best suited to answer your question. But I'm sure he'll direct you back to his papers. The formula you quoted further above seems like an 'older' formula. Both formulas contain the P(T|H). It's the probability of T as the rejection space (created by) given the 'chance' hypothesis H. So the more probable that T came about by 'chance', then the 'higher' the value of P. We're now dealing with probability spaces, of course, and for the simple example of a protein of length 300 a.a. (amino acids), this P is 20^-300th power. Got to run.PaV
January 17, 2006
January
01
Jan
17
17
2006
10:59 AM
10
10
59
AM
PDT
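To put that number next to the universal probability bound mentioned elsewhere in the thread, here is a minimal numerical sketch. The single-specific-target framing is the commenter's assumption, carried over as-is:

```python
# Compare the chance probability of one exact 300-residue sequence
# (20 possible amino acids per position) against a 10^-150 bound.
# The "one exact target sequence" framing is an assumption from the comment.
import math

log10_p = -300 * math.log10(20)   # log10 of 20^-300, roughly -390
print(f"P(T|H) for one exact sequence ~ 10^{log10_p:.0f}")
print("universal probability bound      ~ 10^-150")
print(f"shortfall: roughly 10^{abs(log10_p) - 150:.0f} below the bound")
```
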
DaveScot---I would like to explain the objections in #42 and #62 of the other thread more clearly. Please do let me know where I'm being unclear.
physicist
January 17, 2006 at 09:29 AM PDT

Sure, it seems there is debate about the appropriate value of R, but maybe it isn't always so important. What surely is important is whether P(T|H) has been calculated for any system. Surely you need to know this in order to falsify Darwinian RM+NS for some system with pattern T. Does anyone reading this know whether calculations of P(T|H) have been made?

DaveScot, I respect your experience in design, and I agree that there is design in nature---of course the debate is whether the designer can be natural selection. I'm going to make another analogy with physics (which may be imperfect). We understand very well the laws of classical and quantum mechanics, and these laws are experimentally verified for the interactions of simple systems. However, these laws would be very difficult to verify experimentally for very complex systems with lots of interactions. For example, for large numbers of particles one cannot directly make useful predictions for the motion of each particle---you could *never* practically do the calculations using the underlying classical (or really quantum) mechanical theory. What gives you confidence that the underlying theory still applies is that it works for very simple systems.

I think biologists feel like this with RM+NS. They know that reproduction of organisms takes place, and they know that it is inevitable that this reproduction is imperfect. I think the consequent mutations can be observed in the lab (and I'm not saying that IDers dispute this kind of microevolution---I realise that that's not the ID argument). So one has confidence that for very simple systems RM+NS *does* take place. The question then is: is the model of evolution via RM+NS consistent with the complexity and diversity of biological systems we observe in nature? And surely the problem is that it is difficult to test this hypothesis explicitly; not only are the calculations intractable due to interactions with the environment and other species, but we don't even know the initial conditions of the environment on the earth very precisely. So it's just not a test one can do very easily. However, presumably since RM+NS works well for very simple, idealised systems, biologists are inclined to believe that it will still work for very complicated interacting systems, much (IMO) in the same way that physicists believe simple laws underlie very complex behaviour, even if it is intractable to derive the complex behaviour directly. And yes, I think there is an element of belief involved. However, if one were able to directly contradict the predictions of RM+NS in a clever, non-brute-force calculation, then of course that would be interesting.

However, I still don't understand how the notion of irreducible complexity is more than an assertion about a given biological system. Can one prove the assertion? It just seems a very difficult thing to prove. On the other hand, it seems like you have more of a chance to prove something with Dembski's law. But what are the P(T|H)?

PS: DaveScot, I hope you noticed my reply on the other thread---I was not knocking ID for failing to distinguish hypothesis and theory; that wasn't my point at all. Sorry for the misunderstanding.
physicist
January 17, 2006 at 09:15 AM PDT

physicist: I'm a retired hardware/software design engineer. At this point you'll need to take up Dembski's mathematical theorems with someone else. The compelling evidence of design for me is irreducible complexity in molecular machinery, particularly the digital genetic code, the information it encodes, and the ribosome, which together form a robotic protein assembler able to produce all the three-dimensional components required to reproduce itself. It's so complex we haven't even cracked the algorithm for protein folding yet, which is something of a holy grail in bioengineering. Digitally programmed machinery is something I spent a successful and lucrative career designing. I know a design when I see one, and until someone can demonstrate to me in a plausible, detailed, and laboratory-verified manner how a self-replicating protein factory can self-organize, I consider Intelligent Design to be not just a live option but the only reasonable explanation for how it came into existence. This should not be censored from 9th-grade biology students by tortured interpretations of the establishment clause. It isn't quite rocket science, and to call it religion is an act of desperation by someone who knows he's obviously wrong.

But I understand Dembski's universal probability bound to be 10^150, not 10^120, and it is described as larger than the number of elementary particles in the universe. I could be wrong, and with numbers that big it probably doesn't have any practical impact on the theorems.
DaveScot
January 17, 2006 at 08:05 AM PDT

PS: this is from reading http://www.designinference.com/documents/2005.06.Specification.pdf
physicist
January 17, 2006 at 07:28 AM PDT

That seemed to chop off the end of my post. I've now read a bit more about Dembski's law. My understanding of the claim is that one rules out RM+NS as consistent with a biological pattern, T, if R*phi(T)*P(T|H) is much less than one. Have I understood that correctly? I'm not sure how one should determine R, but I know it's postulated to be 10^120. More importantly, has P(T|H) been calculated for any biological pattern?
physicist
January 17, 2006 at 07:26 AM PDT

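One reading of this criterion, paraphrased from the 2005 "Specification" paper linked above (my rendering of the notation, not a verbatim quotation), which also bears on anteater's question above about what R, phi(T), and H stand for:

```latex
% Paraphrase of the design-inference criterion from the 2005 "Specification"
% paper linked above (my rendering, not a verbatim quotation).
% Design is inferred for a pattern T when
\[
  \chi \;=\; -\log_2\!\Bigl[\,10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\,\Bigr] \;>\; 1,
\]
% which is equivalent to requiring
\[
  10^{120}\cdot\varphi_S(T)\cdot P(T\mid H) \;<\; \tfrac{1}{2}.
\]
% Here 10^{120} plays the role of R (a bound on the number of elementary bit
% operations available in the observable universe), \varphi_S(T) counts the
% patterns at least as simple to describe as T, and P(T|H) is the probability
% of T under the chance hypothesis H.
```
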
Hi, I've now read a bit more about Dembski's law. My understanding of the claim is that one rules out RM+NS as consistent with a biological pattern, T, if: R*phi(T)*P(T|H)
physicist
January 17, 2006 at 07:13 AM PDT

woctor: "The number 20^300 vastly overstates the size of the search space. The real question is, how hard would it have been for evolution to go from a pre-existing protein to any protein having the desired property, taking into account the viability or non-viability of the intermediates?" It doesn't vastly overstate the size of the search space for those scientists in the lab. And your quotes from the article simply mean that 'ankyrin' is ubiquitous--as it should be given its extraoridinary property. You further state: "First of all, evolution does not randomly assemble proteins from scratch. It works by modifying pre-existing ones. Secondly, you seem to be assuming that only a single protein in the entire space has the desired properties. This is not true." As to the first, how do you know that evolution does not randomly assemble proteins from scratch? You're simply making an assertion. When the FIRST protein came into existence, did it come from 'scratch'? And what would have been the search space for that first protein? As to the second, do you have evidence to the contrary? Do you 'know' for a fact that there are other proteins that have this same property, or are YOU assuming? As a Darwinist you must be accustomed to thinking that what you assume to be true then must be true. Doesn't work here.PaV
January 17, 2006
January
01
Jan
17
17
2006
07:08 AM
7
07
08
AM
PDT
DaveScot: Something I don't understand about Behe's assertion of IC: is the idea that he is trying to falsify Darwinian RM+NS? But AFAIK he doesn't prove that the bacterial flagellum hasn't arisen by RM+NS. Challenging Darwinian theory isn't equivalent to falsifying it, surely.
physicist
January 17, 2006 at 05:56 AM PDT

"Behe’s IC concept fails because it is defined solely with respect to the IC system’s final function. Evolution is not constrained to maintain the same function through successive precursor stages when “approaching” an IC system, so the concept of IC is mistargeted." Claptrap. This completely sidesteps the real issue - the fact that both direct and indirect Darwinian pathways have been demonstrated incapable of producing machinery whose IC core cannot be built gradualistically. You are essentially rephrasing Ken's Ko-option Kanard (cute eh?). It does not suffice.Bombadill
January 17, 2006
January
01
Jan
17
17
2006
05:36 AM
5
05
36
AM
PDT
"He was presented with fifty-eight peer-reviewed publications, nine books, and several immunology textbook chapters about the evolution of the immune system;" Courtroom theatrics. What was Behe supposed to do, fisk thousands of pages of crap right there on the witness stand? Get real. Jones read none of it and wouldn't understand any of it if he did.DaveScot
January 16, 2006
January
01
Jan
16
16
2006
11:52 PM
11
11
52
PM
PDT
woctor: Darwin himself said irreducible complexity would falsify his theory. The pesky thing about theories in science is that they aren't science if they aren't, at least in principle, falsifiable. So Darwin dedicated an entire chapter to weaknesses. Well sir, Michael Behe took up Darwin's challenge and said the bacterial flagellum was an organ that couldn't be formed from incremental changes where each change improved the fitness of the organism. This challenge will remain standing until someone produces a plausible, detailed progression of how a flagellum can be constructed by random mutation plus natural selection. No one has even come close yet. Yes, it's difficult. Be thankful he didn't choose the ribosome, because that's a lot harder. Evolution is a theory in crisis. The only thing keeping it viable is judicial fiat. That won't be available to prop it up much longer. So can you feel the love yet? I can. :-)
DaveScot
January 16, 2006 at 09:58 PM PDT

Has anyone else noticed how much woctor sounds like keiths (may he rest in peace)? woctor seems to have come along just at the right time to take keiths' place after keiths got booted. keith, I mean woctor, consistent predictability is not thoughtful.
Red Reader
January 16, 2006 at 07:29 PM PDT

Last time I checked, Behe's Irreducible Complexity was standing as strong as ever. The fact is that Darwin's mechanism of NS + RM fails to account for IC machinery. Telling me that a flagellum shares homologous components with other machines does nothing to tell me how a blind gradualistic mechanism can build it when it needs all of its parts simultaneously to function. You know... Ken Miller's canard.
Bombadill
January 16, 2006 at 07:20 PM PDT

woctor: "The key is not to find examples of “neat” design, but rather to find examples that cannot, in principle, have been produced by undirected evolution. This is what Behe attempted (and failed) to do with the concept of irreducible complexity." Here's a quote from the article: "After thousands of stretches, a pattern emerged," Marszalek said. "The molecule exhibited linear elasticity--a property that had never been seen in any other protein." The probability space for a normal-sized protein is 20^300th power. That means that scientists trying to synthetically compose a nanoparticle with this linear elasticity property would, by chance, i.e., randomly, NEVER, NEVER, EVER fabricate this protein even if they had all the time to do it from the beginning of the universe. And the Darwinist's response: Well, how do you know it didn't happen by chance? Because I can do the math. You ask IDers "to find examples that cannot, in principle, have been produced by undirected evolution." But this last example is an instance where the protein couldn't have been produced by DIRECTED (men and women in nanotech labs) evolution. What say you?PaV
January 16, 2006
January
01
Jan
16
16
2006
06:48 PM
6
06
48
PM
PDT
Red: Don't you know that if 9th graders are told that evolution is a theory, not a fact, and should be carefully studied and critically considered, it will drive western civilization back into the dark ages? I'm planning on winning, so as a precaution, in case the nattering nabobs of negativity are right (they're scientists after all, so they're probably right), I'm learning how to get along without electricity and collecting goods to use in a barter economy. ;-)
DaveScot
January 16, 2006 at 03:45 PM PDT

PaV: I hear it said by Darwin's defenders that ID is something like "the end of science". Nothing could be further from the truth. Just in the last two days I read something about how horrible it would be if kids learned ID in school. The writer said that the world is in danger of being overrun by mutant bacteria, and that if kids in school aren't taught the only real scientific theory--NDE--then there will be no one educated enough about evolution to stop the mutant threat. I do not have a link. But the opposite is true, as your article suggests.
Red Reader
January 16, 2006 at 03:27 PM PDT

Great first article, PaV! I think nano-engineers will be marveling at and learning from the examples given us by the designer of life for a long time to come.
DaveScot
January 16, 2006 at 02:30 PM PDT