Uncommon Descent Serving The Intelligent Design Community

Rube Goldberg Complexity Increase in Thermodynamically Closed Systems


A thermodynamically closed system that is far from equilibrium can increase the amount of physical design within it, provided it is either front-loaded or contains an intelligent agent (such as a human).

A simple example: a human on a large nuclear-powered spaceship can write software, compose music, and produce many other designs. The spaceship is closed but far from equilibrium, so complexity can still increase because of the human intelligent agent aboard.

Consider, then, a robot whose sole purpose is to make other robots, whether like or unlike itself, in a similarly thermodynamically closed system. It can do this provided the necessary software is front-loaded into the robot.

Can the robot make something more irreducibly complex than itself in such a thermodynamically closed environment? I’d say, “YES,” but in a qualified way: it can evolve such a thing provided it is front-loaded with the goal of making robots with more IC than itself.

A simple illustration: write a piece of software that can duplicate itself; it is then essentially a software robot. Make the software such that each generation must go through a useless Rube Goldberg ritual, like processing a set of randomly generated short passwords (say, 3 characters each). What do I mean? The first generation would look like:

String password_1 = "ABC";

if ( password_1.equals( "ABC" ) ) {
    // proceed with replication....
}

With each generation, the robot lineage is pre-programmed so that each offspring increases the number of passwords it must process in order to procreate. Generation 2 would have software like:

String password_1 = "ABC";
String password_2 = "123";

if ( password_1.equals( "ABC" )
     && password_2.equals( "123" ) ) {
    // proceed with replication....
}


Again, each generation adds useless complexity. By useless, I mean Rube Goldberg complexity.

Thus by the millionth generation, the robot must process a million passwords in order to procreate the next generation, whereas the first generation only had to process one password.
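The generational scheme described above can be sketched in runnable form. The following is a minimal illustration, not code from the post; the class name RubeGoldbergRobot and its methods are hypothetical. Each robot carries a list of passwords that must pass the ritual before replication, and each offspring inherits all of them plus one new randomly generated password:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of the post's "software robot" lineage.
public class RubeGoldbergRobot {
    private static final Random RNG = new Random();
    private final List<String> stored;   // passwords the robot carries
    private final List<String> expected; // values each password must match

    public RubeGoldbergRobot(List<String> stored, List<String> expected) {
        this.stored = stored;
        this.expected = expected;
    }

    // The pointless ritual: every carried password must equal its
    // expected counterpart before replication may proceed. Knocking
    // out any one check (or password) halts replication.
    public boolean ritualPasses() {
        if (stored.size() != expected.size()) return false;
        for (int i = 0; i < stored.size(); i++) {
            if (!stored.get(i).equals(expected.get(i))) return false;
        }
        return true;
    }

    // Replicate: the offspring inherits every password and, per the
    // front-loaded goal, gains one extra randomly generated password.
    public RubeGoldbergRobot replicate() {
        if (!ritualPasses()) throw new IllegalStateException("ritual failed");
        String extra = randomPassword(3);
        List<String> childStored = new ArrayList<>(stored);
        List<String> childExpected = new ArrayList<>(expected);
        childStored.add(extra);
        childExpected.add(extra);
        return new RubeGoldbergRobot(childStored, childExpected);
    }

    public int passwordCount() { return stored.size(); }

    private static String randomPassword(int len) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < len; i++) {
            sb.append((char) ('A' + RNG.nextInt(26)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Generation 1 carries the single password "ABC".
        RubeGoldbergRobot robot = new RubeGoldbergRobot(
                new ArrayList<>(List.of("ABC")),
                new ArrayList<>(List.of("ABC")));
        for (int gen = 1; gen < 10; gen++) {
            robot = robot.replicate();
        }
        // prints: Generation 10 checks 10 passwords
        System.out.println("Generation 10 checks "
                + robot.passwordCount() + " passwords");
    }
}
```

Generation N must pass N password checks before procreating, so the ritual grows linearly with the generation count.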

The irreducible complexity in such a robot is substantially higher than in the first generation. One can see how this could be analogous to an increase in IC in biology, if the strategy of increasing Rube Goldberg complexity were a front-loaded goal.

Do I believe this is how complexity evolved on the Earth? Not most of it; maybe some of it at best. I believe genetic entropy dominates, but I put this idea on the table for consideration.

I use the definition of closed systems from

Can the robot make something more irreducibly complex than itself in such a thermodynamically closed environment?
Living systems are therefore in the thermodynamic sense open systems, and the metabolism - that is, the turnover of free energy - found in all living organisms is a prerequisite for their existence [225]. - Information and the Origin of Life, p. 131

[Footnote 225:] The fact that living systems are open systems is in accordance with the fact that biological information can only arise in open systems.
SalC: I have made a comment here that should be useful for onward reflections. KF kairosfocus
I'm fully in agreement with some degree of front-loading in biological life. I think we are only scratching the surface of what exists in that realm. I don't even have a problem with saying that an irreducibly complex system might be produced via front loading. IC really only presents an argument against natural approaches, and any number of front-loading approaches might produce it. However, I don't think your password scheme counts as IC. I'm certainly not saying that the parts have to be "absolutely essential," as you put it. That's an impossible standard. But they have to be essential to the architecture or mechanism in operation. Every part of the mousetrap has some required role for something that operates like a snap-mousetrap. Your passwords simply don't. They are, from the perspective of how the system operates, simply pointless. Put another way, the definition requires that each part "contribute to the basic function" of the system. Your passwords don't contribute to the basic function of replication.
If one came across a later generation robot, without seeing the part of the system that prescribes increasing the number of passwords, one might mistakenly suppose the robot password system had to appear all at once since knocking out one part would disable the whole replication system.
I don't think so. In such a case, we'd identify that the system could be simplified by removing extra passwords, and we wouldn't conclude that all the passwords had to appear at once. (Behe requires that an IC system be understood well enough that we could make such a determination.) Winston Ewert
Where did I say that the system had to be necessary for survival?
You didn't, and my mistake. Sorry. But with respect to the robot's reproductive ability, every new password adds to the number of parts of the IC replication system according to Behe's first version -- if you knock out one part of a robot's password system, the robot's replication system ceases to function. The fact that the robot could implement replication with fewer steps via alternate means does not mean the extra passwords are not part of the increase in IC. I pointed out the flaw in thinking IC is refuted by simpler ways of doing the same task, like holding a glass. Evolution News reported on an idea I put forward at UD: http://www.evolutionnews.org/2012/03/illustrating_ir057831.html

If one came across a later-generation robot, without seeing the part of the system that prescribes increasing the number of passwords, one might mistakenly suppose the robot's password system had to appear all at once, since knocking out one part would disable the whole replication system. And if three-character passwords seem too short, it's not a big deal to make each password 1,000 characters long. One could envision the robot implementing the added complexity not only via software but perhaps via hardware Rube Goldberg mechanisms as well, such that the final generations are physically more complex in terms of the number of interlocking parts needed for reproduction.

The problem is that the word "essential" takes on many meanings. One can mean absolutely essential, as in "that's the only way something can be done," or one can mean that, given a certain architecture, something is essential in the sense that if it is missing the system fails. A lot of IC in biology strikes me as essential only with respect to a given architecture; it is not absolutely essential in the sense that accomplishing a given function can be achieved only by one route. A good example is insulin.
It is "essential" in the sense that if it is gone, the creature dies, but it is not essential in the absolute sense since lots of creatures don't use insulin in the first place. ======= As a general addendum: The discussion has relevance to the issue of whether some of the existing complexity (including IC complexity) could have evolved from prior forms via intelligently designed front loading, and what kind of front loading might be needed to allow this. Behe was certainly sympathetic to front loading, and the robot example shows to me that at least it is possible a front loaded goal of increasing complexity might possibly cause more IC to emerge. It certainly isn't Darwinian evolution, if anything it is anti-Darwinian. How I think this may relate to present biology is that a lot of supposed junk DNA does seem to be oriented to evolving organisms with modest new features. Polyploidy, transposable elements, repeats, etc. seem oriented to evolving adaptive functionality not expressed in a current population. As I've said before, I think biological systems perform lossy decompression of modest front loaded features. I'm not averse to the idea a biological system can take input from the environment and do a modest amount of functional self-engineering. I believe that because such quasi-AI systems seem within realm of human designs, so I would not be surprised to see them in biological designs. Even many YECs are interested in front loading because of Noah's Ark and the limited number of supposed ancestral species... Any way, I'm in the minority of ID proponents thinking that IC complexity can increase through front loaded design. Thanks for your response. scordova
Where did I say that the system had to be necessary for survival? I certainly didn't mean to. I don't doubt that lots of complexity is "showing off" but I fail to see the relevance. Winston Ewert
But imho, biology has a lot of IC that is not absolutely necessary for replication; there is a lot of complexity almost just for acrobatic show. If complexity were just about replication, we'd have no need for complex multi-cellular sexually reproducing species. I think it is possible some IC evolved from a front-loaded strategy. IC is not about what is absolutely necessary for survival or whatever. A design feature may be one that would actually be selected against, and biology is rich with complexity that is selected against; that's why there is a lot of extinction of more complex forms. The peacock's tail is an example of complexity that doesn't have to be there, but there it is. scordova
I don't think this is an example of an irreducibly complex system. The system is trivially reducible by removing password checks; arguably, the system is improved by removing these pointless checks. You could argue that the collection of passwords is irreducibly complex, but it doesn't seem that the passwords could be described as well-matched interacting parts. Furthermore, the removal of a password doesn't cause the collection of passwords to cease functioning: it remains a collection of passwords, just one with diminished capacity. It will fail within the new robot, but only by reference to systems external to the password collection. In general, Rube Goldberg-like systems will not be irreducibly complex, because they contain many unnecessary steps; to be irreducibly complex, a system requires multiple necessary steps. Winston Ewert
