Uncommon Descent Serving The Intelligent Design Community

Solving the Origin-of-Life Problem


There are three main approaches in current origin-of-life studies – metabolism-first, replication-first, and membrane-first. The problem with each of these approaches is that it ignores the reality of irreducible complexity in self-replicating systems.

Our own InVivoVeritas has put together a nice blog post illustrating the problem. The functions of enclosure, construction-planning, power, fabrication, gateway, and transport are all required for a minimal functioning cell. As InVivoVeritas shows, the other proposals not only don’t work, they suffer from theoretical problems that prevent them from serving as precursors of the larger system.

Naturalists always insist that they are just “following the evidence”. But, especially here, it seems to me that they are dictating the evidence. The assumption is that everything must start from smaller units and build up. Why must that be true? Why could it not start from a larger unit first? The only evidence of cells that we have anywhere is fully functional cells (and, in fact, the earliest cells highly resemble modern ones). The theoretical evidence is that all of the components are necessary. Therefore, every bit of evidence we have supports InVivoVeritas’ conclusion – that only full-replication-first systems are likely to work.

This doesn’t mean that people with other ideas shouldn’t pursue their research. It just means that people should stop pretending that the evidence is in their favor.

One other thing to keep in mind: despite claims to the contrary, evolutionary theory is not separable from the origin of life.

Anyway, I would be interested in hearing everyone’s reactions to InVivoVeritas’s arguments from his post.

Comments
Nightlight at #22
My point is that just because you can come up with some abstraction of such systems (MCM) that appears irreducibly complex in those terms, that does not prove that self-replicators are irreducibly complex in any absolute sense, in all possible conceptual frameworks.
If you contemplate in a little more depth the absolute constraints and barriers that a physical self-replicator must overcome, I am sure you will find that such machinery is not merely irreducibly complex but, I would say, extremely irreducibly complex – no matter how you try to “cut” it. Here are the barriers that a self-replicating material machine must overcome:
a. Construction and evolution of an envelope with variable geometry, as the machine ingests materials and grows in volume during the growing and cloning phases.
b. The transformative power to convert raw materials accepted through the enclosure gateways into materials suitable for construction, and later, through construction and assembly, into internal parts/components and full assemblies.
c. The selective capability of the enclosure gateways to accept from the outside environment only “good” raw materials – those compatible with the fabrication “needs” and “construction plans” of the self-replicating machinery – and to ignore or reject bad materials that might “poison” its inside. This capability is not trivial at all and amounts to a versatile material/substance identification feature of the machinery (or of the gateway).
d. The machinery may need to maintain internal scaffolding that preserves the 3D (spatial) structure and integrity of the enclosure and the organization of its interior. During spatial growth (to accommodate the new materials from which internal parts are constructed during the cloning phase), that scaffolding is itself subject to variable-geometry demands – demands that are no less challenging when the machinery enters the division phase, which completes when the grown enclosure and scaffolding transform into two daughter enclosures, volumes, and separate scaffoldings.
e. Certain ingested materials must be good for producing energy, and the machinery must be able to recognize, ingest, and separate them. The machinery must possess mechanisms and processes to generate energy (in a continuous, as-needed, or in-quanta manner – none of these alternatives being trivial or free of secondary concerns such as energy planning (availability when needed), distribution, and sufficiency to “feed” all energy-consuming processes (sub-systems) running inside).
f. There must be quite a sophisticated manufacturing (fabrication) capability inside this machinery. It must be able to fabricate the full inventory of parts and assemblies that exist in a “mature” machine, in order to create its replica.
g. Have you ever wondered what it means for such machinery to be autonomous and self-sufficient (the same can be said of its “progeny”)? Its autonomy is conditioned only on the availability of appropriate “nutrients” in its surroundings and a non-adversarial environment (mechanically, thermally, chemically). This autonomy is so demanding because all internal parts and processes must be guided and planned with astonishing precision.
h. There must be vast amounts of structured information of various categories stored and manipulated by the specialized components of this self-replicating machine.
It is unimaginable that such a machine could create accurate replicas of itself and coordinate the activities of all its parts, assemblies and processes without an extensive information base, as well as diverse means to store, encode/decode, communicate, process and copy information. And it is very important to observe that all those varieties of information must (pre-)exist in the original machinery (let’s say before any advance toward self-replication) and must be accurately copied into the daughter machines during the cloning and division phases of replication.
I think it is hard to contest that each of the machinery capabilities inventoried above must by necessity be present inside a physical machine that has the intrinsic ability to self-replicate. And all this argumentation was made to counteract your statement above. More focused reflection on what MUST be inside a self-replicating machine, and an analysis of its candidate components and capabilities, paints a picture of immense sophistication – extreme irreducible complexity rather than plain “irreducible complexity”. By logical consequence, any abstract representation (or model) of a cell that is an autonomous physical self-replicator must preserve and project this defining characteristic of the modeled object: (extreme) irreducible complexity. I don’t think that any alternative way to articulate the composition of the cell into parts and functions can avoid the presence of the 6 components (or something close to them) that I identified in the MCM. Rather, additional components and capabilities will need to be added to the Model as a more accurate and detailed model is desired.
It seems to me that you are significantly biased toward your own experiments and visions related to cellular automata and software experiments – a world which may be very interesting but has very limited points of contact with the physical reality of the self-replicating cell and the multi-dimensional intricacies of such machinery. You are preoccupied with conceptual frameworks for self-replication, and you argue that there might be varieties of such frameworks in which self-replication is not irreducibly complex. As johnnyb stated, even the “flat, one-dimensional” kind of self-replication that you are talking about requires generational rules and a non-trivial software execution substrate that together make up an irreducibly complex system. I found your introduction to Wolfram’s concepts interesting. I think, though, that you engage too much in unfounded speculations that lead you to a credo close to what materialists believe: that there is a magic place at an exotic dimensional level where some simple rules and some recursive procedures can create information (FSCI) and ultimate intelligence from (almost) nothing. In conclusion, it seems to me that the point you made in the quote above is rather weak, and I hope my response shows clearly why I think so.
InVivoVeritas
September 24, 2013 at 1:14 AM PDT
Computation by neural networks is based on anticipatory, aka teleological, algorithms (this post explained how a pattern recognizer network performs anticipatory computation). The main advantage of that type of anticipatory system (a network with adaptable links) is that it requires very low-complexity front loading: simple automata which can adapt links with "neighboring" automata based on some built-in punishment/reward. With a few simple rules of operation loaded upfront (including definitions of punishment & reward, plus rules for link adaptation) they spontaneously form self-programming distributed computers, just like a brain except with a different implementation of nodes and links. And of course, you always need some front loading for any theory that says something. There is no free lunch. But this one, based on the computational foundation of natural laws, is less expensive than other proposals. Unlike the orthodox ID, the NKS or Planckian-networks ID doesn't have an all-knowing, all-powerful designer. Instead, the NKS designer builds an initially simple system out of lots of very primitive automata (computing units) which have a built-in drive (via local, dumb punishment-reward scoring) to maximize net rewards/punishments. From there on, the creation itself computes the rest of the complexity we observe (the general mutual harmonization process). In such a system "evil" is simply a result of incompletely computed optimal harmonization, an intermediate result of computation. For example, it is our (human) job to figure out (compute) how to harmonize social organization to maximize the net reward/punishment score, i.e. humans are the latest computing technology, built by the older, simpler computing technology (cellular biochemical networks) to perform mutual harmonization at a larger scale (than that comprehensible by the cellular biochemical networks), such as human societies. In turn, the cellular biochemical networks are themselves a technology built by the "non-live" technology we presently understand as dumb physical-chemical systems (atoms, molecules, fields, etc). In fact the latter systems are themselves a previous generation of the computing technology: intelligent, anticipatory systems.
nightlight
September 23, 2013 at 1:46 PM PDT
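For concreteness, here is a toy sketch (in Python) of the kind of "adaptable links driven by punishment/reward" that nightlight describes in the comment above. It is nothing more than a reward-modulated Hebbian update on a tiny random network; the node count, learning rate, and target pattern are illustrative assumptions, not anything taken from his post.

    # Toy "adaptable links" network: a scalar reward signal nudges link weights,
    # in the spirit of the punishment/reward link adaptation described above.
    # All specifics (size, learning rate, target) are invented for illustration.
    import random

    N = 8
    weights = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]
    target = [1, 0, 1, 0, 1, 0, 1, 0]   # arbitrary pattern the net is "rewarded" for

    def step(state):
        """Each node sums its weighted inputs and fires if the sum is positive."""
        return [1 if sum(weights[j][i] * state[j] for j in range(N)) > 0 else 0
                for i in range(N)]

    def reward(state):
        """Scalar reward: +1 per node matching the target, -1 otherwise."""
        return sum(1 if s == t else -1 for s, t in zip(state, target))

    state = [random.randint(0, 1) for _ in range(N)]
    for _ in range(200):
        new_state = step(state)
        r = reward(new_state)
        # Reward-modulated Hebbian update: strengthen links between co-active
        # nodes when rewarded, weaken them when punished.
        for j in range(N):
            for i in range(N):
                weights[j][i] += 0.01 * r * state[j] * (2 * new_state[i] - 1)
        state = new_state

    print("final state:", state, "reward:", reward(state))

Whether a pile of such units ever amounts to the "self-programming distributed computer" nightlight envisions is exactly what is disputed in this thread; the sketch only shows what the claimed ingredients (simple units, local links, a reward rule) look like.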
Nightlight - Division of labor is a teleological phenomenon, which occurs when forethought is added to the system. I think part of the issue is that you may be confusing "repeating" with "replication"; your appeal to autocatalytic sets makes me think that. Autocatalytic sets *repeat*, but they don't *replicate*. Those are very different phenomena, and their distinction is important. There are two possible designs for a replicator: one is where a process examines the existing structure and then copies it; the other is where a process has a separate information store and builds from that. The important piece of replication is that if you make a change to the input you get a similar (still-being-copied) change to the output. If I mutate a gene, the mutation gets copied along with the rest. If I modify an autocatalytic set, it simply stops working. I think if you really get down to what is required to successfully self-replicate in the real world, you'll find that IVV's model is actually oversimplified – there are more necessary components than his model states. But I don't think you'll be able to find the function of self-replication without the systems he specifies.
johnnyb
September 23, 2013 at 12:56 PM PDT
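johnnyb's distinction above between *repeating* and *replicating* can be made concrete in a few lines: a template copier inherits a change made to its template, while a hard-wired cycle simply stops closing when one step is altered. The sequence and the three-step cycle below are invented purely for illustration.

    # Replication vs. repetition, per johnnyb's distinction above.
    # The genome string and the A -> B -> C cycle are illustrative only.

    def replicate(template):
        """Template copying: whatever is in the template, mutations included,
        shows up in the copy (heritable change)."""
        return list(template)

    genome = list("ACGTACGT")
    genome[3] = "A"                              # mutate the "parent"
    child = replicate(genome)
    print("mutation inherited:", child[3] == "A")     # True

    # A hard-wired autocatalytic-style loop: A -> B -> C -> A. It repeats,
    # but there is no stored description to mutate; alter one step and the
    # loop no longer closes.
    cycle = {"A": "B", "B": "C", "C": "A"}
    cycle["B"] = "X"                             # "modify the set"
    state, seen = "A", set()
    while state in cycle and state not in seen:
        seen.add(state)
        state = cycle[state]
    print("cycle still closes:", state == "A")        # False: it just breaks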
@johnnyb #25
Also, you misunderstand irreducible complexity. IC doesn't state that *this* system is the simplest which performs this function.
I didn't say that.
It states that there is a core of parts which are all simultaneously required. The existence of a simpler system does not refute IC.
What I said is that a different conceptual scheme (the way you break things into components and put them together) can produce the same "compact self-replicator" phenomenon without being "irreducibly complex" (hence it is simpler in that sense), i.e. without requiring all complete pieces to be built just so and put together at once to make the whole system work. In the alternative scheme, a distributed replicator is built first (very easy to do from nearly any random automata configuration, provided the rules of the game are right), which is then focused into a compact replicator via holographic-type algorithms. In other words, different aspects of the replicator, each aspect spanning all the final parts, are built separately. Of course, to make that go, one needs a distributed computational substratum (with additive computing capacity of the simple blocks, such as a neural network) at the foundations of the laws of nature, rather than our present laws of physics. To illustrate the difference, consider building a large computer chip with millions of transistors connected via a complex network of connections. The MCM model would insist that the chip be built by constructing each complete, working transistor separately, inserting it into the partially built chip and connecting it to the already present, complete and connected transistors. It would be impossible to build a chip that way. Instead, the chip is built in multiple phases, each phase laying down some common aspect (feature, attribute) of all transistors and their connections. Only when the final layer is placed down are all transistors and their connections complete and in the right places. In other words, the better scheme leverages the division of labor to turn a seemingly impossible job into a doable job.
nightlight
September 23, 2013 at 11:15 AM PDT
@kairosfocus #23
It seems to me that the issue is to set ourselves in a warm little pond or the like environment and from the chemistry and physics there, without intelligent design, arrive at an encapsulated, gated metabolic entity with an integrated code using a kinematic von Neumann self-replicator. For this, intelligently designed software entities are of only dubious relevance.
That presumes that the present fundamental laws of physics are the way it all works. As many theoretical physicists can tell you, that is unlikely to be so. Quantum Field Theory, which is our current foundation, is only an "effective" theory, i.e. like the ancient Ptolemaic theory of planetary motion in terms of epicycles. It works to an extent, but as more phenomena are described it gets increasingly cumbersome, complex and ad hoc, with ever more hand-put epicycles needed to adjust the theory to fit the latest observations. Today we have a fundamental theory that routinely yields infinities as results, which then need to be fixed by subtracting some other ad hoc infinities, conveniently picked just so that the finite terms remaining fit the experimental data. We have dozens of hand-put free parameters (so-called physical "constants") that get adjusted and averaged every few years by a committee for the best fit of the theory to the latest experimental data. We have open-ended quantum measurement and locality problems still debated as vigorously as in the 1920s, when Einstein and Bohr started the disagreements. We have astrophysics and a theory of gravity that don't mesh with quantum theory. Plus, whenever gravity doesn't agree with the astrophysical data: if the objects seem to attract more than theory predicts, we insert invisible, undetectable "dark matter" that helps it out; but when the objects don't seem to attract as strongly as theory predicts, we declare that it is due to invisible "dark energy" that is causing repulsion, canceling out the predicted attraction. We might as well call them all 'spirits of nature'. Regarding ID, there is also the problem of fine tuning of physical laws for life, i.e. the physical laws and "constants" seem to be poised on the tip of an extremely sharp needle, where the smallest change would make the whole universe unravel and either collapse into a point or instantly explode out into emptiness. What all that points to is that what we now call fundamental laws of physics cannot be a suitable foundation for a coherent explanation of all the rest, including the origin and evolution of life. A number of theoretical physicists are pursuing various pregeometries: more fundamental theories whose "effective" approximation at large scales would appear as the present "fundamental" physical laws, but without dozens of mysterious free fudge factors, infinities, dark matter and energy, or any other overly flexible nature spirits. Stephen Wolfram, with his New Kind of Science (NKS), is to my eyes the most promising and most radical approach in this direction, postulating the computational foundation of the laws of nature (brief sketch of the main idea). The basic space-time and the rest of physics is built at the Planck scale as a random network of very simple nodes (automata) with some (still unknown) rules of operation. The rest of physics, chemistry, biology,... is computed by the network itself (which is a universal computer). Hence, the universe works like the Matrix, except at a much deeper level than the Hollywood version. The basic building blocks need not be very smart (Wolfram has shown that even very simple rules yield universal computers), which avoids the problem of infinite regression of ever smarter 'initial designers' needed to design the previous 'initial designers'. The amount of front loading is thus minimal, relying on the system itself and its additive intelligence (computational capacity) to work out in detail the far more complex and still unfolding harmonization process.
Of course, any system needs some front loading (or the basic postulates from which the rest is derived). The beauty of the computational approach is that one doesn't need a vast, let alone infinite, amount of intelligent or complex front loading (the all-knowing intelligent agency) -- the system itself works out (computes) the observed complexity. The computational approach is the most promising one because it offers a single coherent explanation for both the laws of physics, chemistry,... and the complexity of the observed phenomena. It also explains why the approach you insist on, of trying to derive the origin of life and its evolution from our present fundamental physics, cannot work. In the computational approach our laws of physics are not fundamental but merely capture some simple regularities of the much more complex unfolding patterns being continuously computed by the underlying substratum. Hence, in the computational approach, life and biology are not reducible to the laws of physics, which is what ID insists needs to be done, else this or that kind of all-knowing, all-powerful designer must have done it. In the NKS scheme, the designer is neither all-powerful nor all-knowing. He has put in very simple elemental building blocks, each of very modest computational power, but which can (and, by the rules of the game of maximizing pleasure/pain, are compelled to) combine to yield more powerful distributed computers, which in turn combine further to create even more powerful computing technologies (downstream including life, cells, us and our computers in our corner of the universe).
nightlight
September 23, 2013 at 10:44 AM PDT
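The claim nightlight leans on above, that very simple rules can support universal computation, is usually illustrated with Wolfram's Rule 110, an elementary (one-dimensional, two-state) cellular automaton that has been proven Turing-complete. A minimal sketch follows; the width, number of steps, and single-cell seed are arbitrary illustrative choices and have nothing to do with any specific Planck-scale proposal.

    # Elementary cellular automaton; Rule 110 is the classic example of a
    # very simple rule capable (in principle) of universal computation.

    def step(cells, rule=110):
        n = len(cells)
        out = []
        for i in range(n):
            left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            idx = (left << 2) | (center << 1) | right   # neighborhood encoded as 0..7
            out.append((rule >> idx) & 1)               # read that bit of the rule number
        return out

    width, steps = 64, 32
    cells = [0] * width
    cells[width // 2] = 1                               # single "on" cell as the seed
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

Note that universality here only means the rule can carry out arbitrary computation if fed an elaborately prepared initial condition; it says nothing by itself about whether unguided initial conditions would ever do so, which is the point kairosfocus and InVivoVeritas press elsewhere in the thread.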
Also, you misunderstand irreducible complexity. IC doesn't state that *this* system is the simplest which performs this function. It states that there is a core of parts which are all simultaneously required. The existence of a simpler system does not refute IC.
johnnyb
September 23, 2013 at 10:00 AM PDT
Nightlight - The problem is that without the systems that IVV specifies, the hills don't lead *to* anywhere. Taking your analogy, it is like looking at the waves nearby and assuming that the people on the cliff must have surfed to the top. In order for any sort of evolution to even have the possibility of occurring, one needs an arbitrary replicative capacity. Without this, there are no transitions from one form to another - it makes the "next" step no more probable than just going there directly. Another discussion from a more theoretical/mathematical angle (which seems to be your preference) is Voie's "Biological Function and the Genetic Code are Interdependent", which shows the issue from a Gödelian perspective. But, as IVV points out, there are many other necessary factors, such as containment, environmental interactions, and power, that must be dealt with as well.
johnnyb
September 23, 2013 at 9:58 AM PDT
IVV & NL: It seems to me that the issue is to set ourselves in a warm little pond or the like environment and from the chemistry and physics there, without intelligent design, arrive at an encapsulated, gated metabolic entity with an integrated code using a kinematic von Neumann self-replicator. For this, intelligently designed software entities are of only dubious relevance. All, backed by actual observations. It rapidly becomes evident that the system is massively irreducibly complex and riddled with functionally specific complex information and/or associated organisation, where we have to explain relevant codes, coded stored info in string data structures, and algorithms. The only empirically warranted, observed source of such is that causal process known as design, which we habitually associate with designers. For the same reason that we detect arson from signs and thence associate arson with arsonists. KF
kairosfocus
September 23, 2013 at 5:16 AM PDT
@InVivoVeritas #14
I think our interests are totally divergent. You are focusing on using cellular automata for studying some exotic, vaguely defined self-replication in a world where individuality (as in cells, micro-organisms or individuals of the biological world) is diluted or totally disappears. My interest is in creating a minimum model of the cell where, it is true, self-replication is a very important capability.
The specific issue I was aiming at is the claim of irreducible complexity of self-replicators, not the issues of completeness or merits of the MCM model in itself. That is how the thread starter framed the issue. My point is that just because you can come up with some abstraction of such systems (MCM) that appears irreducibly complex in those terms, that does not prove that self-replicators are irreducibly complex in any absolute sense, in all possible conceptual frameworks. They are irreducibly complex only if you also require that the allowed elemental building blocks must be those you conceived. I am simply pointing out that a different kind of building block, more finely grained, yields self-replicating cells through a much more indirect, but algorithmically much simpler, path. While the cellular automata (or more generally, adaptable networks) do not directly produce replicators that appear or work like live cells (spatially compact replicators), they are only an intermediate phase, which achieves distributed replicators with high probability (almost always) from a random initial state of the automata universe. The second phase, which is algorithmically analogous to holographic projection, can turn the distributed replicators into the final compact replicators to which live cells belong. The issue here is thus analogous to standing in front of a tall cliff with a person in a swimming suit, and no gear of any kind around, waving at us from the top. You are pointing out that since there are only a couple of very high footholds on the front face of the cliff, there is no way this guy could have reached the top without a helicopter or some other external help (the analogue of supernatural intervention along the way). In turn, I am pointing to a series of mild hills coming up to the top from the other side of the cliff and saying that's how he could have gotten there the way he is, without climbing gear, a helicopter or any other help from outside.
nightlight
September 23, 2013 at 2:35 AM PDT
I am turning now back to the original topic of this thread (please see: A Minimum Cell Model and the Origin of Life Problem). Why might a cell model be useful?
The Rationale and Usefulness of a Cell Model
I am going to use this opportunity to explain the motivations and potential usefulness of a cell model like the Minimum Cell Model. There seems to be no room for debate that the typical mono-cellular organism (in general, a cell) can be legitimately and technically seen as a mechanism or machinery (of quite amazing complexity, by the way). This statement is supported by the fact that the cell is effective in producing accurate copies of itself through a process of self-replication supported by metabolism. It is natural to think about constructing a model of the cell as an individual mechanism that ingests material from its environment and uses this material to grow and ultimately divide, producing two daughter cells identical with their mother cell. The daughter cell accurately preserves the capabilities of the mother cell to metabolize input materials and to create its own identical copies through self-replication (a composition of cloning and division phases). Observing basic known facts about the internals of the biological cell, it is appealing to make the effort to identify the types of components that make up the cell, together with the roles and functions through which they jointly achieve successful self-replication. The objective here is to characterize the cell (the mechanism) as a composition of components that cooperate and interact in an orderly way to achieve the observed results. How shall we proceed to identify the Components of the Model and their functions? By observing the following guidelines:
• Identify Components of the Model that map very well onto known Elements of the Biological Cell. [It should be hard/impossible to deny that the Component models real Cell Elements.]
• Identify for each Component Functions that map well onto the known functions discovered within the cell for the corresponding Cell Elements. [It should be difficult/impossible to deny that the specified Component Functions manifest in the cell and are associated with the corresponding Cell Elements.]
• Remain at a high level of granularity in identifying the Model Components and their Functions. [In order to construct a Minimum Model.]
• Verify that all major Cell Elements and all major Cell Functions are represented in the Model, through corresponding Components and their Functions. [Model adequacy and completeness.]
• Identify major typical Interactions between the Model Components and verify that these Interactions correspond well to the represented Cell Element Interactions. [Cell processes represented in the Model.]
• Verify that the Model contains Components (with their Functions) that represent the information storage and information processing that is known to occur in the cell, or that there is good ground to speculate occurs in the cell. [Major role of information in cell functioning.]
The guidelines above should result in an Adequate Model with good representational power for the key elements of the Cell. There is a strong intuition that the cell is a Mechanism of Irreducible Complexity [see Michael Behe’s Darwin’s Black Box]. The Minimum Cell Model may give more concreteness to perceiving the Cell as Irreducibly Complex Machinery and to the known invariants and rules that govern such a system.
The constructed Model – the Minimum Cell Model (MCM) – will help in giving concreteness (at a simplified but realistic level) to perceiving, understanding and operating with the Cell as a Mechanism with well-identified parts and interactions that achieves the goals of metabolism and self-replication. The availability of the MCM may facilitate the following activities:
• Providing a better understanding of the cell seen as a marvelous mechanism structured as a composition of sub-systems, each one with a definite set of roles and responsibilities.
• Elaboration of more detailed cell models.
• Getting directions for future research.
• Providing a more objective, technical foundation for research on various topics like origin of life, mutations, common descent, evolution, etc.
InVivoVeritas
September 22, 2013 at 10:49 PM PDT
Nightlight at #16
Your entry focuses again on cellular automata and your experiments and speculations about the applicability of cellular automata to simulating or investigating some platonic, abstract replication in no way similar to the replication exhibited by the biological cell. I think our interests are totally divergent. You are focusing on using cellular automata for studying some exotic, vaguely defined self-replication in a world where individuality (as in cells, micro-organisms or individuals of the biological world) is diluted or totally disappears. My interest is in creating a minimum model of the cell where, it is true, self-replication is a very important capability. For me it is important that the model be adequate and correctly represent the main characteristics, composition, structure and behavior of the cell as a physical entity that is always clearly isolated from its environment by an envelope, named the enclosure in the model. It seems to me that, at least for the reasons below, cellular automata are not adequate for modeling the self-replication of the biological cell (i.e. real-life self-replication):
- The lack of a border (enclosure) simulating (representing) the membrane of the cell.
- The inadequate (limited) mapping of the 2D automaton cells to the elements of a physical cell (no matter what granularity level of representation is selected).
- It is not clear whether the admission of new “stuff” inside the cell can be adequately represented by automaton rules.
- I have no idea how the rules defined for the automaton can be mapped in any way to processes or activities taking place in a cell.
InVivoVeritas
September 22, 2013 at 10:19 PM PDT
Mung at #18
Evolutionists and materialists are categorical in rejecting the Miraculous (like a creator) as a starting hypothesis for their science. But they are very comfortable with the Miraculous when it is part of their just-so ‘scientific’ stories.
InVivoVeritas
September 22, 2013 at 10:10 PM PDT
I have always wondered how some 'system' became enclosed and yet the enclosing 'membrane' just happened to allow through the barrier those 'nutrients' essential for the continuance of the enclosed system. The "cell" part of "cellular" automata seems at best mere metaphor and at worst misleading.
Mung
September 22, 2013 at 3:03 PM PDT
#13: For crying out loud, Philip! I was being sarcastic.
Axel
September 22, 2013 at 10:25 AM PDT
@InVivoVeritas #14
Thanks for the detailed response. Regarding the real-world chemistry, any collection of molecules described in abstract terms of conversions between types of molecules forms, in the symbolic space, some kind of automata system (albeit not on a simple, regular rectangular grid like those I played with). The experiments with artificial automata merely explore the design space of such systems. Once interesting abstract systems are found, one can pursue real-world implementation. From my experimentation, the big surprise was how many different rules produce "interesting" behaviors. The most common "interesting" cases, arising from 4-state automata with cyclic transitions of the type 0 -> 1 -> 2 -> 3 -> 0 (or variants with graphs other than a 4-node ring), produced patterns very similar to real-world BZ reactions or to the spirals common in "reaction-diffusion" systems. The crawling, amoeba-like self-replicating blobs I mentioned were much less frequent in the rule space, requiring more finely tuned life-death functions that had 3 distinct sections of the mapping: under-crowding (too little of the "good stuff"), an optimal region (just right), and over-crowding (too much of the "good stuff"). The measure of "good stuff" (computed from the neighborhood) that produced the most interesting blobs was variety, i.e. sums of various kinds of differences between the 8 neighbors around the current cell. In the real world these 'sums of differences' would correspond to various kinds of gradients in the substratum. Both design spaces, the symbolic one and the real-world one, are for all practical purposes unlimited in the variety available. While real-world chemistry (depicted in the above abstract form) is only a subset of the general abstract space, one need not think of automata as only molecules & their kinetic equations. They can be anything: bacteria or viruses, predator & prey animals with food webs, humans, companies, elements of economic networks, web memes in the culture space, etc. Similarly, the computing model being a regular rectangular grid is merely an artifact of programming convenience. More general abstract models are networks (a 2-D grid is merely a special case of a network with 4 or 8 connections to the nearest neighbors). Since I was a physics grad student (with several summer weeks of nothing else to do and a new PC), I wasn't even thinking about chemistry but about some pregeometry, some unknown elemental building blocks underpinning, at the Planck scale, our regular physical space-time, "elementary" particles and fields. Since such automata can form a 'universal computer', any behavior I could think of and describe in a finite set of rules was fair game. After all, I was peeking into an unknown realm, imposing no a priori constraints other than looking for maximum simplicity at the most elemental level. Note that such simplicity at the ground level (e.g. the Planck scale at ~10^-35 m) has plenty of room to combine into systems of enormous complexity well before our level of elementary particles (~10^-15 m). There are 20 orders of magnitude between these two levels, which is more building blocks than between us (humans & human technology) and our 'elementary' particles.
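nightlight does not give his exact rules, but the BZ-like spirals he mentions are the classic behavior of a cyclic cellular automaton, in which a cell advances 0 -> 1 -> 2 -> 3 -> 0 once enough of its neighbors already carry the successor state. Here is a minimal sketch of that textbook construction, with grid size and neighbor threshold chosen arbitrarily; it is offered only so readers can see the kind of experiment being described, not as a reconstruction of his code.

    # 4-state cyclic cellular automaton (textbook construction, not nightlight's
    # actual rules): a cell advances to the next state when at least THRESHOLD
    # of its 8 neighbors are already in that successor state.
    import random

    SIZE, STATES, THRESHOLD = 40, 4, 3
    grid = [[random.randrange(STATES) for _ in range(SIZE)] for _ in range(SIZE)]

    def step(g):
        new = [row[:] for row in g]
        for y in range(SIZE):
            for x in range(SIZE):
                succ = (g[y][x] + 1) % STATES
                count = sum(
                    g[(y + dy) % SIZE][(x + dx) % SIZE] == succ
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)
                )
                if count >= THRESHOLD:
                    new[y][x] = succ
        return new

    for _ in range(100):
        grid = step(grid)
    print("\n".join("".join(".:*#"[c] for c in row) for row in grid))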
Given that such systems built from the simplest two-state automata can compute anything that is computable, hence simulate any conceivable, finitely describable behavior at the higher level (including anything that life does), they can also, as a side effect, as one aspect of their output patterns, replicate any laws we currently consider fundamental. Another post at UD describes this "Planckian Network" approach and its implications for the fine tuning of physics and the origin and evolution of life in more detail, with additional links, so I leave those aspects at that. Returning to our discussion proper, the gratuitous constraint that your MCM approach imposes on the basic replicators is the requirement that the functional blocks (as you happened to conceptualize them) must be mirrored in the spatial organization of the real system -- the functional blocks in your particular conceptualization (though there are countless conceivable ways to go about it) have to be replicated by the spatial and physical organization of the replicator. Such a constraint automatically excludes much more general distributed self-replicating systems whose functional blocks are fragmented into basic building blocks of the substratum and spread around, and where what is being replicated is some spread-out activity pattern on the substratum. Consider for example your first requirement, "Enclosure", which defines the "system". That kind of replicator is a spatially compact, distinct blob, with "mine" and "others" belonging to completely disjoint spatial regions. If you drop that constraint, you can have much more general "systems" which overlap and permeate each other, where the same elemental building blocks of the substratum can serve multiple roles in different "replicator systems" simultaneously. The point where we differ comes down to what exactly the "(replicator) system" is. Inspected more closely, a "system" is actually an epistemological category, not an ontological category as you treat it. Namely, a "replicator" is one of the conceivable ways to organize all the events going on in the universe into a more compact description or conceptualization. Such descriptions simply capture certain kinds of redundancy (mainly repetitiveness) in the overall pattern of all events. But there are countless distinct ways to simplify and compactify the descriptions and conceptualizations. This arbitrariness in the definition of a "replicator system" became obvious to me when I was playing with those self-replicating automata blobs. Working on a small grid, the daughter blobs would split and move away from the parent, but then, upon reaching the edge of the grid, they would wrap around and come in from the other side, colliding with the parent and most often destroying both. But I wanted to see what would happen if the grid were huge or infinite: how long the blobs would live and what kind of birthing patterns they would yield. So I decided to capture each daughter blob upon birth and separation, copy it into its own private grid, and remove it from the original grid, so the parent blob would have the original grid all to itself. Since each blob always moved in one direction, this procedure exactly simulates what would have happened if the grid were much larger. But then a problem arose -- what exactly is the blob? While visually it seemed obvious, when looked at closely, the boundaries of each blob were fuzzy, getting sparser as you move farther from its geometric center, like a cloud of vapor swirling around a more solid core.
Changing the state of a cell (by chopping it out and assigning it to the daughter) that might soon interact with the rest of the original blob would interfere with the "natural" laws of this artificial universe, hence that was a no-go approach. Eventually, I settled on waiting until the separation between the two fuzzy blobs became large enough that no common cells could arise that could interact with the other blob (within the finite time of each experiment), i.e. when the two fuzzy clouds were "safely" separated. As I was playing with the blob-separation code, trying to get it to work right (that was my first computer and I was just learning to program), it dawned on me that a "blob" or "system" is simply a shorthand for some algorithm which selects a subset of elemental automata. I could have equally well defined a "system" some other way, via a different separation algorithm, and it would have contained some other subset of elemental automata, exhibiting some other regularities of the "system". The whole "system" and "replicator" concepts were in the eye of the beholder, subjective creations of the observer out of the patterns of all events on the grid. Going back to your MCM requirements, a far more general, less constrained (hence easier to build from simple components) replicator system can be made from automata in which all functions are allowed to be spread out spatially. There is no "boundary" other than whatever some algorithm decides to classify as belonging to "replicator 1", "replicator 2", "replicator 3", and so on. The automata belonging to one replicator can be doing something else in another one permeating the first. The input of one "replicator" can simultaneously be the output of another, or of several others, etc. The main requirement on the overall pattern unfolding on the substratum is that there is some repetitiveness which, through a suitable "system"-selection algorithm, can be interpreted as the persistence of some "system". To get the "replication" phenomenon, one needs a wave-like repetitiveness, e.g. a wave-like pattern breaking into two or more similar wave-like patterns, like a wave spreading on a lake hitting a rock and bouncing a reflection back, which then overlaps and permeates the original wave. The same molecules of water, each doing its little thing, thus simultaneously serve two waves unfolding at the larger scale. Obviously, the energy needed for all that wave-pattern creation may be pumped into the system via winds or rivers entering or leaving the lake. The selector algorithms that define the "systems" would then yield "systems" that "replicate", all of which is of course in the eye of the beholder, i.e. an artifact of the particular "system" separator algorithms. Of course, such a system doesn't look anything like the compact molecular blobs typically imagined as the replicators of proto-life. But neither does a hologram of a ball look like a ball, yet when you shine the right kind of light beams through the hologram, an image of a compact 3-D ball appears floating in front of your eyes. In the above algorithmic perspective, the light beams which decode the hologram into the original 3-D object are simply an analog optical computer performing the same kind of "system separating" algorithm that those separator algorithms performed on the automata.
The two kinds of computations and algorithms are merely implemented on different kinds of hardware, but they both perform the same type of function -- algorithmically combining some spread-out dots (elements) into a spatially compact "system". With the hologram, which is an analog computer & algorithm, the computed "compact system" is the 3-D image of a compact ball, while with the automata systems, the computed "compact system" is the data structure listing some subset of cells from the grid as computed by the separator algorithm. Hence, what appears to us as a spatially compact self-replicator may well be implemented as the 'holographic' image of spread-out replicators operating at some underlying level. Even if the ultimate objective is a compact, self-contained replicator system, one need not go about it by imposing such a spatial-compactness constraint throughout, at all levels of the design and construction. This is analogous to building an arch bridge out of separate rocks. If you insist that at all stages of the construction you have to have empty space below the incomplete arch (as it appears in the final bridge), the task will appear irreducibly complex, since the arch becomes stable only when all the rocks are in place. But if you build the arch on top of a mound of earth, then, when all the rocks connect, hose the earth away with water, and the final arch comes out easily. Returning now to related secondary questions, such as how energy and material transfer to/from the system, etc.: if you go down to some underlying level such as the Planck scale, below what we consider our 'elementary particles' and laws of physics, then you can start with an arbitrary system of automata (or rather a network, which is a more general and more powerful model of this type). There is no "energy" or "matter" in our sense at this level, only the rules of operation of the automata. The rules of the automata are presently unknown, hence we can play with and explore anything we can conceive. If you allow for sufficiently large systems of this type, built out of the simplest elemental building blocks, such a system can in principle compute any conceivable finite amount of behavior, including the mentioned holographic compaction algorithms, as well as the other patterns and regularities of the resulting holographic images, understood presently as our laws of physics, chemistry, biology, etc. As sketched in that earlier thread about Planckian networks, that kind of network with adaptable links, mathematically modeled via neural networks, functions as a self-programming distributed computer. Assuming its building blocks to be at the Planck scale yields a computer which is, 'pound for pound', 10^80 times more powerful than the ultimate computing technology we could ever build out of our "elementary" particles as its basic cogs, i.e. at the theoretical end point of our Moore's law. Hence, for all practical purposes, the output of the computations by such an underlying Planckian network would appear to our cognitive apparatus as the result of some godlike intelligence, unimaginably smarter than us.
nightlight
September 21, 2013 at 10:19 PM PDT
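nightlight's claim in the comment above, that a "blob" or "system" is just the output of some separator algorithm, can also be illustrated directly: the same set of live cells comes out as two "systems" or as one, depending on the linking distance the algorithm happens to use. The coordinates and distances below are illustrative, not taken from his experiments.

    # A "system separator" of the kind described above: which cells count as
    # one blob depends entirely on the chosen linking distance.
    from collections import deque

    def blobs(live_cells, link_dist):
        """Group live cells into connected components; two cells are linked
        when their Chebyshev distance is at most link_dist."""
        unvisited = set(live_cells)
        groups = []
        while unvisited:
            seed = unvisited.pop()
            group, queue = {seed}, deque([seed])
            while queue:
                x, y = queue.popleft()
                near = {c for c in unvisited
                        if max(abs(c[0] - x), abs(c[1] - y)) <= link_dist}
                unvisited -= near
                group |= near
                queue.extend(near)
            groups.append(group)
        return groups

    cells = [(0, 0), (1, 0), (1, 1), (5, 5), (6, 5)]
    print(len(blobs(cells, link_dist=1)))   # 2 "blobs"
    print(len(blobs(cells, link_dist=4)))   # 1 "blob": same cells, different "system"

Whether that observer-relativity really dissolves the irreducible-complexity argument, or merely relocates it (as johnnyb and InVivoVeritas argue), is the substance of the exchange; the code only pins down what the "separator algorithm" talk refers to.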
NL, I keep seeing a need for a KINEMATIC self-replicator facility coupled to a metabolising entity (with garbage collection, breakdown and elimination), encapsulation and gating. The software automata are not addressing the full realities relevant to the living cell. Cf discussion at 101 level here. KF
PS: HDH, see how technical exchanges lead to a much less contentious matter?
kairosfocus
September 21, 2013 at 3:39 PM PDT
nightlight at #4
First, thanks for taking the time to read through my Minimum Cell Model (MCM) blog and providing your thoughts and criticism. I am trying to respond to your points in a rather systematic manner below.
Cellular automata serve as a mathematical abstraction which models real world reactions such as autocatalytic sets.
This is too strong a statement (assuming that real autocatalytic sets exist). The cellular automata do not and cannot model certain aspects of ‘real world reactions’ like the energy aspects of chemical reactions, availability and spatial distribution of reactants, etc. Your next statement – I quote below – is much more modest but accurate this time.
Their rules capture one aspect of the behavior of the full physical system — which molecules transform into which other and in which combinations of neighborhoods.
So, as you yourself observed above, cellular automata can be used for experimentation, on a computing substrate, with very limited aspects of some exotic chemical reactions (exotic because, again, it is not clear to me whether these autocatalytic sets exist, how many varieties of them there are, etc.).
Hence, the cellular automata do model how such self-replicating system can arise from a real world autocatalytic sets of molecules.
This is an unfounded extrapolation. Again, the cellular automata model which molecules may transform into each other (assuming such transformations are realistic from a chemical perspective). The cellular automata model does not and cannot help with any of these “real world” aspects:
- How is the necessary energy provided at the place of reaction?
- How are the input chemical reactants procured and brought together to the “place” of reaction?
- How are the input reactants, as well as the output reactants, physically constrained/isolated before and after the reactions (can other, undesired reactants come into the same space and compromise the expected chemical transformations)?
- Do the modeled chemical reactions have any specific place in a larger “picture” of cell auto-replication or metabolism?
The main difference between that style of self-replication model and the one in the blog is that the latter imposes an additional gratuitous constraint on the system -- it requires that the functional and spatial blocks of the actual system physically mirror (simulate) the conceptual categories that the author chose as his conceptualization tool. I.e. he is imposing his mental picture of the process as a constraint on the real system (on how it needs to spatially break down its functions).
The cellular automata, unencumbered by any gratuitous constraint, have a very limited and modest modeling capability for real-life biological cell processes. I would appreciate it, though, if you could be more specific and indicate which of the constraints in the proposed Minimum Cell Model (MCM) appear to you to be gratuitous. I will make any suggested changes to the model to make it better as a result of responding to well-founded criticism. I am going to try to help you isolate the constraints that are gratuitous by briefly enumerating them (also for the benefit of those who did not have the time to read the blog):
Enclosure (E): It seems to me that any realistic cell model – no matter how much simplified – cannot do without such an important component. The enclosure gives the cell identity and provides isolation and protection of its other internal components and processes from the cell’s environment.
Gateway (G): Is there a way to think of a cell without one or more openings (gateways) that open to admit materials from outside the cell into the enclosure (or to push “refuse” materials out of the cell)? In my view this is not a component that deserves the “gratuitous” qualification.
Power Generator (P): This is the component in the MCM responsible for transforming certain input materials accepted through the Gateways into the energy needed to power various activities and processes that happen inside the cell. Can we get rid of the Power Generator (P) type of MCM component as gratuitous?
Transporter and Assembler (T): This is the cell component type responsible for transportation within the cell of the different materials that are needed for various cell processes like energy generation, fabrication, etc. This component is also responsible for the assembly of various simple parts fabricated by the cell into the more complex structures that emerge, for example, during the cloning phase of cell self-replication. Is the Transporter and Assembler (T) type of MCM component dispensable (i.e. not needed)? It is hard to support such a view.
Fabricator (F): The Fabricator type of MCM component is responsible for the fabrication of cell parts and items that are needed for normal cell metabolism and for cloning replica parts of the cell during the cloning phase of cell self-replication. It is pretty hard to think of a cell model that does not possess such a component type, with its associated capability of versatile fabrication.
Construction Planner (CP): The Construction Planner component type is responsible for storing, in certain ways, detailed construction plans of all component types of the cell and their parts, as well as the construction plan of the whole cell. This component is also responsible for using the information in its construction plans to coordinate the activities of the various component types in the cell and to coordinate the overall progress of the cloning and division phases of cell self-replication. The Construction Planner (CP) maps to known components of the biological cell (and quite possibly to elements of the cell that are so far vaguely known or not known at all). Thus the CP maps, at a minimum, to the DNA as a protein fabrication plan and possibly to the 160,000 initiation machines mentioned in this blog entry: Origins of Genomic ‘Dark Matter’. Basically the CP maps to many of the biological cell elements that store and/or process information.
It seems that this area – where the ‘body plan’ of a cell resides, where the higher-level ‘construction plans’ of the cell and of its organelles are, and how they are assembled together to get to a ‘mature’ cell – is less well understood and still requires significant research. nightlight, I fully accept that the MCM is a grossly simplified cell model and that the way I “partitioned” the cell into component types may be subjective and can be improved, refined or just re-stated. It is more difficult for me to understand why you think that any of the 6 component types in the model – listed above – is superfluous or unnecessary. Please help me improve the MCM model with your specific feedback on this or other aspects. I believe that a simplified cell model like the MCM has the potential to help conversations on topics in cell biology be anchored on an empirical basis, with mutually understood terms. My hope is that a model like the MCM will help elucidate thinking and discussions on important topics like self-replication, origin of life, and evolution.
InVivoVeritas
September 21, 2013 at 3:32 PM PDT
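For readers trying to keep the MCM terminology straight, the six component types InVivoVeritas lists above can be written down as a small data structure. This is only one possible rendering; the class names and one-line role descriptions paraphrase his post, and the is_viable predicate simply restates his claim that removing any one component type breaks the model.

    # One possible rendering of the six MCM component types described above.
    from dataclasses import dataclass, field

    @dataclass
    class Component:
        code: str
        name: str
        role: str

    MCM_COMPONENTS = [
        Component("E",  "Enclosure",             "gives the cell identity; isolates and protects its internals"),
        Component("G",  "Gateway",               "selectively admits raw materials, expels refuse"),
        Component("P",  "Power Generator",       "converts ingested materials into usable energy"),
        Component("T",  "Transporter/Assembler", "moves materials internally; assembles fabricated parts"),
        Component("F",  "Fabricator",            "manufactures parts for metabolism and cloning"),
        Component("CP", "Construction Planner",  "stores construction plans; coordinates cloning and division"),
    ]

    @dataclass
    class MinimumCellModel:
        components: list = field(default_factory=lambda: list(MCM_COMPONENTS))

        def is_viable(self):
            """IVV's irreducible-complexity claim stated as a predicate: the model
            counts as a self-replicator only if every component type is present."""
            return {c.code for c in self.components} >= {"E", "G", "P", "T", "F", "CP"}

    cell = MinimumCellModel()
    print(cell.is_viable())                                    # True
    cell.components = [c for c in cell.components if c.code != "CP"]
    print(cell.is_viable())                                    # False: drop one type and the model fails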
But Axel, The Mind Is Not The Brain - Scientific Evidence - Rupert Sheldrake - (Referenced Notes) - video https://vimeo.com/33479544
bornagain77
September 21, 2013 at 2:59 PM PDT
We speak of someone being a Renaissance Man, but how much more apt that title to matter: Renaissance Matter. One must never underestimate its scientific genius. It might not quite scale the Olympian heights of random chance, but the way it has produced human minds is still pretty awesome.
Axel
September 21, 2013 at 2:52 PM PDT
“It suggests that algae knew about quantum mechanics,, billion(s) of years before humans,” says Scholes. Well, of course, the algae may have had a hand in producing our minds - if only as consultants.
Axel
September 21, 2013 at 2:45 PM PDT
Thus not only is God somehow directly involved in the formation of all the biological molecules of life on earth, but He is also ultimately responsible for feeding all higher life on earth since all higher life on earth is dependent on 'non-local' photosynthesis for food. Verse and Music:
John 1:4 In him was life, and that life was the light of all mankind. Natalie Grant - Alive (Resurrection music video) lyric: "Death has lost and love has won!",, http://www.godtube.com/watch/?v=KPYWPGNX
Further notes on 'minimal' complexity:
The essential genome of a bacterium - 2011 Excerpt: Using hypersaturated transposon mutagenesis coupled with high-throughput sequencing, we determined the essential Caulobacter genome at 8bp resolution, including 1012 essential genome features: 480 ORFs, 402 regulatory sequences and 130 non-coding elements, including 90 intergenic segments of unknown function. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3202797/pdf/msb201158.pdf
Life’s Minimum Complexity Supports ID - Fazale Rana - November 2011 Excerpt page 16: The Stanford investigators determined that the essential genome of C. crescentus consisted of just over 492,000 base pairs (genetic letters), which is close to 12 percent of the overall genome size. About 480 genes comprise the essential genome, along with nearly 800 sequence elements that play a role in gene regulation.,,, When the researchers compared the C. crescentus essential genome to other essential genomes, they discovered a limited match. For example, 320 genes of this microbe’s basic genome are found in the bacterium E. coli. Yet, of these genes, over one-third are nonessential for E. coli. This finding means that a gene is not intrinsically essential. Instead, it’s the presence or absence of other genes in the genome that determine whether or not a gene is essential.,, http://www.reasons.org/files/ezine/ezine-2011-11/ezine-2011-11.pdf
To Model the Simplest Microbe in the World, You Need 128 Computers - July 2012 Excerpt: Mycoplasma genitalium has one of the smallest genomes of any free-living organism in the world, clocking in at a mere 525 genes. That's a fraction of the size of even another bacterium like E. coli, which has 4,288 genes.,,, The bioengineers, led by Stanford's Markus Covert, succeeded in modeling the bacterium, and published their work last week in the journal Cell. What's fascinating is how much horsepower they needed to partially simulate this simple organism. It took a cluster of 128 computers running for 9 to 10 hours to actually generate the data on the 25 categories of molecules that are involved in the cell's lifecycle processes.,,, ,,the depth and breadth of cellular complexity has turned out to be nearly unbelievable, and difficult to manage, even given Moore's Law. The M. genitalium model required 28 subsystems to be individually modeled and integrated, and many critics of the work have been complaining on Twitter that's only a fraction of what will eventually be required to consider the simulation realistic.,,, http://www.theatlantic.com/technology/archive/2012/07/to-model-the-simplest-microbe-in-the-world-you-need-128-computers/260198/
"To grasp the reality of life as it has been revealed by molecular biology, we must first magnify a cell a thousand million times until it is 20 kilometers in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would see then would be an object of unparalleled complexity,...we would find ourselves in a world of supreme technology and bewildering complexity." Michael Denton PhD., Evolution: A Theory In Crisis, pg.328
Moreover, that city of 'bewildering complexity' can replicate itself seemingly effortlessly within 20 to 30 minutes.
Scant search for the Maker Excerpt: But where is the experimental evidence? None exists in the literature claiming that one species has been shown to evolve into another. Bacteria, the simplest form of independent life, are ideal for this kind of study, with generation times of 20 to 30 minutes, and populations achieved after 18 hours. But throughout 150 years of the science of bacteriology, there is no evidence that one species of bacteria has changed into another, in spite of the fact that populations have been exposed to potent chemical and physical mutagens and that, uniquely, bacteria possess extrachromosomal, transmissible plasmids. Since there is no evidence for species changes between the simplest forms of unicellular life, it is not surprising that there is no evidence for evolution from prokaryotic to eukaryotic cells, let alone throughout the whole array of higher multicellular organisms. - Alan H. Linton - emeritus professor of bacteriology, University of Bristol. http://www.timeshighereducation.co.uk/story.asp?storycode=159282
My question to materialistic atheists who look for life to accidentally 'emerge' from lifeless chemicals, especially with such astonishing evidence from quantum mechanics for a Theistic universe, is this:
Luke 24:5 ,,,“Why do you look for the living among the dead?
Here is a clue as to where life may truly be found:
The absorbed energy in the Shroud body image formation appears as contributed by discrete values - Giovanni Fazio, Giuseppe Mandaglio - 2008 Excerpt: This result means that the optical density distribution,, can not be attributed at the absorbed energy described in the framework of the classical physics model. It is, in fact, necessary to hypothesize a absorption by discrete values of the energy where the 'quantum' is equal to the one necessary to yellow one fibril. http://cab.unime.it/journals/index.php/AAPP/article/view/C1A0802004/271

Scientists say Turin Shroud is supernatural – December 2011 Excerpt: “The results show that a short and intense burst of UV directional radiation can colour a linen cloth so as to reproduce many of the peculiar characteristics of the body image on the Shroud of Turin,” they said. And in case there was any doubt about the preternatural degree of energy needed to make such distinct marks, the Enea report spells it out: “This degree of power cannot be reproduced by any normal UV source built to date.” http://www.independent.co.uk/news/science/scientists-say-turin-shroud-is-supernatural-6279512.html
Music:
High School Musical 2 - You are the music in me http://www.youtube.com/watch?v=IAXaQrh7m1o "In Christ Alone" / scenes from "The Passion of the Christ" http://www.youtube.com/watch?v=UDPKdylIxVM
bornagain77
September 21, 2013 at 02:08 PM PDT
In fact, in the following video, the theoretical feasibility of reducing an entire human to quantum information and teleporting him/her to another location in the universe is discussed:
New Breakthrough in (Quantum) Teleportation - video http://www.youtube.com/watch?v=6xqZI31udJg Quote from video: "There are 10^28 atoms in the human body.,, The amount of data contained in the whole human,, is 3.02 x 10^32 gigabytes of information. Using a high bandwidth transfer that data would take about 4.5 x 10^18 years to teleport 1 time. That is 350,000 times the age of the universe."

for comparison sake: "The theoretical (information) density of DNA is you could store the total world information, which is 1.8 zetabytes, at least in 2011, in about 4 grams of DNA." (a zettabyte is one billion trillion or 10^21 bytes of digital data) Sriram Kosuri PhD. - Wyss Institute
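(Taking the two quoted figures at face value, one can back out what transfer rate the video must be assuming; this is only a rough arithmetic sketch, not a claim about how the video's estimate was actually made:)

```python
# Back-of-envelope check of the quoted teleportation figures. Both numbers
# are taken at face value from the video quote above; the implied transfer
# rate is simply what those two figures together assume.
GB_TO_BITS = 8e9
SECONDS_PER_YEAR = 3.156e7

data_bits = 3.02e32 * GB_TO_BITS               # "3.02 x 10^32 gigabytes"
transfer_seconds = 4.5e18 * SECONDS_PER_YEAR   # "about 4.5 x 10^18 years"

implied_rate = data_bits / transfer_seconds
print(f"Implied bandwidth: {implied_rate:.2e} bits/s "
      f"(~{implied_rate / 8e15:.1f} petabytes per second)")
```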
In the preceding video they speak of having to entangle all the material particles of the human body on a one-by-one basis in order to successfully teleport a human. What they failed to realize in the video is that the human body is already 'teleportation ready', in that all the material particles of the human body are already 'quantumly entangled':
Does DNA Have Telepathic Properties?-A Galaxy Insight – 2009 Excerpt: DNA has been found to have a bizarre ability to put itself together, even at a distance, when according to known science it shouldn’t be able to.,,, The recognition of similar sequences in DNA’s chemical subunits, occurs in a way unrecognized by science. There is no known reason why the DNA is able to combine the way it does, and from a current theoretical standpoint this feat should be chemically impossible. per Daily Galaxy

Quantum Information/Entanglement In DNA – Elisabeth Rieper – short video http://www.metacafe.com/watch/5936605/

Physicists Discover Quantum Law of Protein Folding – February 22, 2011 Quantum mechanics finally explains why protein folding depends on temperature in such a strange way. Excerpt: First, a little background on protein folding. Proteins are long chains of amino acids that become biologically active only when they fold into specific, highly complex shapes. The puzzle is how proteins do this so quickly when they have so many possible configurations to choose from. To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^100 different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.,,, Their astonishing result is that this quantum transition model fits the folding curves of 15 different proteins and even explains the difference in folding and unfolding rates of the same proteins. That’s a significant breakthrough. Luo and Lo’s equations amount to the first universal laws of protein folding. That’s the equivalent in biology to something like the thermodynamic laws in physics. http://www.technologyreview.com/view/423087/physicists-discover-quantum-law-of-protein-folding/
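(The Levinthal-style estimate in the protein-folding excerpt is easy to check; a minimal arithmetic sketch using the quoted figures and an assumed universe age of about 13.8 billion years:)

```python
# Quick check of the estimate quoted above.
# Assumed values: 10^100 configurations (quoted), 10^11 trials per second
# (the quoted "100 billion a second"), universe age ~13.8 billion years.
configurations = 10**100
trials_per_second = 1e11
universe_age_s = 13.8e9 * 3.156e7   # roughly 4.4e17 seconds

search_time_s = configurations / trials_per_second
print(f"Exhaustive search time: {search_time_s:.1e} s")
print(f"That is ~{search_time_s / universe_age_s:.1e} times the age of the universe")
```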
Also of note, quantum entanglement requires a non-local, beyond space and time, cause in order to explain its effect:
Looking Beyond Space and Time to Cope With Quantum Theory – (Oct. 28, 2012) Excerpt: The remaining option is to accept that (quantum) influences must be infinitely fast,,, “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” says Nicolas Gisin, Professor at the University of Geneva, Switzerland,,, Per Science Daily
The implications of finding 'non-local', beyond space and time, quantum information/entanglement in our body on a massive scale are fairly self evident:
Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff – video (notes in description) http://vimeo.com/29895068 Quantum Entangled Consciousness (Permanence/Conservation of Quantum Information) – Life After Death – Stuart Hameroff – video https://vimeo.com/39982578
One more line of evidence that God was directly involved in the formation of the first life on earth is photosynthesis:
The Sudden Appearance Of Photosynthetic Life On Earth - video http://www.metacafe.com/watch/4262918

Nonlocality of Photosynthesis - Antoine Suarez - video - 2012 http://www.youtube.com/watch?v=dhMrrmlTXl4&feature=player_detailpage#t=1268s

Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Gregory S. Engel, Nature (12 April 2007) Photosynthetic complexes are exquisitely tuned to capture solar light efficiently, and then transmit the excitation energy to reaction centres, where long term energy storage is initiated.,,,, This wavelike characteristic of the energy transfer within the photosynthetic complex can explain its extreme efficiency, in that it allows the complexes to sample vast areas of phase space to find the most efficient path. ---- Conclusion? Obviously Photosynthesis is a brilliant piece of design by "Someone" who even knows how quantum mechanics works. http://www.ncbi.nlm.nih.gov/pubmed/17429397

Quantum Mechanics at Work in Photosynthesis: Algae Familiar With These Processes for Nearly Two Billion Years - Feb. 2010 Excerpt: "We were astonished to find clear evidence of long-lived quantum mechanical states involved in moving the energy. Our result suggests that the energy of absorbed light resides in two places at once -- a quantum superposition state, or coherence -- and such a state lies at the heart of quantum mechanical theory.",,, "It suggests that algae knew about quantum mechanics,, billion(s) of years before humans," says Scholes. http://www.sciencedaily.com/releases/2010/02/100203131356.htm
bornagain77
September 21, 2013 at 02:05 PM PDT
Moreover, as if that was not enough to refute any materialistic/atheistic origin of life scenario, it is now found that not only have material processes never been observed to generate functional information,,
The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 To focus the scientific community’s attention on its own tendencies toward overzealous metaphysical imagination bordering on “wish-fulfillment,” we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf

Can We Falsify Any Of The Following Null Hypothesis (For Information Generation)
1) Mathematical Logic
2) Algorithmic Optimization
3) Cybernetic Programming
4) Computational Halting
5) Integrated Circuits
6) Organization (e.g. homeostatic optimization far from equilibrium)
7) Material Symbol Systems (e.g. genetics)
8) Any Goal Oriented bona fide system
9) Language
10) Formal function of any kind
11) Utilitarian work
http://mdpi.com/1422-0067/10/1/247/ag
But it is now also found that material particles themselves reduce to functional 'quantum' information:
Quantum Entanglement and Information Quantum entanglement is a physical resource, like energy, associated with the peculiar nonclassical correlations that are possible between separated quantum systems. Entanglement can be measured, transformed, and purified. A pair of quantum systems in an entangled state can be used as a quantum information channel to perform computational and cryptographic tasks that are impossible for classical systems. The general study of the information-processing capabilities of quantum systems is the subject of quantum information theory. http://plato.stanford.edu/entries/qt-entangle/

Quantum Entanglement and Teleportation - Anton Zeilinger - video http://www.metacafe.com/watch/5705317/

Ions have been teleported successfully for the first time by two independent research groups Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was destroyed),,, http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp

Atom takes a quantum leap – 2009 Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,, “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. - Per Free Republic

Physicists set new record for quantum teleportation with matter qubits - Apr 16, 2013 Excerpt: "The greatest significance of our work is the dramatic increase in efficiency compared to previous realizations of matter-matter teleportation," Nölleke said. "Besides, it is the first demonstration of matter-matter teleportation between truly independent systems and constitutes the current record in distance of 21 m. The previous record was 1 m." Per Physorg

How Teleportation Will Work - Excerpt: In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. — As predicted, the original photon no longer existed once the replica was made. http://science.howstuffworks.com/science-vs-myth/everyday-myths/teleportation1.htm

Quantum Teleportation – IBM Research Page Excerpt: “it would destroy the original (photon) in the process,,” http://researcher.ibm.com/view_project.php?id=2862

Unconditional Quantum Teleportation – abstract Excerpt: This is the first realization of unconditional quantum teleportation where every state entering the device is actually teleported,, http://www.sciencemag.org/content/282/5389/706.abstract

Quantum Computing - Stanford Encyclopedia Excerpt: Theoretically, a single qubit can store an infinite amount of information, yet when measured (and thus collapsing the Quantum Wave state) it yields only the classical result (0 or 1),,, http://plato.stanford.edu/entries/qt-quantcomp/#2.1

Explaining Information Transfer in Quantum Teleportation: Armond Duwell †‡ University of Pittsburgh Excerpt: In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1) --- Concept 2. is used by Bennett, et al. Recall that they infer that since an infinite amount of information is required to specify a (photon) qubit, an infinite amount of information must be transferred to teleport. - Per Duwell
bornagain77
September 21, 2013 at 02:01 PM PDT
As Stephen Meyer points out here,,,
The DNA Enigma - Where Did The Information Come From? - Stephen C. Meyer - video http://www.metacafe.com/watch/4125886
and also points out here,,,
Dr. Stephen Meyer: Chemistry/RNA World/crystal formation can't explain genetic information - video http://www.youtube.com/watch?v=yLeWh8Df3k8
,, the primary problem for Origin of Life research (and the 'random' evolution of organic life in general) always boils down to an 'information problem'. Yet the only source that we know of that is capable of generating functional information is mind. Thus to address the 'information problem' properly it is first necessary to see if mind might have preceded the formation of organic life on Earth. One might imagine, as the late Francis Crick did,,
At the 37 min. 15 sec. mark of the following video, Dr. Walter Bradley talks a little bit about the OOL problem and about the disbelieving reactions of Watson and Crick, the co-discoverers of the DNA helix, to the DNA-RNA-protein 'translation complexity' they found themselves dealing with: Evidence for an Engineered Universe - Walter Bradley - video http://www.youtube.com/watch?v=nLd_cPfysrE
and as Richard Dawkins also does in the movie EXPELLED,,
Ben Stein vs. Richard Dawkins Interview http://www.youtube.com/watch?v=GlZtEjtlir
,,,one might imagine, as they did, that some type of Extra-Terrestrial aliens (ETs) created the first life on Earth, and thus try to circumvent the 'information problem'. It would hardly be observational science, but one could imagine that scenario. On the other hand, if one demanded a little more rigor of one's science, then one could look to the cutting edge of quantum mechanics and find that its breakthroughs have given us clear, unambiguous evidence that mind/consciousness precedes not only life on earth but all of material reality in the universe altogether.
A team of physicists in Vienna has devised experiments that may answer one of the enduring riddles of science: Do we create the world just by looking at it? - 2008 Excerpt: In mid-2007 Fedrizzi found that the new realism model was violated by 80 orders of magnitude; the group was even more assured that quantum mechanics was correct. http://seedmagazine.com/content/article/the_reality_tests/P3/

1. Consciousness either preceded all of material reality or is an 'epi-phenomenon' of material reality.
2. If consciousness is an 'epi-phenomenon' of material reality then consciousness will be found to have no special position within material reality. Whereas conversely, if consciousness precedes material reality then consciousness will be found to have a special position within material reality.
3. Consciousness is found to have a special, even central, position within material reality.
4. Therefore, consciousness is found to precede material reality.

Four intersecting lines of experimental evidence from quantum mechanics that show that consciousness precedes material reality (Wigner’s Quantum Symmetries, Wheeler’s Delayed Choice, Leggett’s Inequalities, Quantum Zeno effect): https://docs.google.com/document/d/1G_Fi50ljF5w_XyJHfmSIZsOcPFhgoAZ3PRc_ktY8cFo/edit

Colossians 1:17 And he is before all things, and by him all things consist.

Quantum Enigma: Physics Encounters Consciousness - Richard Conn Henry - Professor of Physics - Johns Hopkins University Excerpt: It is more than 80 years since the discovery of quantum mechanics gave us the most fundamental insight ever into our nature: the overturning of the Copernican Revolution, and the restoration of us human beings to centrality in the Universe. And yet, have you ever before read a sentence having meaning similar to that of my preceding sentence? Likely you have not, and the reason you have not is, in my opinion, that physicists are in a state of denial… https://uncommondescent.com/intelligent-design/the-quantum-enigma-of-consciousness-and-the-identity-of-the-designer/

Lecture 11: Decoherence and Hidden Variables - Scott Aaronson Excerpt: "Look, we all have fun ridiculing the creationists who think the world sprang into existence on October 23, 4004 BC at 9AM (presumably Babylonian time), with the fossils already in the ground, light from distant stars heading toward us, etc. But if we accept the usual picture of quantum mechanics, then in a certain sense the situation is far worse: the world (as you experience it) might as well not have existed 10^-43 seconds ago!" http://www.scottaaronson.com/democritus/lec11.html
bornagain77
September 21, 2013 at 01:58 PM PDT
nightlight at #1
When one summer in grad school, with plenty of free time on my hands, I got my first PC, one of the first programs I wrote (to help me learn C) was an explorer tool for 4 state automata...
This reminds me of my first programming exercise: the knight covering all 64 squares of the chessboard in 64 consecutive moves. But that was in Fortran. And it was as much an exercise in Fortran programming skills as in identifying a success strategy (a sketch of one such strategy appears after this comment).
Such simple and compact rules had not even remote resemblance to the “minimum” irreducible complexity described in that article. One could conceive of some autocatalytic set of real molecules playing out the same kind of process of self-replicating blobs with their individual ‘genetic’ signatures in the real world
And maybe that is because we have two different objectives: you focused on computer modeling of abstract self-replication (unconstrained by the physical reality of a biological cell and the physical world), while the blog proposes a minimum model for real-life biological cells.
InVivoVeritas
September 21, 2013 at 12:33 PM PDT
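As a side note to the knight's tour recollection above, here is a minimal sketch of one possible "success strategy" for that classic exercise: Warnsdorff's heuristic (always continue to the square with the fewest onward moves), written in Python rather than the original Fortran. It is offered only as an illustration, not as a reconstruction of InVivoVeritas's program.

```python
# Knight's tour via Warnsdorff's heuristic on a standard 8x8 board.
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def onward_moves(board, r, c):
    """Unvisited squares reachable by a knight from (r, c)."""
    return [(r + dr, c + dc) for dr, dc in MOVES
            if 0 <= r + dr < 8 and 0 <= c + dc < 8 and board[r + dr][c + dc] == 0]

def knights_tour(start=(0, 0)):
    """Attempt a tour with Warnsdorff's rule; it usually succeeds on 8x8."""
    board = [[0] * 8 for _ in range(8)]
    r, c = start
    for step in range(1, 65):
        board[r][c] = step
        candidates = onward_moves(board, r, c)
        if not candidates:
            break  # either finished or stuck in a dead end
        # Warnsdorff's rule: move to the square with the fewest onward moves.
        r, c = min(candidates, key=lambda sq: len(onward_moves(board, *sq)))
    return board

tour = knights_tour()
print("All 64 squares visited:", all(v > 0 for row in tour for v in row))
```

On an 8x8 board this greedy rule typically completes the tour from any starting square, which is essentially the kind of "success strategy" the exercise asks for.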
Johnnyb, nice surprise to see that you found my Minimum Cell Model blog post worth mentioning on UD. Thanks.
InVivoVeritas
September 21, 2013 at 12:12 PM PDT
Cellular automata serve as a mathematical abstraction which models real-world reactions such as autocatalytic sets. Their rules capture one aspect of the behavior of the full physical system -- which molecules transform into which others, and in which combinations of neighborhoods. This is analogous to giving groups of people or players hats of different colors which they can exchange by some rules, then only tracking the flows of hat colors. As long as the assumed abstract rules of hat color exchange are followed (for whatever complex reasons of physics, chemistry, biology, financial incentive to players, ...), the resulting dynamics will follow.

The laws of physics, chemistry, biology, ... that underpin the hat color transformation rules are implicit in such an abstraction. The abstraction only captures patterns of hat colors as they unfold, and as long as the local transformation rules are followed, the conclusions drawn from such a model about the color patterns, including the emergence of self-replicator patterns, are valid. Hence, cellular automata do model how such a self-replicating system can arise from a real-world autocatalytic set of molecules.

The main difference between that style of self-replication model and the one in the blog is that the latter imposes an additional, gratuitous constraint on the system -- it requires that the functional and spatial blocks of the actual system physically mirror (simulate) the conceptual categories that the author chose as his conceptualization tool. I.e., he is imposing his mental picture of the process as a constraint on the real system (on how it needs to spatially break down its functions). That is an entirely gratuitous constraint, as the counterexamples (mine or others in the literature) of self-replicators built out of simple cellular automata, where the laws of physics and chemistry are implicit (in the transformation rules of the autocatalytic set), demonstrate.
nightlight
September 21, 2013 at 11:39 AM PDT
Also, regarding autocatalytic sets: first, just to point out, being a "set" it would be irreducibly complex. IC means *multiple* interacting pieces, so the notion of a *set* implies IC. However, most are not truly replicative. Nonetheless, if you found one that was, it would still need to contain these components. If it didn't have an enclosure, how would it be replicating? A larger reaction, perhaps, but replicating? Not really. If it didn't ingest molecules, how would the reaction continue? If it didn't accept/block through a gateway, how would it keep reaction-blocking things out?
johnnyb
September 21, 2013 at 11:32 AM PDT
Nightlight - Thanks for joining in! However, the comparison with cellular automata is not very relevant, as the purpose here was physical self-replication. There have actually been zero physical self-replicators built. NASA did some preliminary work, and if I remember correctly they estimated a minimum size of several hundred tons.
johnnyb
September 21, 2013 at 11:07 AM PDT
There is no proof of "minimality" there. It is simply one way to do it and describe it that the author could conceive. Simpler self-replicating systems can be built from cellular automata.

One summer in grad school, with plenty of free time on my hands, I got my first PC, and one of the first programs I wrote (to help me learn C) was an explorer tool for 4-state automata. The program used a 320x240 2-D array of 4-state cells (to match the screen graphics) and would evolve the system from random or from saved initial states, using whatever rules of interaction were prescribed. After a couple of weeks of playing with different automata rules I found an entire family of 4-state rules that result in self-reproducing amoeba-like blobs containing 100-300 cells, moving around and breaking off a daughter cell in some discrete pattern, e.g. daughter D1 after S1=73 steps, D2 after 12 steps, .... The sequences S1, S2, ..., Sp had large periods of several thousand to tens of thousands of steps. The daughters also had their own replication sequences, each sequence being a "genetic" signature for the blob. In case of collisions some or all colliding blobs would break up and "die". Hence, the program would run each of the daughter blobs in its own space (otherwise the finite matrix with wraparound boundary conditions would cause parent and daughters to collide).

The class of rules with this property had a general pattern of autocatalytic reactions, i.e. there was a 'dead' state 0, then 3 live states (or species) 1, 2 and 3, such that 1 eats 2, 2 eats 3 and 3 eats 1. Any of the states could also die, depending on the number of its prey and predator cells surrounding it. The functions determining the die-or-live outcome for each cell were similar to Conway's Life, where life (feeding) would occur at some optimum neighborhood, with over-crowding and under-crowding causing death. The whole specification for the rules would fit into bitmaps of a few dozen bytes. (An illustrative sketch of a rule in this spirit appears below this comment.)

Such simple and compact rules had not even a remote resemblance to the "minimum" irreducible complexity described in that article. One could conceive of some autocatalytic set of real molecules playing out the same kind of process of self-replicating blobs with their individual 'genetic' signatures in the real world (i.e. no PC and no C program need to be included in the "irreducible complexity" of such a system).
nightlight
September 21, 2013 at 10:42 AM PDT
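Since nightlight's comment describes the rule family only in words, the following is a hedged, illustrative Python sketch of a 4-state cellular automaton in the same general spirit: a wraparound grid with a dead state 0 and three species in cyclic predation (1 preys on 2, 2 on 3, 3 on 1), plus Life-like crowding rules. The grid size and the birth/survival thresholds are assumptions made here for illustration, not nightlight's actual rules.

```python
import random

W, H = 80, 60                     # smaller than the original 320x240, for speed
PREDATOR = {1: 3, 2: 1, 3: 2}     # species -> the species that preys on it

def neighbors(grid, x, y):
    """Moore neighborhood with wraparound (toroidal) boundaries."""
    return [grid[(y + dy) % H][(x + dx) % W]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step(grid):
    new = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            s = grid[y][x]
            nbrs = neighbors(grid, x, y)
            if s == 0:
                # Dead cell: colonized by a species with exactly 3 neighbors
                # (a Life-like birth rule, chosen purely for illustration).
                for species in (1, 2, 3):
                    if nbrs.count(species) == 3:
                        new[y][x] = species
                        break
            else:
                pred = PREDATOR[s]
                if nbrs.count(pred) >= 3:
                    new[y][x] = pred   # "eaten": converted to the predator species
                elif 2 <= nbrs.count(s) <= 3:
                    new[y][x] = s      # survives at moderate density
                # otherwise dies of under- or over-crowding (stays 0)
    return new

grid = [[random.randint(0, 3) for _ in range(W)] for _ in range(H)]
for _ in range(100):
    grid = step(grid)
```

Whether such a rule actually produces the self-reproducing blobs nightlight describes depends entirely on the thresholds, which the comment does not specify; the sketch only shows how compact a 4-state cyclic-predation rule on a wraparound grid can be.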
