
UB’s notes on autocatalytic reaction sets vs languages and symbol systems


UB writes:

UB, only way thread, 164: >>My apologies to Origenes, he had asked for my comment, but I was away . . . . I am no expert of course, but thank you for asking me to comment. Frankly you didn’t need my opinion anyway. When you ask “What is the error in supposing something?” you likely already know there is no there there. And someone seriously asking you (like some odd prosecution of your logic) to enumerate what exactly is the biological error or the chemical error in the proposition of something that has never before been seen or recorded in either biology or chemistry — well whatever.

Deacon begins by asking the question, what is necessary and sufficient to treat a molecule as a sign. He is 50 to 150 years late on that question (depending on how one wants to look at it). In any case (setting aside for the moment his reliance on “uncharacterized” chemistry) he doesn’t get to where he is going, and he tells you as much in his Conclusions. He says his exercise “falls well short” of the origin of the code, but he reckons that his exercise offers something more basic. Regardless of what one might feel about proposing unknown chemistry as a “proof of principle”, his paper doesn’t offer the pathway implied by its title (a title that Deacon chose in honor of the work of Howard Pattee, “How does a molecule become a message?”, Pattee, 1969). From my perspective, even with the admitted reliance on unknown chemistry, Deacon still doesn’t get from dynamics to descriptions and doesn’t shed any particular light on the problem.

I might suggest you look at Howard Pattee’s own response to Deacon’s paper. I do not know where it is available or if it is behind a paywall somewhere, but I have a copy here in front of me. It has a little bit of a cool tone to it. He begins the paper with (first sentence) “Deacon speculates on the origin of interpretation of signs using autocatalytic origin of life models and Peircean terminology” and in the very next sentence takes a rather direct contrary position.

He begins by offering some background:

The focus of my paper “How does a molecule become a message?” (Pattee 1969) that Deacon (2021) has honored, was a search for the simplest language in which messages were both heritable and open-ended. I was trying to satisfy Von Neumann’s condition for evolvable self-replication. He argued that it is necessary to have a separate non-dynamic description that (1) resides in memory, (2) can be copied, and (3) can instruct a dynamic universal constructor. (I replaced “description” with “message” simply for alliteration.) I concluded (Pattee 1969, 8): “A molecule becomes a message only in the context of a larger system of physical constraints which I have called a ‘language’ in analogy to our normal usage of the concept of message.” A language consists of a small, fixed set of symbols (an alphabet) and rules (a grammar) in which the symbols can be catenated indefinitely to produce an unlimited number of meaningful or functional sequences (messages).

… and then goes on to offer some ancillary corrections before addressing Deacon’s model in full (i.e. “Before discussing Deacon’s main thesis, I need to respond to his misleading history of molecular biology”). He then discusses the (three-dimensional) structuralist and the (one-dimensional) informationalist camps in the OoL field, and then under the heading “Deacon’s Model” he concludes:

There are three well-known problems with autocatalytic cycle models: (1) limited information capacity (What are the symbol vehicles?), (2) instability of multiple dynamic cycles (error catastrophe), and (3) no known transition to the present nucleic acid-to-protein genetic code. The only known way to mitigate problems (1) and (2) is to solve (3), that is, to transition from dynamic catalysts to a symbol-code-construction system. Deacon recognizes these problems and his solution to (3) is to “offload” autogen catalyst information to RNA-like template molecules:

“Offloading (or transfer of constraints) is afforded because complementary structural similarities between catalysts and regions of the template molecule facilitate catalyst binding in a particular order that by virtue of their positional correlations biases their interaction probabilities.”

Deacon’s offloading is the inverse of the Central Dogma’s information flow from inactive one-dimensional sequences to three-dimensional active catalysts. Deacon’s offloading information flow is from three-dimensional active catalysts to one-dimensional inactive sequences. His offloading speculations require many vague chemical steps with unknown probabilities of abiotic occurrence. Deacon claims that these are “chemically realistic” steps, but he gives no example or evidence of this inverse process. Adding to the chemical vagueness of offloading, Deacon applies the Peircean vocabulary, icon, index, and symbol, and the immediate, dynamic and final interpretants. This Peircean terminology does not help explain or support a chemistry of offloading, nor does it make clearer how molecules become signs.

It appears to me that speculation of unknown chemistry, mixed with language like “proofs”, is a recognizable problem among experts and laypeople alike.

Note: Just so no one is mistaken, Howard Pattee is an unguided origin of life proponent, but he strongly believes that the speculation of answers must have a foot in chemical and physical reality. In other words, he believes that genetic symbols have their grounding directly in the folded proteins they specify, but also acknowledges that the triadic sign-relationship (symbol, constraint, referent) is required for the specification of those proteins from a transcribable memory. He doesn’t pretend to have an answer to the problem of the transition from dynamics to descriptions, and he doesn’t write papers like Terrence Deacon.
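Pattee’s definition of a language quoted above, a small fixed alphabet plus rules under which symbols can be catenated indefinitely, can be made concrete with a toy sketch. This is my own illustration, not Pattee’s: three symbols and a single grammar rule already yield an unbounded set of well-formed “messages”.

```python
from itertools import product

# Toy "language": alphabet {A, B, C}; grammar rule: a catenation of symbols
# is well-formed iff it begins with 'A'. A finite alphabet plus a fixed rule
# yields an unlimited number of distinct messages as length grows.

ALPHABET = ("A", "B", "C")

def is_well_formed(message: str) -> bool:
    return message.startswith("A") and all(s in ALPHABET for s in message)

def messages_of_length(n: int) -> list[str]:
    """Enumerate every well-formed message of exactly length n."""
    return ["".join(p) for p in product(ALPHABET, repeat=n) if p[:1] == ("A",)]

# Counts grow without bound: 3**(n-1) well-formed messages of length n.
print([len(messages_of_length(n)) for n in (1, 2, 3, 4)])  # [1, 3, 9, 27]
```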

All of this is highly relevant and worth being on headlined record.

We may note from Wikipedia’s confessions:

In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings.

Before giving a mathematically precise definition, here is a brief example. The mapping

C = { a ↦ 0 , b ↦ 01 , c ↦ 011 }

is a code, whose source alphabet is the set { a , b , c } and whose target alphabet is the set { 0 , 1 }. Using the extension of the code, the encoded string 0011001 can be grouped into codewords as 0 011 0 01, and these in turn can be decoded to the sequence of source symbols acab.

Using terms from formal language theory, the precise mathematical definition of this concept is as follows:

Let S and T be two finite sets, called the source and target alphabets, respectively.

A code C : S → T* is a total function mapping each symbol from S to a sequence of symbols over T.

The extension C′ of C is a homomorphism of S* into T*, which naturally maps each sequence of source symbols to a sequence of target symbols.
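The example and definition above can be sketched in executable form; this is a minimal illustration of my own, not from the cited article. Note that this particular code is not prefix-free (0 is a prefix of 01), but it is still uniquely decodable, because every codeword contains exactly one 0, at its start, so the encoded string can simply be split before each 0.

```python
# The code C = {a -> 0, b -> 01, c -> 011} and its extension C', which
# encodes by concatenation. Decoding exploits the fact that each codeword
# starts with its only '0', so codeword boundaries fall before every '0'.

C = {"a": "0", "b": "01", "c": "011"}
DECODE = {codeword: symbol for symbol, codeword in C.items()}

def encode(message: str) -> str:
    """Extension C': map a sequence of source symbols to target symbols."""
    return "".join(C[s] for s in message)

def decode(encoded: str) -> str:
    """Split before each '0', then look the codewords back up."""
    codewords = ["0" + ones for ones in encoded.split("0")[1:]]
    return "".join(DECODE[w] for w in codewords)

print(encode("acab"))     # 0011001
print(decode("0011001"))  # acab
```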

In short, as Wikipedia noted, a code is “a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium.”

The example of a storage medium that pops up with the link on “code medium”? DNA:

More from Lehninger:

And, from Crick, in his March 19, 1953 letter to his son, notice how in the first three sentences on p. 5 of this $6 million letter he shifts from “is like a code” to “is a code”:

Crick’s letter

Languages, of course, are symbolic systems that express meaningful, functional information through the organisation of representative elements. These can be sounds, glyphs, gestures and more.

Thanks to UB, we have food for thought. END

PS, regrettably, as JVL injected a personality, I think I must also headline UB’s response to the accusation of closed mindedness:

JVL: “I’m not sure Upright BiPed will grace us with his opinion. He tends to avoid having to admit he might be wrong.”

JVL, I gave you researchers’ names, the dates of experiments, and the experimental results. You were forced to agree with all of it. If you’d now like to assert that I’ve made an error in that history, by all means, point it out. I don’t believe you can, and I don’t believe you will. It has to be remembered here that your core position is that the design inference at the origin of life — clearly recorded in the history of science and experiment — is summarily invalidated because the proponents of an unguided OoL simply don’t believe it. Your position (a well-known logical fallacy) deliberately separates conclusions from evidence and destroys science as a methodological approach to knowledge.

I trust, we can now move on to address substance on the merits, instead.

11 Replies to “UB’s notes on autocatalytic reaction sets vs languages and symbol systems”

  1. 1
    kairosfocus says:

    UB’s notes on autocatalytic reaction sets vs languages and symbol systems

  2. 2
    PyrrhoManiac1 says:

    Very interesting! I hadn’t realized that Deacon’s article was the target article for that edition of Biosemiotics, with response by Pattee and many others.

    I’m intrigued by Pattee’s suggestion that the symbol-to-action conversion must precede interpretation. But now I wonder if Deacon and Pattee mean the same thing by “interpretation”.

    There were a few things I liked about Deacon’s approach, but the main one is that he takes an organicist, holistic stance towards the very concept of “information”: rather than think of information as an intrinsic quality of something, he thinks of it as a relational concept that already presupposes a teleological system embedded in an environment constituted by affordances for that system.

    I’m not clear on whether Pattee would agree with that and say that that’s necessary for symbol-to-action conversion, and just disagree with Deacon about whether interpretation is the right way to think about that process. He doesn’t engage with Deacon’s emphasis on teleology, which is what I find most intriguing about Deacon’s project.

    Since this was a short response to Deacon and not a full rehearsal of all his views, it also wasn’t clear how Pattee sees the relationship between symbol-action conversion (from genetic sequences to functional proteins) and action-symbol conversion (triggering of macromolecules on membranes that signal something of biological relevance to the organism).

    On Deacon’s view, it seems that these are coordinated: the autogen needs to both detect the presence of materials it needs for continued self-maintenance and store information inside itself about what functional units it needs to build in order to self-maintain.

  3. 3
    Origenes says:

    As always Upright Biped’s response is very much appreciated.
    Do correct me if I’m wrong, but in my understanding Deacon’s model is not about information at all. Deacon’s “information” is about what?

    Ori: Whatever Deacon means by “biological information” here, it has nothing to do with the sequence of his proposed protein-like molecule, which just sits in the capsid being completely ignored by its surroundings.

    UB seems to confirm this when he writes:

    … Deacon still doesn’t get from dynamics to descriptions and doesn’t shed any particular light on the problem.

    UB goes on to quote Pattee:

    There are three well-known problems with autocatalytic cycle models: (1) limited information capacity (What are the symbol vehicles?), (2) instability of multiple dynamic cycles (error catastrophe), and (3) no known transition to the present nucleic acid-to-protein genetic code. The only known way to mitigate problems (1) and (2) is to solve (3), that is, to transition from dynamic catalysts to a symbol-code-construction system.

    Well, of course, you need a “symbol-code-construction system” in order to have information at all.
    Something can only be called information if there exists a “symbol-code-construction system” into which it can be plugged. Put another way, it is only information if there is a language system where the symbols refer to something coherent. So, “44775$&KJ5e8FDB9HGVFYUJY” has no meaning, and contains no information, if there exists no language system in which it refers (in a coherent way) to things. The arrangement is irreducibly complex: all of the aspects of the symbol-code-construction system have to be present and in tune with each other. In isolation any of the parts are meaningless.

    PM1

    There were a few things I liked about Deacon’s approach, but the main one is that he takes an organicist, holistic stance towards the very concept of “information”: rather than think of information as an intrinsic quality of something, he thinks of it as a relational concept that already presupposes a teleological system embedded in an environment constituted by affordances for that system.

    Can you please elaborate? What does “an organicist, holistic stance towards the very concept of information” mean?

  4. 4
    kairosfocus says:

    Origenes, I suspect, in the end poof magic, information from nothing, emergence. KF

    PS, Information may, perhaps, best be understood as:

    INFORMATION: that contingent aspect of a message, event, state of affairs etc which reduces uncertainty and which — through providing a meaningful digital or analogue [= discrete vs continuous state] variation of a medium of storage or transmission — may guide interpretation, steps of a process or intelligent response.

    For example, I have just provided information regarding information as a concept and functional phenomenon which may guide onward discussion. That provision is contingent as to whether it was or was not provided, but having been provided as text in English using ASCII code typed and transmitted across the Internet, it now exists. That very presence is informative. Further, its substance reflects and invites analysis and so may help guide onward response. It certainly is not an algorithmic package that alters a programmed entity without ability to think for itself. Save, the relatively trivial aspects of how it was typed, encoded in ASCII, transmitted and stored once I clicked send. Which I anticipate as I type but in principle could have elected not to. But per my decision, here it is, a definition for discussion.

    Obviously, while blind chance and mechanical necessity in principle can explore all of a space of possible configurations — e.g. 500 – 1,000 coins with H/T [or the equivalent of a paramagnetic substance, I here echo L K Nash and P Mandl as they begin statistical thermodynamics] — resource and time constraints are such that for a sol system of 10^57 atoms or a cosmos of 10^80, with 10^17 s and plausible atomic event rates at perhaps 10^-14s, apparent meaningful information beyond such thresholds becomes maximally implausible as being due to such blind chance and/or mechanical necessity. For instance, deer tracks and droppings send one message, grizzly bear tracks quite another.
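The arithmetic behind that threshold claim can be checked directly. A back-of-envelope sketch of my own, using the rough orders of magnitude quoted in the comment (these are not precise physical constants):

```python
# Compare the number of 500-bit configurations to a generous upper bound
# on atomic-scale events in the solar system, using the figures above.

configs = 2 ** 500                  # distinct H/T states of 500 coins
atoms = 10 ** 57                    # atoms in the solar system (rough)
seconds = 10 ** 17                  # time available (rough)
events_per_atom_per_sec = 10 ** 14  # one event per ~10^-14 s

max_events = atoms * seconds * events_per_atom_per_sec  # 10^88
fraction = max_events / configs     # share of the space searchable

print(f"configurations ~ {configs:.2e}")    # ~3.27e+150
print(f"max events     = {max_events:.2e}")
print(f"fraction       ~ {fraction:.0e}")
```

Even granting every atom an event every 10^-14 s for the whole timeline, the accessible events fall short of the 500-bit configuration space by over sixty orders of magnitude.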

    Where incremental hill climbing presupposes arrival at a local shoreline of function, and emergence, too often, is little more than evasive poof magic.

    PPS, AmHD:

    in·for·ma·tion (ĭn′fər-mā′shən)
    n.
    1. Knowledge or facts learned, especially about a certain subject or event. See Synonyms at knowledge.
    2. The act of informing or the condition of being informed; communication of knowledge: Safety instructions are provided for the information of our passengers.
    3. Computers Processed, stored, or transmitted data.
    4. A numerical measure of the uncertainty of an experimental outcome.
    5. Law A formal accusation of a crime made by a public officer rather than by grand jury indictment in instances in which the offense, if a federal crime, is not a felony or in which the offense, if a state crime, is allowed prosecution in that manner rather than by indictment.
    in′for·ma′tion·al adj.
    American Heritage® Dictionary of the English Language, Fifth Edition. Copyright © 2016 by Houghton Mifflin Harcourt Publishing Company. Published by Houghton Mifflin Harcourt Publishing Company. All rights reserved.

  5. 5
    Upright BiPed says:

    .
    Hello Origenes,

    … in my understanding Deacon’s model is not about information at all. Deacon’s “information” is about what?

    Let me put two things together and try to give an answer to that question. There was a long time here on UD where you would hear critics trying to obfuscate and downplay information-related arguments, talking about there being “information in everything”. This is the view that something, like a carbon atom for instance, contains “information” in itself. A carbon atom contains 6 electrons and 6 neutrons and 6 protons. That’s information. In my reading, this is referred to as structural information. You can extend structural information further to say that a carbon atom will dynamically interact with whatever adjacent atoms in whatever varied ways, and all of that is information as well. You can use the mathematics of physics to represent those interactions and you will actually get useful and predictive results and equations from that exercise. Of course, the problem with this repeated objection by critics is simply that (even though all atoms must follow dynamical laws) structural information itself is irrelevant to protein synthesis. Specifying the order of amino acids in a protein is controlled by a description in encoded memory. That memory requires symbol tokens and a set of interpretive constraints, just as it was predicted and subsequently confirmed by experiment. I would often tell critics that they were making an anthropocentric projection in their thinking about “information”, but it was always to no avail.

    Even now, you can still hear the regular UD critics occasionally saying something like “a cut tree contains information”. They’ll point out that a dendrologist can look at those rings and tell us that the tree was 25 years old and had survived 3 major droughts in its lifetime. I would sometimes suggest to them that they were confusing “form” with “information”, and remind them that the “25 years” and “3 droughts” is not in the tree rings, it’s in the head of the observer. This information they find in tree rings begins when photons bouncing off the tree rings reach the eye of the observer, are converted to a sensory signal, and travel down the optic nerve to the visual cortex/brain, where they are then interpreted by the constraints we’ve assembled in memory and through our collective pursuit of knowledge. “25 years” is the result of a learned constraint, not of the “structural information” contained in the tree rings. Protein synthesis, of course, doesn’t need our interpretation at all; it has its own encoded descriptions of its own interpretive constraints. But as I said, it all falls on deaf ears.

    In any case, I tell you all this in order to suggest that Deacon speculates about an autocatalytic system where the three-dimensional structural information of the system is “offloaded” onto one-dimensional sequences involved in the network. Pattee hints at this when he wryly asks “What are the symbol vehicles?” I would add to that — where are the necessary descriptions of the interpretive constraints (what Sydney Brenner emphasizes as the “fundamental” requirement of the gene system), and how is it they are sequentially coherent with one another, and with the other descriptions supposedly “offloaded” in Deacon’s model? The system has to be self-referential in order to function. Are we to assume that the various chemical biases that Deacon suggests are inherent in the dynamic offloading process just happen to also result in simultaneous coordination between all the sequences offloaded? (That seems like a rather extreme form of fine tuning to me.) Deacon doesn’t provide that kind of detail because (even using unknown chemistry as the backbone of his model) he never actually gets to Pattee’s (Von Neumann’s, Turing’s) “symbol-code-construction” system. Instead, he acknowledges that his model is incapable (“falls well short”) of explaining the rise of a symbolic code. Unlike virtually everyone in the mainstream OoL game, Deacon can at least be given credit for trying to acknowledge the issues, but by his own admission, he never gets to “rate-independent control of a rate-dependent process”, which is the physical source of the gene system’s open-ended capacity to specify proteins.

    What Deacon’s paper demonstrates in collateral, is that an encoded symbol system has never been observed to rise from dynamics. It thus remains a universal correlate of intelligence.

    (I’m out)

  6. 6
    PyrrhoManiac1 says:

    @5

    Even now, you can still hear the regular UD critics occasionally saying something like “a cut tree contains information”. They’ll point out that a dendrologist can look at those rings and tell us that the tree was 25 years old and had survived 3 major droughts in its lifetime. I would sometimes suggest to them that they were confusing “form” with “information”, and remind them that the “25 years” and “3 droughts” is not in the tree rings, it’s in the head of the observer. This information they find in tree rings begins when photons bouncing off the tree rings reach the eye of the observer, are converted to a sensory signal, and travel down the optic nerve to the visual cortex/brain, where they are then interpreted by the constraints we’ve assembled in memory and through our collective pursuit of knowledge. “25 years” is the result of a learned constraint, not of the “structural information” contained in the tree rings.

    This seems basically right to me. But I wonder if quibbling about the details might be productive. (It usually isn’t, but sometimes it is.)

    There’s an objective, factual correlation between observable tree rings and humidity of past seasons, because of the causal relation between available moisture and plant growth. Karen Neander calls this “natural-factive information”: it’s observable correlations that are usually causally founded.

    At the most fundamental level, a brain is a biological structure that has the teleological function of detecting natural-factive information and using it to guide behavior. For example, a toad will see something worm-shaped moving horizontally and act as if it is prey, because in the toad’s usual environment, horizontally moving worm-shaped objects are typically sources of nutrients for toads.

    What we do (and toads don’t) is tag these sources of natural-factive information with lexical markers that allow us to carry out inferences (“if that ring is thicker than the rings around it, then there was probably more available moisture during that growth season”), transmit inferences to others, and make those inferences available to others for uptake — to be accepted or criticized.

    Toads (probably) don’t have concepts, but they have ways of exploiting information about their environments in order to locate food and mates and avoid predators. I think that this suggests a distinction between the natural-factive information that a toad’s brain uses to guide its behavior, and the semantic information that we use in describing and explaining the world.

    My point here is that the kind of information involved in protein synthesis seems a lot more like the natural-factive information that toads (and almost certainly less complex animals) use, and a lot less like the kind of semantic information that figures prominently in language.

  7. 7
    kairosfocus says:

    PM1, kindly, look at start, elongate with AA x, x1 . . . xn, stop and tell us why that is not a coded algorithm, which is actually the general consensus for cause; kindly, scroll up to the OP. KF

  8. 8
    PyrrhoManiac1 says:

    @7

    PM1, kindly, look at start, elongate with AA x, x1 . . . xn, stop and tell us why that is not a coded algorithm, which is actually the general consensus for cause; kindly, scroll up to the OP. KF

    An algorithm is a list of instructions that, if performed, would carry out a sequential process. But the transcription and translation that map nucleotide sequences to protein configurations is not a list of instructions that, if performed, would carry out a sequential process — it just is a sequential process.

    The gene for hemoglobin is not a list of instructions to the cell “build a hemoglobin molecule!” The nucleotide sequence that corresponds to the amino acid sequence that folds up into a hemoglobin molecule is not separable from the totality of cellular metabolism. There’s no clear distinction between what counts as “software” and what counts as “hardware”, which is what would be required in order for the genes-as-algorithms to be anything more than a misleading metaphor.

    More generally, I think that the construal of a nucleotide sequence as algorithm is an example of what Whitehead called “the fallacy of misplaced concreteness”: taking an abstract description as a kind of concrete reality.

    In this case, the construal focuses upon the fact that transcription is a process with a beginning and an ending — the last triplet being the codon for “stop”. But what begins the process? What activates DNA transcriptase? What cellular processes direct the transcriptase to that specific sequence at that specific moment? The transcriptases are themselves transcribed and translated from the sequences that code for them — the regulatory genes — and there are massively complex networks in which regulatory genes turn ‘on’ and ‘off’ other regulatory genes, as well as turning ‘on’ and ‘off’ the genes that code for proteins that do the rest of the work in the cells.

    In other words, one can construe a nucleotide sequence as an algorithm only by abstracting away from its role in the totality of cellular life, because only then can one identify the stop and start as independent of and prior to the whole rest of the living system.

  9. 9
    kairosfocus says:

    PM1,

    An algorithm is a list of instructions that, if performed, would carry out a sequential process. But the transcription and translation that map nucleotide sequences to protein configurations is not a list of instructions that, if performed, would carry out a sequential process — it just is a sequential process.

    The protein assembly instructions in D/RNA are precisely “a list of instructions that, if performed, would carry out a sequential process.” They require associated execution machinery, ribosomes, tRNA, enzymes etc, to actually assemble proteins step by step as chains of AAs. Onward stages count but are not strictly algorithmic. Such, of course, includes the algorithm to build haemoglobin’s underlying AA chain. Where, too, isolating a distinct phase in the production of proteins for consideration is patently reasonable and here quite revealing. Yes, I have not spoken to the stepwise transcription of DNA, or regulatory controls, or the editing of the transcript, or to chaperoned folding, or to agglomeration and modifications that may add metal atoms etc. That is not relevant to this focal point, though it is additional FSCO/I.

    We can take it as a given that you knew that proteins are assembled as AA chains in the ribosome, as a basic commonplace of contemporary knowledge (now taught in primary school). So, the real question is, why did you try to evade and obfuscate the point?

    The answer comes back, because of its strength, and what it points to; which is where you do not want to go.

    KF

  10. 10
    PyrrhoManiac1 says:

    @9

    The protein assembly instructions in D/RNA are precisely “a list of instructions that, if performed, would carry out a sequential process.” They require associated execution machinery, ribosomes, tRNA, enzymes etc, to actually assemble proteins step by step as chains of AAs.

    I think this analogy relies on a conflation between a specification of the steps of the process prior to its being carried out and an abstract description of the process as it is being carried out.

    If we take those to be the same thing, then it would make sense to say that genetic sequences function as algorithms. But I don’t think they are the same, so it still seems like the position rests on a mistake.

    We can take it as a given that you knew that proteins are assembled as AA chains in the ribosome, as a basic commonplace of contemporary knowledge (now taught in primary school). So, the real question is, why did you try to evade and obfuscate the point?

    I didn’t think that the messy biochemical details of translation and protein synthesis were relevant to the point I was making. I was claiming that the conception of genetic sequences as algorithms relies on abstracting those sequences away from the totality of cellular metabolism.

    If anything, I’m urging that a holistic, organicist, teleologically realist approach to biology should be applied even to single cells.

    The answer comes back, because of its strength, and what it points to; which is where you do not want to go.

    I feel like you’re insinuating something but honestly I can’t tell what it is.

  11. 11
    kairosfocus says:

    PM1, attempting to label an instance as analogy is itself a fallacy. One that is now being insisted on despite correction. The coded sequence is stored in DNA (where it can be read by us and understood to be an algorithm for AA chaining), it is transcribed [and perhaps edited] into mRNA. It is threaded into a ribosome, where starting with AUG –> start, with methionine, successive AAs are chained until one of three stop codons is encountered. So, no, we have an algorithm stored in a memory device, transcription [which BTW is also stepwise], threading, then start, sequence of goal directed steps, halt. We even have widely circulated tables of the standard code and know about 20+ variants, similar to what happened with BASIC. That is sufficient, save for those committed to deny what is evident. KF
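The start/elongate/stop reading frame described above can be illustrated with a toy routine. This is my own sketch, using only a handful of entries from the standard codon table, not a model of the full cellular machinery:

```python
# Toy illustration of the start -> elongate -> stop framing: scan for AUG,
# chain residues codon by codon, halt at a stop codon (UAA, UAG, UGA).
# Only a small subset of the standard genetic code is included here.

CODON_TABLE = {
    "AUG": "Met",  # start codon; also codes for methionine
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala", "AAA": "Lys",
    "UAA": None, "UAG": None, "UGA": None,  # stop codons
}

def translate(mrna: str) -> list[str]:
    """Read from the first AUG; append residues; halt at a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return []            # no start codon, nothing is built
    chain = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue is None:  # stop codon (or a codon outside this toy table)
            break
        chain.append(residue)
    return chain

print(translate("GGAUGUUUGGCUAAAA"))  # ['Met', 'Phe', 'Gly']
```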

    PS, I again point you to a leading, subject-shaping in fact, Biochem textbook:

    “The information in DNA is encoded in its linear (one-dimensional) sequence of deoxyribonucleotide subunits . . . . A linear sequence of deoxyribonucleotides in DNA codes (through an intermediary, RNA) for the production of a protein with a corresponding linear sequence of amino acids . . . Although the final shape of the folded protein is dictated by its amino acid sequence, the folding of many proteins is aided by “molecular chaperones” . . . The precise three-dimensional structure, or native conformation, of the protein is crucial to its function.” [Principles of Biochemistry, 8th Edn, 2021, pp 194 – 5. Now authored by Nelson, Cox et al, Lehninger having passed on in 1986. Attempts to rhetorically pretend on claimed superior knowledge of Biochemistry, that D/RNA does not contain coded information expressing algorithms using string data structures, collapse. We now have to address the implications of language, goal directed stepwise processes and underlying sophisticated polymer chemistry and molecular nanotech in the heart of cellular metabolism and replication.]

    See https://uncommondescent.com/darwinist-debaterhetorical-tactics/protein-synthesis-what-frequent-objector-af-cannot-acknowledge/

    It is the insistent refusal to acknowledge this basic state of affairs that is so telling.
