
PLOS editor reflects on teaching evolutionary biology “sensitively”?


A molecular biologist offers some thoughts on the recent suggestions by formerly out-windowed British science educator Michael Reiss, who wants to teach Darwinism nicely:

A science-based appreciation of the unimaginable size and age of the universe, taken together with compelling evidence for the relatively recent appearance of humans (Homo sapiens from their metazoan, vertebrate, tetrapod, mammalian, and primate ancestors) cannot help but impact our thinking as to our significance in the grand scheme of things (assuming that there is such a, possibly ineffable, plan)(1). The demonstrably random processes of mutation and the generally ruthless logic by which organisms survive, reproduce, and evolve, can lead even the most optimistic to question whether existence has any real meaning.

Consider, as an example, the potential implications of the progress being made in terms of computer-based artificial intelligence, together with advances in our understanding of the molecular and cellular connection networks that underlie human consciousness and self-consciousness. It is a small step to conclude, implicitly or explicitly, that humans (and all other organisms with a nervous system) are “just” wet machines that can (and perhaps should) be controlled and manipulated. The premise, the “self-evident truth”, that humans should be valued in and of themselves, and that their rights should be respected (2) is eroded by the ability of machines to perform what were previously thought to be exclusively human behaviors.

Mike Klymkowsky, “Is it possible to teach evolutionary biology “sensitively”?” at Bioliteracy

What Klymkowsky takes to be demonstrable fact is mostly a series of naturalist statements of belief in the first paragraph, and is flatly contradicted by mathematical facts in the second. No wonder people don’t want this stuff in the schools.

Why not address the way Darwinism is out of sync with the facts before we worry about teaching it “sensitively”?

See also: Educator proposes a more humane way to teach evolution. In 2008, Reiss ended up resigning from a Royal Society post because of an earlier effort to make Darwinism sound reasonable.


5 Replies to “PLOS editor reflects on teaching evolutionary biology “sensitively”?”

  1. AaronS1978 says:

    Exclusively human behaviors, huh? Demonstrated by machines, huh? Machines made by humans… so it’s still an exclusively human behavior, then. Only humans made the machine. His logic is faulty, and he is obviously completely taken in by the IP metaphor of the brain.

    The brain is an organism and is alive. That’s your first immediate difference, and there are many more. I think this was published in Nautilus, but I’m not sure; it’s a nice read. There is supposed to be a picture of a dollar bill in it, but I don’t think it showed up when I pasted this. The gentleman brings up very good points: the brain existed long before computers or any other machine, yet we created machines to replicate processes that we believe work in a very specific way. We created machines to replicate human behavior, and the machines came after the brain, yet we compare the brain to a computer. I agree with the gentleman below.

    “Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication (1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics.
    This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.
    Propelled by subsequent advances in both computer technology and brain research, an ambitious multidisciplinary effort to understand human intelligence gradually developed, firmly rooted in the idea that humans are, like computers, information processors. This effort now involves thousands of researchers, consumes billions of dollars in funding, and has generated a vast literature consisting of both technical and mainstream articles and books. Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013), exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.
    The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question.
    But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.
    Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the IP metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them.
    The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors.
    Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly.
    If the IP metaphor is so silly, why is it so sticky? What is stopping us from brushing it aside, just as we might brush aside a branch that was blocking our path? Is there a way to understand human intelligence without leaning on a flimsy intellectual crutch? And what price have we paid for leaning so heavily on this particular crutch for so long? The IP metaphor, after all, has been guiding the writing and thinking of a large number of researchers in multiple fields for decades. At what cost? 
    In a classroom exercise I have conducted many times over the years, I begin by recruiting a student to draw a detailed picture of a dollar bill – ‘as detailed as possible’, I say – on the blackboard in front of the room. When the student has finished, I cover the drawing with a sheet of paper, remove a dollar bill from my wallet, tape it to the board, and ask the student to repeat the task. When he or she is done, I remove the cover from the first drawing, and the class comments on the differences.
    Because you might never have seen a demonstration like this, or because you might have trouble imagining the outcome, I have asked Jinny Hyun, one of the student interns at the institute where I conduct my research, to make the two drawings. Here is her drawing ‘from memory’ (notice the metaphor):

    And here is the drawing she subsequently made with a dollar bill present:

    Jinny was as surprised by the outcome as you probably are, but it is typical. As you can see, the drawing made in the absence of the dollar bill is horrible compared with the drawing made from an exemplar, even though Jinny has seen a dollar bill thousands of times.
    What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing?
    Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found.
    A wealth of brain studies tells us, in fact, that multiple and sometimes large areas of the brain are often involved in even the most mundane memory tasks. When strong emotions are involved, millions of neurons can become more active. In a 2016 study of survivors of a plane crash by the University of Toronto neuropsychologist Brian Levine and others, recalling the crash increased neural activity in ‘the amygdala, medial temporal lobe, anterior and posterior midline, and visual cortex’ of the passengers.
    The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell?
    So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent.
    The difference between the two diagrams reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-member something (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before. 
    Perhaps you will object to this demonstration. Jinny had seen dollar bills before, but she hadn’t made a deliberate effort to ‘memorise’ the details. Had she done so, you might argue, she could presumably have drawn the second image without the bill being present. Even in this case, though, no image of the dollar bill has in any sense been ‘stored’ in Jinny’s brain. She has simply become better prepared to draw it accurately, just as, through practice, a pianist becomes more skilled in playing a concerto without somehow inhaling a copy of the sheet music.
    From this simple exercise, we can begin to build the framework of a metaphor-free theory of intelligent human behaviour – one in which the brain isn’t completely empty, but is at least empty of the baggage of the IP metaphor.
    As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways.
    We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded.
    Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.
    A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.
    A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.
    My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
    That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
    Two determined psychology professors at Leeds Beckett University in the UK – Andrew Wilson and Sabrina Golonka – include the baseball example among many others that can be looked at simply and sensibly outside the IP framework. They have been blogging for years about what they call a ‘more coherent, naturalised approach to the scientific study of human behaviour… at odds with the dominant cognitive neuroscience approach’. This is far from a movement, however; the mainstream cognitive sciences continue to wallow uncritically in the IP metaphor, and some of the world’s most influential thinkers have made grand predictions about humanity’s future that depend on the validity of the metaphor.
    One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014) starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity.
    Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing.
    Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.
    This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) – they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).
    This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.
    Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.
    Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.)
    Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.
    We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.”
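The fly-ball example quoted above can be made concrete with a short sketch. This is only a one-dimensional toy version of the idea, assuming a drag-free projectile; the launch numbers and the constant c are invented for illustration, not taken from McBeath’s paper. A fielder who simply keeps tan(alpha), the tangent of the ball’s elevation angle, growing linearly over time ends up exactly where the ball lands, without computing any trajectory:

```python
import math

G = 9.8  # gravity, m/s^2

def ball(t, v0, angle):
    """Idealized projectile position (no air resistance), launched from the origin."""
    return v0 * math.cos(angle) * t, v0 * math.sin(angle) * t - 0.5 * G * t * t

def fielder_lot(t, v0, angle, c=0.5):
    """Where a fielder must stand at time t so that tan(alpha) -- the
    tangent of the elevation angle from fielder to ball -- equals c*t,
    i.e. grows linearly.  No flight path is modelled or predicted."""
    x, z = ball(t, v0, angle)
    return x + z / (c * t)  # fielder stays downrange of the ball while it is in the air

v0, angle = 30.0, math.radians(50)       # invented launch numbers
t_flight = 2 * v0 * math.sin(angle) / G  # time until the ball returns to the ground
landing_x, _ = ball(t_flight, v0, angle)

# Sampling the strategy shows that the required path is a straight,
# constant-speed run that ends exactly where the ball lands.
path = [fielder_lot(t, v0, angle) for t in (0.5, 1.0, 2.0, t_flight)]
print(f"ball lands at {landing_x:.2f} m; fielder's run ends at {path[-1]:.2f} m")
```

The resulting run is a straight, constant-speed path: the heuristic needs no internal model of the ball’s flight, which mirrors the point made in the quoted essay.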

  2. bornagain77 says:

    His science-based thinking leaves a lot to be desired.

    First sentence:

    A science-based appreciation of the unimaginable size and age of the universe, taken together with compelling evidence for the relatively recent appearance of humans (Homo sapiens from their metazoan, vertebrate, tetrapod, mammalian, and primate ancestors) cannot help but impact our thinking as to our significance in the grand scheme of things (assuming that there is such a, possibly ineffable, plan)(1).

    As to the size and age of the universe, the size of the universe has always been taken by Christians to reflect the Greatness of God. The old time hymn “How Great Thou Art” reflects that sentiment:

    Carrie Underwood & Vince Gill duet “How Great Thou Art” ACM Girls’ Night Out

    Moreover, if the size of the universe were not what it is, we would not be here to talk about it. In other words, the size of the universe is yet another fine-tuned parameter that is necessary for life to even exist in the universe in the first place.

    As to the age of the universe, contrary to what he may believe, the fact that the universe has any age whatsoever is evidence that the universe had a beginning. The fact that the universe actually had a beginning is an exclusively Theistic proposition and certainly does not square with the presupposition, held within atheistic materialism, that the material universe has always existed. In other words, the beginning of the universe is a very powerful line of evidence that directly challenges the entire materialistic foundation of his atheistic worldview.

    As to the “relatively recent appearance of humans”, the fact that humans are the last species to ‘appear’ on earth is exactly what the bible predicted. Moreover, his evidence for evolution of humans from “metazoan, vertebrate, tetrapod, mammalian, and primate ancestors” is non-existent and is in fact contradicted by the fossil record and recent genetic evidence. Much less has anyone empirically shown that the transformation of one species into another species is realistically feasible.

    As to, “cannot help but impact our thinking as to our significance in the grand scheme of things”

    True, and yet the actual scientific evidence that we have from both General Relativity and Quantum Mechanics, the two most precisely tested theories in the history of science, reveals that we are far more significant in this vast and old universe than was dared to be imagined even by Christians just a few short years ago:

    I find it extremely interesting, and strange, that quantum mechanics tells us that instantaneous quantum wave collapse to its ‘uncertain’ 3-D state is centered on each individual observer in the universe, whereas 4-D space-time cosmology (General Relativity) tells us each 3-D point in the universe is central to the expansion of the universe. These findings of modern science are pretty much exactly what we would expect to see if this universe were indeed created, and sustained, from a higher dimension by an omniscient, omnipotent, omnipresent, eternal Being who knows everything that is happening everywhere in the universe at the same time. These findings certainly seem to go to the very heart of the age-old question asked of many parents by their children, “How can God hear everybody’s prayers at the same time?” That is, why should the expansion of the universe, or the quantum wave collapse of the entire universe, even care that you or I, or anyone else, should exist? Only Theism offers a rational explanation as to why you or I, or anyone else, should have such undeserved significance in such a vast universe:

    Overturning of the Copernican Principle by both General Relativity and Quantum Mechanics

    (April 2019) Overturning the Copernican principle
    Thus in conclusion, the new interactive graph by Dr. Dembski provides a powerful independent line of evidence, along with several other powerful lines of evidence, that overturns the Copernican principle and restores humanity back to centrality in the universe, and even, when putting all those lines of evidence together, brings modern science back, full circle, to Christianity from whence it originated in the first place.

  3. Fasteddious says:

    Fascinating topic. Of course our minds are not the same as modern computers. Recall that digital computers mostly replaced, some 50 or 60 years ago, analog ones that operated on entirely different principles. There have also been pneumatic and fluidic computers. Now there are “fuzzy logic” chips and neural network circuits that also behave differently. And, of course, all of these latter types can be simulated in digital computers. The human brain is, of course, not a digital computer and may be more like some of these others. Doubtless certain aspects of it can be simulated on digital computers, as in Alexa or Siri, or other AI constructs, but those are just simulations, designed to mimic us, and not true minds. If we ever fully understand the human brain (unlikely in my estimation), we will still not understand human minds, as there is surely some non-physical component to the mind and to consciousness.
    I take exception to one of the statements in Aaron’s lengthy quotation: that human minds do not “retrieve” specific memories. Who has not found that something was “on the tip of the tongue” when trying to remember something specific like a name, place or title, and then later had it pop into mind, complete with detailed spelling and added info? Clearly in such cases, the brain is digging out (retrieving) the memory somehow from somewhere inside it. This is not to say that the brain is acting like a computer with a slow or faulty memory. While I understand the author’s concern about equating the human mind to an IP system, this too is not so black and white. Clearly humans can process information, and we do it constantly as we deal with the experiences brought to us through our senses. Of course, our processing is nothing like that of a digital computer, or normal IP systems, but there is surely some degree of similarity. Perhaps our brains are more like complex analog computers, using fuzzy logic and neural network learning?
    Another idea touched on in the discussion is that of each human having a mental model of the world around him, which gets tested and revised or added to every day. This starts in the womb, I expect, and happens at a particularly fast pace following birth, when the brain is bombarded with a constant barrage of inputs, which the baby has to sort out and make sense of. What are these constantly changing visual images that move about and shift? Later, what are these things waving around in my sight as I do this? (E.g. my hands.) Then there is spatial awareness (near-far, left-right, up-down, edges and contrast, moving images, etc.), and later, control of his bodily movements, etc. Somehow, the brain, aided by a curious and active mind, creates, tests and puts together pieces of models of reality, and weaves them into a coherent whole that helps the baby learn and act in his environment. Once the baby has a largely consistent model that works for him, he can add to it, fit in more details and correct a few glitches. In this way babies are natural scientists and philosophers. How that model is “stored” and maintained in the brain is totally unknown, but any attempt to “explain” the mind or consciousness without reference to how our minds develop and grow starting before birth is going to fail.
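The “fuzzy logic” possibility raised in the comment above can be illustrated with a minimal sketch. This is not a model of the brain, just a toy fuzzy controller showing what graded, non-binary inference looks like; the membership ramps, rule outputs, and the fan-speed scenario are all invented for illustration:

```python
def warm(temp_c):
    """Fuzzy membership: how 'warm' a temperature is, on [0, 1],
    ramping up linearly from 15 C to 25 C."""
    return min(1.0, max(0.0, (temp_c - 15.0) / 10.0))

def hot(temp_c):
    """Fuzzy membership for 'hot', ramping up from 25 C to 35 C."""
    return min(1.0, max(0.0, (temp_c - 25.0) / 10.0))

def fan_speed(temp_c):
    """Tiny fuzzy controller: blend two rules, weighted by how strongly
    each one fires.  Rule 1: if warm, run the fan at 50%.
    Rule 2: if hot, run the fan at 100%."""
    w, h = warm(temp_c), hot(temp_c)
    if w + h == 0.0:
        return 0.0  # neither rule fires: fan off
    return (w * 50.0 + h * 100.0) / (w + h)

# The output varies smoothly with temperature instead of switching
# abruptly between discrete states, as a crisp (binary) rule set would.
for t in (10, 20, 30, 40):
    print(t, "C ->", round(fan_speed(t), 1), "% fan")
```

The point of the toy is only that inference here is a matter of degree: partial truths blend into a graded output, which is closer in spirit to an analog device than to the discrete logic of a digital computer.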

  4. Brother Brian says:

    F@3, very interesting comment. Definitely food for thought. I would be interested in your opinion on whether we will ever get to the point where we have some man-made device that can’t be distinguished from a human with regard to communicating, problem solving, reasoning, etc.

  5. Fasteddious says:

    BB @4:
    That is a good question, which goes beyond a simple Turing-test approach. Certainly, some AI will be able to “pass” a Turing test soon – simulating a human interviewee well enough to fool many interviewers. But that is just a simulation programmed to fool a naïve human. There are other tests that go beyond the Turing test. As AIs get “smarter”, they will be able to fool more and more people, depending on the nature and length of the interaction. Humans usually make assumptions about the person they are conversing with: some degree of honesty, some common ground, basic ethics, simple logic, etc. Clever programmers will make use of such assumptions to fool the interviewer. Perhaps AI experts should be the future interviewers instead of naïve humans?
    Regarding “problem solving and reasoning”, we already have AIs that can do simple versions of this, so the question would be: what level of problem or reasoning? Humans operate at varying levels in these areas as well. I think a better approach for the interviewer would be to ask “why?” questions, seeking personal explanations from the AI for opinions or statements made by it. Getting someone to explain themselves is a good way to probe their understanding and intellect.
    But to answer your question as asked, I expect the answer is yes: at some point there will be an AI that can present itself as a human and thereby fool most people for some reasonable length of time. However, that does not mean the AI would be conscious, self-aware, introspective, truly creative, etc. It would just be a complex algorithm, programmed by clever humans to emulate humans.

Leave a Reply