Uncommon Descent Serving The Intelligent Design Community

I am a machine. No, I am a tree. Here’s the problem with analogy …


A friend writes to say that nonsensical materialism is now being marketed to engineers via the IEEE, the largest professional engineering society in the world, with over 365,000 members.

This article, “I, Rodney Brooks, am a robot”, appeared in IEEE Spectrum, the magazine that goes to all members:

I am a machine. So are you.

Of all the hypotheses I’ve held during my 30-year career, this one in particular has been central to my research in robotics and artificial intelligence. I, you, our family, friends, and dogs—we all are machines. We are really sophisticated machines made up of billions and billions of biomolecules that interact according to well-defined, though not completely known, rules deriving from physics and chemistry. The biomolecular interactions taking place inside our heads give rise to our intellect, our feelings, our sense of self.

Accepting this hypothesis opens up a remarkable possibility. If we really are machines and if—this is a big if—we learn the rules governing our brains, then in principle there’s no reason why we shouldn’t be able to replicate those rules in, say, silicon and steel. I believe our creation would exhibit genuine human-level intelligence, emotions, and even consciousness.

There is no single, simple set of rules that governs the operations of the human brain or the mind that inhabits it.

It would make as much sense to say, “I am a tree” as “I am a robot.” In some ways, more. Trees are life forms, like humans. There are at least some qualities that we share with trees: we need water, nutrients, and oxygen; we grow, reproduce, and die. We have roots and branches. And the older we are, the harder it is to move us without excessive damage.

Still, we are not trees. And we certainly are not robots.

The fact that we can say “I am” anything at all, or “I am not” that thing, certainly tells us that we are not trees or robots. Philosophers call this the “hard problem” of consciousness: the sense of self.

As Mario and I pointed out in The Spiritual Brain, the “computer” theory of how the human mind works is badly in need of an early retirement.

Note: I don’t know where the sign is from, but am told it is somewhere in Britain.

Also from The Mindful Hack

How not to study science …

God is not dead yet – but some haven’t gotten the memo

And from Colliding Universes,

Berlinski: Creation of everything out of nothing – a clinical level of self-delusion?

And what if the Large Hadron Collider doesn’t find the Higgs boson … ? Philosophy time!

Philosopher: God is not dead, and physics arguments are one of the reasons

Stephen Hawking, miffed over science funding cuts, to move to Ontario, Canada?

And from the Post-Darwinist,

When people laugh, fascists fear for their livelihoods

God is not dead yet, and in fact

I must reserve a ticket for the Canuck Comics’ rally for freedom

Why would Brazilians want to hear from a chemist who thinks there is design in nature?

Enron and Darwinism – a perfect fit?

Trying to understand intelligent design? I see a hatchet in your future.

Look, you have lots of reasons to avoid pulling the quackgrass this weekend. Pull some of it anyway. It gets uglier every time you look at it. The act of looking at it unpulled makes it uglier.


The whole issue is devoted to the "singularity," where it seems we'll develop engineering "wetware," install it into our brains (eventually, probably, find a way to do it genetically?), and then you have an advanced being who inexorably designs newer, improved versions of "us" till we're indistinguishable from computers, or something like that. And the editor of that issue writes Denyse out of existence when he says something like: "no one talks of consciousness except in terms of biological machinery," or some such. es58
"Kurzweil’s plan to download his consciousness into a computer and live forever" Yeah, like that's gonna happen. Does Kurzweil think that's before or after the first matter transporters and warp drives are built? Beam me up, Scotty. DaveScot
That whole issue of Spectrum is interesting, including Kurzweil's plan to download his consciousness into a computer and live forever. es58
Sparc, the machines I discuss are more tractable for analysis than the human brain (or brain plus mind, for dualists). The machine example I generally prefer is the combination of ribosome and coding DNA, which is uncannily like a computer-controlled milling machine. I'm on the fence in regard to mind/body dualism. Not enough data. Some strange stuff needs to be accounted for if brain is all there is, but the brain is so complex and physics so incomplete that it's impossible to say with any confidence what it alone can or cannot do. DaveScot
P.S.--The Kurzweil Reader is a fantastic achievement. Atticus Finch
"Though I admire his work, I sure hope Ray Kurzweil is paying attention to this post." Actually, I mentioned Kurzweil's stupid idea of scanning and fabricating a copy of a brain to my mathematician acquaintance. Electrical engineers and physicists tend to think of the brain as a circuit. In fact, the brain is floating in a chemical bath. Some regions function differently at different times, depending on chemistry. What's a scan of brain structure going to tell you about that, Ray? BTW, I can't tell you the number of people who thought the problem I addressed in my dissertation research had already been solved by Kurzweil. The guy has a nasty habit of advertising his grandiose goals as though he's already achieved them. Atticus Finch
I love watching Apple assembler code scroll on the inside of The Terminator's eyeballs. One can only assume that there's a little terminator inside the big terminator, and a teeny terminator inside the little terminator, and ... It's a wonderful example of the homunculus regress. But Terminator 2 was hysterical:

- John Connor: Can you learn things that you haven't been programmed with, so you can be, you know, more human, and not just a dork all the time?
- The Terminator: My CPU is a neural net processor, a learning computer. The more contact I have with humans, the more I learn.
- John Connor: Cool.

My favorite AI movie moment comes from 2010. I saw it in the theater with a buddy, and a LOT of people turned to see why we were laughing our heads off and slapping our thighs.

- SAL-9000: Will I dream?
- Dr. Chandra: Of course you will. All intelligent beings dream. Nobody knows why.

What's not so funny is that the sci-fi project I mentioned above got $5 million from the NSF. Atticus Finch
Though I admire his work, I sure hope Ray Kurzweil is paying attention to this post. F2XL
I thought ID results in machines. You actually stated the same:
First, as a matter of literal fact, our bodies are composed of hundreds of billions of machines. Indeed, biologists cannot avoid using the terminology associated with machines when describing the activities inside our cells, however they assume that the microscopic machines originated. In other words, to the extent that God is a "creator of natures," the natures he creates are composed of machines. Our billions of bodily nano-machines do not, of course, rattle or clunk, but that is because they are sophisticated, not because they are not machines.
"Machine intelligence still makes a good sci-fi plot!" Indeed. The movie Blade Runner comes to mind (based on a Philip K. Dick novel that I have yet to read). The juxtaposition of the replicants' mania with their potential humanity resulted in a cult classic, for more reasons than I can ponder (the Vangelis soundtrack having something to do with it, I'm certain). Apollos
Machine intelligence still makes a good sci-fi plot! Processes generally don't scale in a linear manner forever, as in the idea that a little bit of machine intelligence will become a super machine intelligence if you add enough of the smaller bits together. Similar reasoning is applied to evolution: adding enough little bits of evolution together will turn a microbe into a mammoth. DaveScot
Just last week, a mathematician asked me if I thought a computer with a large number of densely-interconnected processors could learn human language. I told him that humans communicate with one another under the assumption of common experience. Much of our experience cannot be put into words, but we often use language with the unconscious expectation that it will evoke recollection of that experience. We also use language with a sense of the nonverbal (e.g., emotional) response it will evoke in the present. We can anticipate that response because we are like our listeners. I told him I thought that one has to live in a human body to understand human language. Then the mathematician revealed to me that he had just reviewed a proposal for a large grant to develop the machine he had described. Evidently someone made a big deal of the correlation of human intelligence with density of interconnection of neurons. The mathematician voted against the award, but he was in the minority. Atticus Finch
I agree completely, Denyse. Back before I went into psychology, I had grandiose ideas like those of the fellow you quote ... that some day, computers could be like human beings. But working with people day in and day out, and after studying the mind and computer science, I hardly ever compare the two unless I am explaining a specific concept. All I hope for is to scratch the surface enough with my knowledge of the mind to be able to help people. Fortunately, I view the mind as having self-corrective mechanisms designed in, and my job is to help the individual activate those processes. parapraxis
Denyse O'Leary, although Brooks refers to rules, he actually made his reputation as a detractor of symbolic artificial intelligence (of which rule-based expert systems were the prime example at the time). He believes that explicitly programmed AI systems can go only so far. His thinking is that complex behaviors emerge from collections of simple elements behaving according to simple rules (not rules in the sense of expert systems). His early bug-bots had six independent leg controllers with learning capability. When he first turned on the bots, they were unable to walk, but insect-like walking behavior (front and back legs down on one side, middle leg down on the other) emerged consistently when he delivered a global reinforcement signal for movement. That is, he told all the controllers when the entire bot had moved, and did not give feedback to individual controllers. What Brooks accomplished was extraordinary, but he is exhibiting now the infamous hubris of AI. So many researchers of the past have jumped to the conclusion that their small successes would scale up. You might think that people in the field, or at least people in the funding agencies, would learn from the past, but they do not. Atticus Finch
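[Editor's illustration] The global-reinforcement scheme described in the comment above can be caricatured in a few lines of code. This is a toy sketch under stated assumptions, not Brooks's actual controllers: the `tripod_score` stand-in for "distance moved," the hill-climbing update, and every name here are invented for the example. Six controllers each tune one phase parameter, and the only feedback any of them ever receives is whether the whole "robot" scored better.

```python
import random

class LegController:
    """One independent leg controller. It knows nothing about the other
    five legs; it only hears a global 'the whole robot improved' signal."""
    def __init__(self):
        self.phase = random.uniform(0.0, 1.0)  # fraction of gait cycle when this leg steps
        self.trial = self.phase

    def propose(self):
        # Try a small random perturbation of the current phase.
        self.trial = (self.phase + random.gauss(0.0, 0.1)) % 1.0
        return self.trial

    def reinforce(self, improved):
        # Global reinforcement only: keep the trial if the robot as a whole did better.
        if improved:
            self.phase = self.trial

def tripod_score(phases):
    """Toy stand-in for 'distance walked': highest (zero) when the six legs
    split into two synchronized, anti-phase groups -- the insect tripod gait
    (legs 0, 2, 4 versus legs 1, 3, 5). Ignores phase wrap-around for brevity."""
    a = [phases[i] for i in (0, 2, 4)]
    b = [phases[i] for i in (1, 3, 5)]
    spread = lambda g: max(g) - min(g)                 # each group should be in sync
    gap = abs((sum(a) / 3 - sum(b) / 3) % 1.0 - 0.5)   # groups half a cycle apart
    return -(spread(a) + spread(b) + gap)

def train(steps=3000, seed=0):
    """Hill-climb: all six controllers perturb at once, and a single scalar
    'improved' flag is broadcast back to every controller."""
    random.seed(seed)
    legs = [LegController() for _ in range(6)]
    best = tripod_score([leg.phase for leg in legs])
    for _ in range(steps):
        score = tripod_score([leg.propose() for leg in legs])
        improved = score > best
        for leg in legs:
            leg.reinforce(improved)
        if improved:
            best = score
    return legs, best
```

Run long enough, the independently perturbing legs tend to drift toward the two anti-phase groups the score rewards, which captures the flavor (though none of the substance) of the emergent gait described above: no controller is told what the others are doing, yet coordination appears.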
Brooks writes, "We are really sophisticated machines made up of billions and billions of biomolecules that interact according to well-defined, though not completely known, rules deriving from physics and chemistry." REALLY? Why just biomolecules? Show me a "life machine" that isn't critically dependent on inorganic molecules. And if we're playing a game of emergence, why not start at the quantum level? I'm not the brightest bulb, but I can respond in a fraction of a second to a single photon hitting my retina in an otherwise dark environment. "The biomolecular interactions taking place inside our heads give rise to our intellect, our feelings, our sense of self." Gee, thousands of idiot neuroscientists have gotten the idea that the nervous system runs throughout the body, and that it interacts in complex ways with the endocrine system. Puberty does seem to put the oddest notions in children's heads. And it does seem that I make my worst decisions playing poker when my heart is beating the hardest. Atticus Finch
parapraxis, as so often, I am running out the door, but yes, you are right, the mind is much more complex than a computer. But, as you likely realize, that is not simply the outcome of greater increments of complexity. It is a different order. Once upon a time, one of my children was going through engineering and she was telling me about the early efforts to create "expert systems." The conversation went something like this:

She: It's a problem. People like you are the problem. You know so much about publishing but you can't explain what you know.

Me: It's not easy to explain what I know. Normally, I must focus on a specific problem. I might need to go talk to some people or read up, or call a meeting ...

She: But that's just the trouble! You can't explain what you know.

Actually, the trouble is, what I know is part of who I am. I can aim myself at a problem, but the explanation is only somewhat less complex than the entity that is helping to resolve it. You won't get anywhere with expert systems without compensating for that. And even then, you face the problem of figuring out what you can cut without loss ... O'Leary
Yeah, as a psychologist and computer consultant, I find that the mind and a computer have little in common. The complexity of the mind dwarfs the complexity of a computer to the extent that they will never be comparable. http://thecountryshrink.com parapraxis
billions and billions
Hmm. Where have I heard that before? Atticus Finch
