Uncommon Descent Serving The Intelligent Design Community

Could the Internet ever be conscious? Definitely not before 2115, even if you’re a materialist.


This is a post about two scientists, united in their passion for one crazy idea. Brain scientist and serial entrepreneur Jeff Stibel thinks that the Internet is showing signs of intelligence and may already be conscious, according to a recent BBC report. So does neuroscientist Christof Koch. Koch, who has done a lot of pioneering work on the neural basis of consciousness, was the Lois and Victor Troendle Professor of Cognitive and Behavioral Biology at California Institute of Technology from 1986 until September 2012, when he took up a new job as Chief Scientific Officer of the Allen Institute for Brain Science in Seattle.

Stibel, who is nothing if not passionate about his cause, refers to the Internet as a “global brain” and claims that it is starting to develop “real intelligence, not artificial intelligence.” Stibel wants to help build this global intelligence: that is what drives him. He even describes the Internet as a new life-form, which may one day evolve on its own. Readers can view him talking about his project here.

Koch also waxes poetic on the intelligence of the Internet. Here is what he said in a recent interview with Atlantic journalist Steve Paulson, in a report titled, The Nature of Consciousness: How the Internet Could Learn to Feel (August 22, 2012):

Are you saying the Internet could become conscious, or maybe already is conscious?

Koch: That’s possible. It’s a working hypothesis that comes out of artificial intelligence. It doesn’t matter so much that you’re made out of neurons and bones and muscles. Obviously, if we lose neurons in a stroke or in a degenerative disease like Alzheimer’s, we lose consciousness. But in principle, what matters for consciousness is the fact that you have these incredibly complicated little machines, these little switching devices called nerve cells and synapses, and they’re wired together in amazingly complicated ways. The Internet now already has a couple of billion nodes. Each node is a computer. Each one of these computers contains a couple of billion transistors, so it is in principle possible that the complexity of the Internet is such that it feels like something to be conscious. I mean, that’s what it would be if the Internet as a whole has consciousness. Depending on the exact state of the transistors in the Internet, it might feel sad one day and happy another day, or whatever the equivalent is in Internet space.

You’re serious about using these words? The Internet could feel sad or happy?

Koch: What I’m serious about is that the Internet, in principle, could have conscious states. Now, do these conscious states express happiness? Do they express pain? Pleasure? Anger? Red? Blue? That really depends on the exact kind of relationship between the transistors, the nodes, the computers. It’s more difficult to ascertain what exactly it feels. But there’s no question that in principle it could feel something.

In the course of the interview, Koch also expressed his personal sympathy with the idea that consciousness is a fundamental feature of the universe, like space, time, matter and energy. Koch is also open to the philosophy of panpsychism, according to which all matter has some consciousness associated with it, although the degree of consciousness varies enormously, depending on the complexity of the system. He acknowledges, however, that most neuroscientists don’t share his views on such matters.

After taking a pot shot at Cartesian dualism (which was espoused by the late neuroscientist and Nobel prizewinner Sir John Eccles), Koch argues that consciousness boils down to connectivity:

Unless you believe in some magic substance attached to our brain that exudes consciousness, which certainly no scientist believes, then what matters is not the stuff the brain is made of, but the relationship of that stuff to each other. It’s the fact that you have these neurons and they interact in very complicated ways. In principle, if you could replicate that interaction, let’s say in silicon on a computer, you would get the same phenomena, including consciousness.

So how does the Internet stack up against the human brain? Slate reporter Dan Falk, who interviewed Koch by phone recently, quotes Koch as calculating that the Internet already has 1,000 times as many transistors as the human brain has synapses, in a report titled, Could the Internet Ever “Wake Up”? And would that be such a bad thing? (September 20, 2012):

In his book Consciousness: Confessions of a Romantic Reductionist, published earlier this year, he makes a rough calculation: Take the number of computers on the planet — several billion — and multiply by the number of transistors in each machine — hundreds of millions — and you get about a billion billion, written more elegantly as 10^18. That’s a thousand times larger than the number of synapses in the human brain (about 10^15)…

In a phone interview, Koch noted that the kinds of connections that wire together the Internet — its “architecture” — are very different from the synaptic connections in our brains, “but certainly by any measure it’s a very, very complex system. Could it be conscious? In principle, yes it can.”

…We can’t pin down the date when the Internet surpasses our brains in complexity, he says, “but clearly it is going to happen at some point.”

A neuron is not a transistor

Diagram of a typical myelinated vertebrate motoneuron. Image courtesy of LadyofHats and Wikipedia.

There’s an implied assumption in the foregoing discussion: that one neuron equals one transistor. I don’t think so. One blogger, who calls himself The Buckeye Monkey, puts it this way in a thoughtful article:

A transistor is not a neuron, not even close. A transistor is just a simple electronic switch…

See how it only has 3 contacts? The one on the left is the input, the one in the middle is the control, and the one on the right is the output. By applying a small voltage on the control (middle) you turn on the switch and allow current to flow from the input to the output…

Making out like 1 transistor = 1 neuron is beyond nonsense, it’s asinine…

The fact is our brains are not simply evolution’s version of electronic computers. Our brains are electro-chemical computing devices. Each neuron can have 1000 connections to other neurons, and the chemical soup of hormones sloshing around in our skulls can have drastic effects on how they process information. Every neuron receives a vast array of input signals from other neurons and turns that into its own complicated firing pattern that is not fully understood. Neurons are not simple on/off switches. They are a hell of a lot more sophisticated than that.

Flaws in the calculations

With the greatest respect to Professor Koch, I have to say that his calculations are badly wrong, too. I would refer him to an article by Professor David Deamer, of the Department of Biomolecular Engineering, University of California, entitled Consciousness and Intelligence in Mammals: Complexity thresholds, in the Journal of Cosmology, 2011, Vol. 14. The upshot of Deamer’s calculation is that even if you think that consciousness resides in matter (as Stibel, Koch and Deamer all do), then the Internet still falls a long way short of the human brain, in terms of its complexity. In fact, it falls 40 orders of magnitude short.

In the article, Deamer proposes a way to estimate complexity in the mammalian brain using the number of cortical neurons, their synaptic connections and the encephalization quotient. His calculation assumes that the following three (materialistic) postulates hold:

The first postulate is that consciousness will ultimately be understood in terms of ordinary chemical and physical laws…

The second postulate is that consciousness is related to the evolution of anatomical complexity in the nervous system…. The second postulate suggests that consciousness can emerge only when a certain level of anatomical complexity has evolved in the brain that is directly related to the number of neurons, the number of synaptic connections between neurons, and the anatomical organization of the brain…

This brings us to the third postulate, that consciousness, intelligence, self-awareness and awareness are graded, and have a threshold that is related to the complexity of nervous systems. I will now propose a quantitative formula that gives a rough estimate of the complexity of nervous systems. Only two variables are required: the number of units in a nervous system, and the number of connections (interactions) each unit has with other units in the system. The formula is simple: C(complexity)=log(N)*log(Z) where N is the number of units and Z is the average number of synaptic inputs to a single neuron.

Deamer gets his figures for the human brain (and other animal brains) from Roth and Dicke’s 2005 article, Evolution of the brain and intelligence (Trends Cognitive Sciences 9: 250-257). The human brain contains 11,500,000,000 cortical neurons. That’s N in his formula. Log(N) is about 10.1. Z, the number of synapses per neuron, is astonishingly high: “Each human cortical neuron has approximately 30,000 synapses per cell.” Thus log(Z) is about 4.5. According to Deamer’s complexity formula, the complexity of the human brain is 10.1 x 4.5, or 45.5.
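Deamer’s human-brain figure is easy to check. Here is a minimal sketch in Python (the variable names are mine; the inputs are the Roth and Dicke values quoted above):

```python
import math

# Deamer's complexity formula: C = log10(N) * log10(Z)
N = 11_500_000_000   # cortical neurons in the human brain (Roth and Dicke, 2005)
Z = 30_000           # average synaptic inputs per cortical neuron

C = math.log10(N) * math.log10(Z)
print(round(C, 1))   # 45.0; Deamer rounds the logs to 10.1 and 4.5, giving 45.5
```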

What about other animals’ brains? How complex are they?

A dog looking at himself in the mirror. Like most animals, dogs show no sign of self-recognition when looking at themselves in the mirror. Image courtesy of Georgia Pinaud and Wikipedia.

How do non-human animals compare with us? The raw figures are as follows: elephant 45, chimpanzee 44.1, dolphin 43.6, gorilla 43.2, horse 39.1, dog 37.8, rhesus (monkey) 39.1, cat 32.7, opossum 31.8, rat 31, mouse 23.4. Deamer then makes an adjustment for these animals, based on their body sizes and encephalization quotients: “The complexity equation then becomes C=log(N*EQa/EQh)*log(Z), where EQa is the animal EQ and EQh is the human EQ, taken to be 7.6.”

The normalized complexity figures are now as follows: humans 45.5, dolphins 43.2, chimpanzees 41.8, elephant 41.8, gorilla 40.0, rhesus (monkey) 36.5, horse 34.8, dog 34.4, cat 32.7, rat 25.4, opossum 24.9, mouse 23.2.
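The EQ-adjusted version can be sketched the same way. The helper function below is my own hypothetical wrapper around Deamer’s formula; the one input I can verify from the article is the human case, where EQa equals EQh and the adjustment vanishes:

```python
import math

def normalized_complexity(n, z, eq_animal, eq_human=7.6):
    """Deamer's EQ-adjusted complexity: C = log10(N * EQa / EQh) * log10(Z)."""
    return math.log10(n * eq_animal / eq_human) * math.log10(z)

# Sanity check: for humans the EQ ratio is 1, so the raw formula is recovered.
human = normalized_complexity(11_500_000_000, 30_000, eq_animal=7.6)
print(round(human, 1))   # 45.0 (45.5 with Deamer's rounded logs)
```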

Commenting on the results, Deamer remarks:

If we asked a hundred thoughtful colleagues to rank this list of mammals according to their experience and observations, I predict that their lists, when averaged to reduce idiosyncratic choices, would closely reflect the calculated ranking. It is interesting that all six animals with normalized complexity values of 40 and above are self-aware according to the mirror test, the rhesus monkey is borderline at 36.5, while the animals with complexity values of 35 and below do not exhibit this behavior. This jump between C = 36.5 and 40 appears to reflect a threshold related to self-awareness.

Although mammals with normalized complexity values between 40 and 43.2 are self-aware and are perhaps conscious in a limited capacity, they do not exhibit what we recognize as human intelligence. It seems that a normalized complexity value of 45.5 is required for human consciousness and intelligence, that is, 10 – 20 billion neurons, each on average with 30,000 connections to other neurons, and an EQ of 7.6. Only the human brain has achieved this threshold.

Deamer concludes his article with a prediction:

…[B]ecause of the limitations of computer electronics, it will be virtually impossible to construct a conscious computer in the foreseeable future. Even though the number of transistors (N) in a microprocessor chip now approaches the number of neurons in a mammalian brain, each chip has a Z of 2, that is, its input-output response is directly connected to just two other transistors. This is in contrast to a mammalian neuron, in which function is modulated by thousands of synaptic inputs and output relayed to hundreds of other neurons. According to the quantitative formula described above, the complexity of the human nervous system is log(N)*log(Z)=45.5, while that of a microprocessor with 781 million transistors is 8.9*0.3=2.67, many orders of magnitude less… Interestingly, for the nematode the calculated complexity C=3.2, assuming an average of 20 synapses per neuron, so the functioning nervous system of this simple organism could very well be computationally modeled.
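Both of Deamer’s comparison figures check out. A quick sketch (the 302-neuron count for the nematode C. elegans is the standard figure, an assumption the quoted passage leaves implicit):

```python
import math

def complexity(n, z):
    """Deamer's formula: C = log10(N) * log10(Z)."""
    return math.log10(n) * math.log10(z)

chip = complexity(781_000_000, 2)  # microprocessor: each transistor wired to 2 others
worm = complexity(302, 20)         # C. elegans: 302 neurons, ~20 synapses each
print(round(chip, 2))  # 2.68 (Deamer's rounded logs, 8.9 * 0.3, give 2.67)
print(round(worm, 1))  # 3.2
```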

So there you have it. A microprocessor with around 1 billion transistors is in the same mental ballpark as … a worm. Rather an underwhelming result, don’t you think?

“What about the Internet as a whole?” you might ask. As we saw above, the number of transistors (N) in the entire Internet is 10^18, so log(N) is 18. log(Z) is log(2) or about 0.3, so C=(18*0.3)=5.4. That’s right: on Deamer’s scale, the complexity of the entire Internet is a miserable 5.4, or 40 orders of magnitude less than that of the human brain, which stands at 45.5.

Remember that Deamer’s formula is a logarithmic one, using logarithms to base 10. What that means is that the human brain is, in reality, 10,000,000,000,000,000,000,000,000,000,000,000,000,000 times more complex than the entire Internet! And that’s based on explicitly materialistic assumptions about consciousness.
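Plugging the Internet’s numbers into the same formula reproduces both the 5.4 score and the 40-order gap (a sketch under the article’s own assumptions of 10^18 transistors with 2 connections each):

```python
import math

brain = 45.5                                  # Deamer's figure for the human brain
internet = math.log10(1e18) * math.log10(2)   # 10^18 transistors, Z = 2
print(round(internet, 1))        # 5.4
print(round(brain - internet))   # 40 orders of magnitude, on Deamer's log scale
```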

I somehow don’t think we’re going to be seeing a conscious Internet for some time yet.

To be fair, Deamer does point out that “what the microprocessor lacks in connectivity can potentially be compensated in part by speed, which in the most powerful computers is measured in teraflops compared with the kilohertz activity of neurons.” For argument’s sake, I’m going to apply that figure to the Internet as a whole. 10^12 divided by 10^3 is 10^9, so let’s lop off nine zeroes. That still makes the human brain 10,000,000,000,000,000,000,000,000,000,000 or 10^31 times more complex than the entire Internet.

Moore’s law: definitely no Internet consciousness before 2115, and probably never

Gordon Moore in 2006. Image courtesy of Steve Jurvetson and Wikipedia.

Moore’s law tells us that over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. How long will it take, at that rate, for the Internet to catch up with the human brain? Each doubling adds log(2), or about 0.3, orders of magnitude, so closing a 31-order gap requires 31/log(2), or about 103, doublings. Even on the charitable assumption of one doubling per year, that puts us 103 years away, in the year 2115; at Moore’s stated two-year doubling, the wait would be more than two centuries. So 2115 is a conservative floor.
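The Moore’s-law step can be checked the same way. One caution: 31/log(2) counts doublings, and doublings equal years only if complexity doubles annually; at the two-year doubling Moore’s law actually states, the wait is roughly twice as long, so 2115 is best read as a floor rather than an estimate:

```python
import math

gap = 31                             # orders of magnitude left after the speed credit
doublings = gap / math.log10(2)      # each doubling adds log10(2) ≈ 0.3 orders
print(round(doublings))              # 103
print(2012 + round(doublings))       # 2115, if complexity doubled every year
print(2012 + 2 * round(doublings))   # 2218, at Moore's stated two-year doubling
```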

That’s assuming, of course, that Moore’s law continues to hold that long. Who says so? Why, Moore himself! Here’s Wikipedia:

On 13 April 2005, Gordon Moore stated in an interview that the law cannot be sustained indefinitely: “It can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.” He also noted that transistors would eventually reach the limits of miniaturization at atomic levels:

In terms of size [of transistors] you can see that we’re approaching the size of atoms which is a fundamental barrier, but it’ll be two or three generations before we get that far — but that’s as far out as we’ve ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they’ll be able to make bigger chips and have transistor budgets in the billions.

So it appears that even if we one day figured out how to build a conscious Internet, we’d never be able to afford to make one. I have to say I think that’s a rather fortunate thing.

Although I don’t believe that the reflective consciousness which distinguishes human beings can be boiled down to neuronal connections (see my recent post, Is meaning located in the brain?), I nevertheless think that the primary consciousness which is found in most (and perhaps all) mammals and birds (and possibly also in cephalopods, such as octopuses) is the product of the interconnectivity of the neurons in their brains. An artificial entity which possessed all the world’s stored data and which had the same mental abilities as a dog or a dolphin could do quite a bit of mischief, if it had a mind to, and I think it’s dangerously naive to assume that such an entity would be benevolent. It’s far more likely that it would be amoral, spiteful or even insane (from loneliness). So I for one am somewhat relieved to discover that the Internet will never be out to get us.

Why the idea of a conscious Internet is biologically naive

Numerical calculations aside, Koch’s claim that the Internet could be conscious is biologically flawed, as was pointed out recently by Professor Daniel Dennett (who was also Jeff Stibel’s mentor) in the above-cited article (September 20, 2012) by Slate journalist Dan Falk:

“The connections in brains aren’t random; they are deeply organized to serve specific purposes,” Dennett says. “And human brains share further architectural features that distinguish them from, say, chimp brains, in spite of many deep similarities. What are the odds that a network, designed by processes serving entirely different purposes, would share enough of the architectural features to serve as any sort of conscious mind?”

Dennett also pointed out that while the Internet had a very high level of connectivity, the difference in architecture “makes it unlikely in the extreme that it would have any sort of consciousness.”

Physicist Sean Carroll, who was also interviewed for the article, was equally dismissive of the idea that the Internet would ever be conscious, even as he acknowledged that there was nothing stopping the Internet from having the computational capacity of a conscious brain:

“Real brains have undergone millions of generations of natural selection to get where they are. I don’t see anything analogous that would be coaxing the Internet into consciousness…. I don’t think it’s at all likely.”

Even on Darwinian grounds, then, the idea of a conscious Internet appears to be badly flawed.

What are the neural requirements for consciousness?

Midline structures in the brainstem and thalamus necessary to regulate the level of brain arousal. Small, bilateral lesions in many of these nuclei cause a global loss of consciousness. Image (courtesy of Wikipedia) taken from Christof Koch (2004), The Quest for Consciousness: A Neurobiological Approach, Roberts, Denver, CO, with permission from the author under license.

While we’re about it, we might ask: what are the neural requirements for consciousness? Well, that depends on what kind of consciousness you’re talking about. Neuroscientists distinguish two kinds of consciousness – primary consciousness and higher-order consciousness – according to a widely cited paper by Dr. James Rose, entitled, The Neurobehavioral Nature of Fishes and the Question of Awareness and Pain (Reviews in Fisheries Science, 10(1): 1–38, 2002):

Although consciousness has multiple dimensions and diverse definitions, use of the term here refers to two principal manifestations of consciousness that exist in humans (Damasio, 1999; Edelman and Tononi, 2000; Macphail, 1998): (1) “primary consciousness” (also known as “core consciousness” or “feeling consciousness”) and (2) “higher-order consciousness” (also called “extended consciousness” or “self-awareness”). Primary consciousness refers to the moment-to-moment awareness of sensory experiences and some internal states, such as emotions. Higher-order consciousness includes awareness of one’s self as an entity that exists separately from other entities; it has an autobiographical dimension, including a memory of past life events; an awareness of facts, such as one’s language vocabulary; and a capacity for planning and anticipation of the future. Most discussions about the possible existence of conscious awareness in non-human mammals have been concerned with primary consciousness, although strongly divided opinions and debate exist regarding the presence of self-awareness in great apes (Macphail, 1998)…

Although consciousness has been notoriously difficult to define, it is quite possible to identify its presence or absence by objective indicators. This is particularly true for the indicators of consciousness assessed in clinical neurology, a point of special importance because clinical neurology has been a major source of information concerning the neural bases of consciousness. From the clinical perspective, primary consciousness is defined by: (1) sustained awareness of the environment in a way that is appropriate and meaningful, (2) ability to immediately follow commands to perform novel actions, and (3) exhibiting verbal or nonverbal communication indicating awareness of the ongoing interaction (Collins, 1997; Young et al., 1998). Thus, reflexive or other stereotyped responses to sensory stimuli are excluded by this definition. (PDF, p. 5)

According to a paper by A. K. Seth, B. J. Baars and D. B. Edelman, entitled, Criteria for consciousness in humans and other mammals (Consciousness and Cognition, 14 (2005), 119–139), primary consciousness has three distinguishing features at the neurological level:

Physiologically, three basic facts stand out about consciousness.

2.1. Irregular, low-amplitude brain activity

Hans Berger discovered in 1929 that waking consciousness is associated with low-level, irregular activity in the raw EEG, ranging from about 20–70 Hz (Berger, 1929). Conversely, a number of unconscious states—deep sleep, vegetative states after brain damage, anesthesia, and epileptic absence seizures—show a predominance of slow, high-amplitude, and more regular waves at less than 4 Hz (Baars, Ramsoy, & Laureys, 2003). Virtually all mammals studied thus far exhibit the range of neural activity patterns diagnostic of both conscious states…

2.2. Involvement of the thalamocortical system

In mammals, consciousness seems to be specifically associated with the thalamus and cortex (Baars, Banks, & Newman, 2003)… To a first approximation, the lower brainstem is involved in maintaining the state of consciousness, while the cortex (interacting with thalamus) sustains conscious contents. No other brain regions have been shown to possess these properties… Regions such as the hippocampal system and cerebellum can be damaged without a loss of consciousness per se.

2.3. Widespread brain activity

Recently, it has become apparent that conscious scenes are distinctively associated with widespread brain activation (Srinivasan, Russell, Edelman, & Tononi, 1999; Tononi, Srinivasan, Russell, & Edelman, 1998c). Perhaps two dozen experiments to date show that conscious sensory input evokes brain activity that spreads from sensory cortex to parietal, prefrontal, and medial-temporal regions; closely matched unconscious input activates mainly sensory areas locally (Dehaene et al., 2001). Similar findings show that novel tasks, which tend to be conscious and reportable, recruit widespread regions of cortex; these tasks become much more limited in cortical representation as they become routine, automatic and unconscious (Baars, 2002)…

Together, these first three properties indicate that consciousness involves widespread, relatively fast, low-amplitude interactions in the thalamocortical core of the brain, driven by current tasks and conditions. Unconscious states are markedly different and much less responsive to sensory input or endogenous activity.

The neural requirements for higher-order consciousness, which is only known to occur in human beings, are even more demanding, as Rose points out in his article, where he contrasts the neurological prerequisites for primary and higher-order consciousness:

Primary consciousness appears to depend greatly on the functional integrity of several cortical regions of the cerebral hemispheres especially the “association areas” of the frontal, temporal, and parietal lobes (Laureys et al., 1999, 2000a-c). Primary consciousness also requires the operation of subcortical support systems such as the brainstem reticular formation and the thalamus that enable a working condition of the cortex. However, in the absence of cortical operations, activity limited to these subcortical systems cannot generate consciousness (Kandel et al., 2000; Laureys et al., 1999, 2000a; Young et al., 1998). Wakefulness is not evidence of consciousness because it can exist in situations where consciousness is absent (Laureys et al., 2000a-c). Dysfunction of the more lateral or posterior cortical regions does not eliminate primary consciousness unless this dysfunction is very anatomically extensive (Young et al., 1998).

Higher-order consciousness depends on the concurrent presence of primary consciousness and its cortical substrate, but the additional complexities of this consciousness require functioning of additional cortical regions. For example, long-term, insightful planning of behavior requires broad regions of the “prefrontal” cortex. Likewise, awareness of one’s own bodily integrity requires activity of extensive regions of parietal lobe cortex (Kolb and Whishaw, 1995). In general, higher-order consciousness appears to depend on fairly broad integrity of the neocortex. Widespread degenerative changes in neocortex such as those accompanying Alzheimer’s disease, or multiple infarcts due to repeated strokes, can cause a loss of higher-order consciousness and result in dementia, while the basic functions of primary consciousness remain (Kandel et al., 2000; Kolb and Whishaw, 1995). (PDF, pp. 5-6)

No consciousness without a neocortex: another reason why the Internet will never wake up

Anatomical subregions of the cerebral cortex. The neocortex is the outer layer of the cerebral hemispheres. It is made up of six layers, labelled I to VI (with VI being the innermost and I being the outermost). The neocortex is a part of the brain found in mammals. A homologous structure also exists in birds. Image (courtesy of Wikipedia) taken from Patrick Hagmann et al. (2008) “Mapping the Structural Core of Human Cerebral Cortex,” PLoS Biology 6(7): e159. doi:10.1371/journal.pbio.0060159.

“That’s all very well,” some readers may object, “but who says that the Internet has to have a brain like ours? Why couldn’t it still be conscious, even with a very different architecture?” But that’s extremely unlikely, according to the above-cited article by Dr. James Rose:

It is a well-established principle in neuroscience that neural functions depend on specific neural structures. Furthermore, the form of those structures, to a great extent, dictates the properties of the functions they subserve. If the specific structures mediating human pain experience, or very similar structures, are not present in an organism’s brain, a reasonably close approximation of the pain experience can not be present. If some form of pain awareness were possible in the brain of a fish, which diverse evidence shows is highly improbable, its properties would necessarily be so different as to not be comparable to human-like experiences of pain and suffering…

… There may be other cortical regions and processes that are important for the totality of the pain experience. The most important point here is that the absolute dependence of pain experience on neocortical functions is now well established (Price, 1999; Treede et al., 1999).

It is also revealing to note that the cortical regions responsible for the experience of pain are essentially the same as the regions most vital for consciousness. Functional imaging studies of people in a persistent vegetative state due to massive cortical dysfunction (Laureys et al., 1999, 2000a,b) showed that unconsciousness resulted from a loss of brain activity in widespread cortical regions, but most specifically the frontal lobe, especially the cingulate gyrus, and parietal lobe cortex. (PDF, pp. 27, 18)

Thus, according to Rose, a structure radically different from our neocortex would have a different function: whatever it would be for, it wouldn’t be consciousness. Therefore the odds that two structures as radically different as the human brain and the Internet would both be capable of supporting consciousness are very low indeed.

The human brain: skilfully engineered over a period of 3.5 million years

Many neo-Darwinists seem to be under the completely false impression that the human brain is merely a scaled-up, more powerful version of the chimpanzee brain. Nothing could be further from the truth: the two brains are radically different. In addition to the massive growth in the human brain over the last three million years, there have also been massive reorganizational changes in the human brain, which are not easy to account for on a Darwinian paradigm. The major reorganizational changes, which are listed in the paper, “Evolution of the Brain in Humans – Paleoneurology” by Ralph Holloway, Chet Sherwood, Patrick Hof and James Rilling (in The New Encyclopedia of Neuroscience, Springer, 2009, pp. 1326-1334), include the following:

(1) Reduction of primary visual striate cortex, area 17, and relative increase in posterior parietal cortex, between 2.0 and 3.5 million years ago;

(2) Reorganization of the frontal lobe (Third inferior frontal convolution, Broca’s area, widening prefrontal), between 1.8 and 2.0 million years ago;

(3) Cerebral asymmetries in the left occipital, right-frontal petalias, arising between 1.8 and 2.0 million years ago; and

(4) Refinements in cortical organization to a modern Homo pattern (1.5 million years ago to present).

Concerning the second reorganization, which affected Broca’s region, Holloway et al. write:

Certainly, the second reorganizational pattern, involving Broca’s region, cerebral asymmetries of a modern human type and perhaps prefrontal lobe enlargement, strongly suggests selection operating on a more cohesive and cooperative social behavioral repertoire, with primitive language a clear possibility. By Homo erectus times, ca. 1.6–1.7 MYA [million years ago – VJT], the body plan is essentially that of modern Homo sapiens – perhaps somewhat more lean-muscled bodies but statures and body weights within the modern human range. This finding indicates that relative brain size was not yet at the modern human peak and also indicates that not all of hominid brain evolution was a simple allometric exercise…

But that’s not all. A little over half a million years ago, the brains of our ancestors underwent a revolution which made it possible for them to make long-term commitments and control their impulses much better than ever before. Was this the crucial step that made us morally aware beings?

Homo heidelbergensis: one of us?

(Left: An artistic depiction of Heidelberg man, courtesy of Jose Luis Martinez Alvarez and Wikipedia.
Right: A hand-axe made by Heidelberg man 500,000 years ago in Boxgrove, England. Image courtesy of Midnightblueowl and Wikipedia.)

An additional reorganization of the brain occurred over 500,000 years ago, with the emergence of Homo heidelbergensis (Heidelberg man), according to Benoit Dubreuil’s article, Paleolithic public goods games: why human culture and cooperation did not evolve in one step (abstract only available online) (Biology and Philosophy, 2010, 25:53–73, DOI 10.1007/s10539-009-9177-7). Heidelberg man, who emerged in Africa and who also lived in Europe and Asia, was certainly a skilled hunter: a recent report by Alok Jha in The Guardian (15 November 2012) reveals that he was hunting animals with lethal, stone-tipped wooden spears as early as half a million years ago. He also had a brain size of 1100-1400 cubic centimeters, which falls within the modern human range. Some authorities believe that Homo heidelbergensis possessed a primitive form of language. No art, and no artifacts more sophisticated than stone tools, have been discovered from Heidelberg man, although red ocher, a mineral that can be used to create a red pigment, has been found at a site in France.

In his article, Dubreuil argues that around 500,000 years ago, two activities became entrenched features of human life: big-game hunting, which is highly rewarding in terms of food if successful, but very dangerous for the hunters, who might easily be gored by the animals they are trying to kill; and life-long monogamy, required for rearing children whose prolonged infancy and large, energy-demanding brains would have made it impossible for their mothers to feed them alone, without a committed husband to provide for the family. Dubreuil refers to these two activities as “cooperative feeding” and “cooperative breeding,” and describes them as “Paleolithic public good games” (PPGGs).

Dubreuil points out that for both of these activities, there would have been a strong temptation to defect when the going got tough: to run away from a big mammoth, or to walk out on one’s spouse and children. Preventing this anti-social behavior would have required an extensive reorganization of the pre-frontal cortex (PFC) of the human brain, which plays a vital role in impulse control. Thus it is likely that Homo heidelbergensis had a well-developed brain with a large pre-frontal cortex, which allowed him to keep his selfish impulses in check, for the good of his family and his tribe:

I present evidence that Homo heidelbergensis became increasingly able to secure contributions from others in two demanding Paleolithic public good games (PPGGs): cooperative feeding and cooperative breeding. I argue that the temptation to defect is high in these PPGGs and that the evolution of human cooperation in Homo heidelbergensis is best explained by the emergence of modern-like abilities for inhibitory control and goal maintenance. These executive functions are localized in the prefrontal cortex and allow humans to stick to social norms in the face of competing motivations. This scenario is consistent with data on brain evolution that indicate that the largest growth of the prefrontal cortex in human evolution occurred in Homo heidelbergensis and was followed by relative stasis in this part of the brain. One implication of this argument is that subsequent behavioral innovations, including the evolution of symbolism, art, and properly cumulative culture in modern Homo sapiens, are unlikely to be related to a reorganization of the prefrontal cortex, despite frequent claims to the contrary in the literature on the evolution of human culture and cognition. (Abstract)

Homo heidelbergensis was able to stick to very demanding cooperative arrangements in connection with feeding and breeding. The fact that such behaviors appear in our evolution much before art or symbolism, I contend, implies that human culture and cooperation did not evolve in one step… (pp. 54-55)

We know that the prefrontal cortex plays a central role in executive functions. The dorsolateral cortex, more particularly, one of the latest maturing parts of the prefrontal cortex in children, is associated with inhibitory control and goal maintenance in all kinds of social tasks (Sanfey et al. 2003; van ‘t Wout et al. 2006; Knoch et al. 2006). I explain in [the] Section “Brain evolution and the case for a change in the PFC” why I think that a change in this part of the brain can be parsimoniously linked to the behavioral evolution found in Homo heidelbergensis. (p. 57)

There are serious and well-known limitations to the reconstruction of brain evolution… Consequently, I will not claim that there has been a single reorganization of the PFC in the human lineage and that it happened in Homo heidelbergensis. I will rather contend that, if there is only one point in our lineage where such reorganization happened, it was in all likelihood there. (p. 64)

Holloway et al. concur with Dubreuil’s assessment that the pre-frontal cortex, which plays an important part in impulse control, has not changed much in the last half million years, for they acknowledge that it was the same in Neanderthal man as in Homo sapiens:

…The only difference between Neandertal and modern human endocasts is that the former are larger and more flattened. Most importantly, the Neandertal prefrontal lobe does not appear more primitive.

It appears that Heidelberg man (Homo heidelbergensis) had not only the moral capacity for self-restraint, but certain limited artistic capacities as well. In his monograph, The First Appearance of Symmetry in the Human Lineage: where Perception meets Art (careful: large file!) (Symmetry, 2011, 3, 37-53; doi:10.3390/3010037), Dr. Derek Hodgson argues that later Acheulean handaxes have distinctively aesthetic features – in particular, a concern for symmetry. I would invite the reader to have a look at the handaxes in Figure 1 on page 40, which date back to 750,000 years ago – either at or just before the time when Heidelberg man emerged. Hodgson comments:

The fact that the first glimmerings of an “aesthetic” concern occurred at least 500,000 BP [years before the present – VJT] in a species that was not fully modern (either late Homo erectus or Homo heidelbergensis) suggests that the aesthetic sensibility of modern humans has extremely ancient beginnings. (p. 47)

The emergence of modern man (200,000 years ago)

75,000 year old shell beads from Blombos Cave, South Africa, made by Homo sapiens. Image courtesy of Chris Henshilwood, Francesco d’Errico and Wikipedia.

Nevertheless, it seems that in terms of his symbolic and artistic abilities, Heidelberg man fell short of what modern human beings could do. To quote Dubreuil again:

The relative stability of the PFC [prefrontal cortex] during the last 500,000 years can be contrasted with changes in other brain areas. One of the most distinctive features of Homo sapiens’ cranium morphology is its overall more globular structure. This globularization of Homo sapiens’ cranium occurred between 300,[000] and 100,000 years ago and has been associated with the relative enlargement of the temporal and/or parietal lobes (Lieberman et al. 2002; Bruner et al. 2003; Bruner 2004, 2007; Lieberman 2008). (p. 67)

The temporoparietal cortex is certainly involved in many complex cognitive tasks. It plays a central role in attention shifting, perspective taking, episodic memory, and theory of mind (as mentioned in Section “The role of perspective taking”), as well as in complex categorization and semantic processing (that is where Wernicke’s area is located)…

I have argued elsewhere (Dubreuil 2008; Henshilwood and Dubreuil 2009) that a change in the attentional abilities underlying perspective taking and high-level theory of mind best explains the behavioral changes associated with modern Homo sapiens, including the evolution of symbolic and artistic components in material culture. (p. 68)

So there we have it. The human brain is “fearfully and wonderfully made,” in the words of Psalm 139. It is extremely doubtful whether prideful man, with his much-vaunted Internet, will ever approach the level of skill and complexity shown by the Intelligent Designer who carefully engineered his brain, over a period of several million years.

The internet will never ever be conscious because of the way its information is stored. Second, there is no such thing as consciousness; it doesn't even emanate from the brain. Finally, the internet is binary; the brain is not. Google The Simplified Definition of Consciousness, The Caveman in the box, the Human Mental Handicaps and the Software Illusion to advance our ideas about consciousness and information materialization. iParticle
He’s even got an affable and thoughtful face...
And ought not detract from it with baseball caps. I agree. Mung
That was a mean and puerile comment I made about Christof Koch and the baseball cap. I regret the rancorous flippancy. I expect a tinge of jealousy overlaid my view of his theory and its inevitably metaphysical provenance. He's even got an affable and thoughtful face, the swine. Axel
Hi bornagain77, Thanks very much for the link to the Candy Gunther Brown video on science and miraculous healing at http://www.youtube.com/watch?v=rRfLooh3ZOk . Fascinating stuff! vjtorley
' So Man created the internet in his own image, in the image of Man he created it; nodes and connections he created them. And Man said, go forth and multiply. And add, and subtract, and divide. For a computer art thou. And that’s all thou shalt ever be. And from dust you came, and to dust you will return.' As dry as ever, Mung! Axel
Here is the song mentioned in the McQueen article: Why Me Lord Story - Told and Sung By kris kristofferson http://www.youtube.com/watch?v=1tA7E7pbUws bornagain77
OT Kantian: Greater Grace: A Story of God, Redemption, and Steve McQueen - December 9, 2012 Excerpt: (Steve) McQueen would not survive the operation. Four days after his meeting with (Billy) Graham, he died of a heart attack with the evangelist’s Bible resting on his chest. It was opened to his favorite verse, that old, familiar promise so simple a child could grasp it, yet so profound the angels cannot comprehend it: For God so loved the world, that he gave his only begotten Son, so that whosoever believeth in him should not perish, but have everlasting life. http://southerngospelyankee.wordpress.com/2012/12/09/greater-grace-a-story-of-god-redemption-and-steve-mcqueen/ bornagain77
OT: Dr. Torley, I just saw this and immediately thought of your recent defense of human life in the womb: Study: Women More Likely to Die After Abortion, Not Childbirth - September 2012 Excerpt: A new study of the medical records for nearly half a million women in Denmark reveals significantly higher maternal death rates following abortion compared to delivery. This finding has confirmed similar large-scale population studies conducted in Finland and the United States, but contradicts the widely held belief that abortion is safer than childbirth. http://www.lifenews.com/2012/09/05/study-shows-women-more-likely-to-die-after-abortion-not-childbirth/ bornagain77
Torley, Thanks for those links -- they look very interesting! I'm particularly interested in the Carruthers article. I've wondered in the past if the whole "theory of mind" literature presupposes an overly Cartesian picture of how intersubjectivity works. By that I mean, a picture in which one first has knowledge of one's own mental states, and then somehow (analogically?) attributes mental states to others. That just doesn't fit the phenomenology of intersubjectivity. I haven't pursued that thought, though. I wonder how helpful Dennett's "orders of intentionality" model is here -- maybe the story would be that certain animals can attribute simple beliefs to others, but can't attribute beliefs about beliefs -- e.g. chimp A couldn't attribute to a chimp B any beliefs about what chimp A believes. Or something like that. Anyway, I'll take a look. And, just to avoid any misunderstanding, I have no in-principle objections to the idea that some cognitive abilities are unique to human beings. Quite the contrary: it's perfectly evident that there are some uniquely human cognitive abilities. Most evolutionary theorists, I suspect, would say that humans are unique, but then again, that all species are unique. So humans are just another unique species. Truth be told, I'm actually quite fond of Dobzhansky's remark, "all species are unique, but the human is the uniquest." Whether that thought belongs within a scientific theory of cognition and evolution is one thing, but it seems to me to embody a deep insight, regardless of how much scientific traction it gets. But in that regard the work of Penn and Povinelli is certainly promising! Kantian Naturalist
OT: Dr. Torley, I remember some time ago that you did an excellent defense of the efficacy of prayer against atheistic claims to the contrary.,,, Along that line, I think if you have not seen this following video yet, it will be of interest to you: Testing Prayer: Science and Miraculous Healing - Candy Gunther Brown at Boston College - video http://www.youtube.com/watch?v=rRfLooh3ZOk bornagain77
So Man created the internet in his own image, in the image of Man he created it; nodes and connections he created them. And Man said, go forth and multiply. And add, and subtract, and divide. For a computer art thou. And that's all thou shalt ever be. And from dust you came, and to dust you will return. Mung
as to:
The outstanding intelligence of humans appears to result from a combination and enhancement of properties found in non-human primates, such as theory of mind, imitation and language, rather than from ‘unique’ properties.
slight correction:
The outstanding intelligence of humans appears to (be the) result of being made in the image of God!
There all better! notes:
Darwin's mistake: explaining the discontinuity between human and nonhuman minds. - 2008 Excerpt: Over the last quarter century, the dominant tendency in comparative cognitive psychology has been to emphasize the similarities between human and nonhuman minds and to downplay the differences as "one of degree and not of kind" (Darwin 1871).,,, To wit, there is a significant discontinuity in the degree to which human and nonhuman animals are able to approximate the higher-order, systematic, relational capabilities of a physical symbol system (PSS) (Newell 1980). We show that this symbolic-relational discontinuity pervades nearly every domain of cognition and runs much deeper than even the spectacular scaffolding provided by language or culture alone can explain,,, http://www.ncbi.nlm.nih.gov/pubmed/18479531

“Museum of Comparative Anthropogeny” Human Uniqueness Compared to "Great Apes" (Hundreds of differences listed between humans and 'great apes', including mental and social differences, with references for each difference listed) https://docs.google.com/document/d/1dx8I5qpsDlsIxTTPgeZc559pIHe_mnYtKehgDqE-_fo/edit

Earliest humans not so different from us, research suggests - February 2011 Excerpt: Shea argues that comparing the behavior of our most ancient ancestors to Upper Paleolithic Europeans holistically and ranking them in terms of their "behavioral modernity" is a waste of time. There are no such things as modern humans, Shea argues, just Homo sapiens populations with a wide range of behavioral variability. http://www.physorg.com/news/2011-02-earliest-humans.html

Best Cave Art Is Still the Oldest - May 2012 Excerpt: The artwork on the walls of Chauvet Cave is unequalled in Paleolithic art, superior even to the better-known works of Lascaux dated much later. Evolutionists had expected that cave art would progress from simple to complex as man’s cognitive abilities evolved, but Chauvet challenged that idea by showing that the oldest was by far the best. The authors of the paper were astonished at its quality: http://crev.info/2012/05/best-cave-art-is-still-the-oldest/

Geometric Principles Appear Universal in Our Minds - May 2011 Excerpt: Villagers belonging to an Amazonian group called the Mundurucú intuitively grasp abstract geometric principles despite having no formal math education,,, Mundurucú adults and 7- to 13-year-olds demonstrate as firm an understanding of the properties of points, lines and surfaces as adults and school-age children in the United States and France,,, http://www.wired.com/wiredscience/2011/05/universal-geometry/

A scientist looks again at Project Nim - Trying to teach Chimps to talk fails Excerpt: "The language didn't materialize. A human baby starts out mostly imitating, then begins to string words together. Nim didn't learn. His three-sign combinations - such as 'eat me eat' or 'play me Nim' - were redundant. He imitated signs to get rewards. I published the negative results in 1979 in the journal Science, which had a chilling effect on the field." http://www.arn.org/blogs/index.php/literature/2011/07/19/a_scientist_looks_again_at_project_nim

Evolution of the Genus Homo – Annual Review of Earth and Planetary Sciences – Ian Tattersall, Jeffery H. Schwartz, May 2009 Excerpt: “Definition of the genus Homo is almost as fraught as the definition of Homo sapiens. We look at the evidence for “early Homo,” finding little morphological basis for extending our genus to any of the 2.5–1.6-myr-old fossil forms assigned to “early Homo” or Homo habilis/rudolfensis.”,,,, “Unusual though Homo sapiens may be morphologically, it is undoubtedly our remarkable cognitive qualities that most strikingly demarcate us from all other extant species. They are certainly what give us our strong subjective sense of being qualitatively different. And they are all ultimately traceable to our symbolic capacity. Human beings alone, it seems, mentally dissect the world into a multitude of discrete symbols, and combine and recombine those symbols in their minds to produce hypotheses of alternative possibilities. When exactly Homo sapiens acquired this unusual ability is the subject of debate.” http://www.annualreviews.org/doi/abs/10.1146/annurev.earth.031208.100202
The authors of the 'Evolution of the Genus Homo' paper, Tattersall and Schwartz, try to find some evolutionary/materialistic reason for the extremely unique 'information capacity' of humans, but of course they never find a coherent reason. Indeed, why should we ever consider a process which is utterly incapable of generating any complex functional information at even the most foundational levels of molecular biology to suddenly, magically, have the ability to generate our brain, which can readily understand and generate functional information? A brain which has been repeatedly referred to as 'the Most Complex Structure in the Universe'? The authors never seem to consider it worthwhile to look at the 'spiritual angle' for why we would have such a unique capacity for such abundant information processing.
Genesis 1:27 So God created man in his own image, in the image of God he created him; male and female he created them. John 1:1-1 In the beginning, the Word existed. The Word was with God, and the Word was God.
Aaron Shust - O Come O Come Emmanuel - http://www.youtube.com/watch?v=gdrRueJjqo0
Hi Kantian Naturalist, The article you quote declares theory of mind to be a property found in non-human primates, and denies that it is unique to human beings. However, there is good experimental evidence suggesting that even clever animals like chimpanzees (see this video) and elephants (see this one) lack a theory of mind. A chimpanzee, for instance, is incapable of realizing that a man with a bucket over his head cannot see anything, while an elephant can be easily fooled by a scarecrow. Indeed, primate researchers Derek Penn and Daniel Povinelli have written a paper entitled 'On the lack of evidence that non-human animals possess anything remotely resembling a theory of mind' (Philosophical Transactions of the Royal Society B, 362, 731-744, doi:10.1098/rstb.2006.2023), in which they not only discuss the abilities of chimpanzees but also those of corvids (crows and related birds), and carefully explain why there is no reason to suppose that these animals have the capacity to impute mental states to others. At first sight, the evidence for a theory of mind in these birds looks convincing:
Corvids are quite adept at pilfering the food caches of other birds and will adjust their own caching strategies in response to the potential risk of pilfering by others. Indeed, not only do they remember which food caches were observed by competitors, but also they appear to remember the specific individuals who were present when specific caches were made and modify their re-caching behaviour accordingly (Dally et al. 2006).
However, the experiments performed to date suffer from a crucial flaw, as Penn and Povinelli point out: "Unfortunately, none of the reported experiments with corvids require the subjects to infer or encode any information that is unique to the cognitive perspective of the competitor." The authors argue that simple rules can explain the birds' behavior:
In all of the experiments with corvids cited above, it suffices for the birds to associate specific competitors with specific cache sites and to reason in terms of the information they have observed from their own cognitive perspective: e.g. 'Re-cache food if a competitor has oriented towards it in the past’, 'Attempt to pilfer food if the competitor who cached it is not present', 'Try to re-cache food in a site different from the one where it was cached when the competitor was present', etc. The additional claim that the birds adopt these strategies because they understand that 'The competitor knows where the food is located' does no additional explanatory or cognitive work. (Emphasis mine - VJT.)
Penn and Povinelli also propose two carefully controlled experiments which could provide evidence of a "theory of mind" in non-human animals. Even adult chimpanzees who were used to interacting with human beings failed the first experiment proposed by the authors, while 18-month-old human infants passed the same test. Peter Carruthers is a philosopher of mind: he is Professor of Philosophy at the University of Maryland, an associate member of the Neuroscience and Cognitive Science Program, and a member of the Committee for Philosophy and the Sciences. Carruthers' paper, "Meta-cognition in Animals: A Skeptical Look" (Mind & Language, 23: 58-89) deftly pulls apart the arguments that are commonly put forward for metacognition in animals:
This paper examines the recent literature on meta-cognitive processes in non-human animals, arguing that in each case the data admit of a simpler, purely first-order, explanation. The topics discussed include the alleged monitoring of states of certainty and uncertainty, knowledge-seeking behavior in conditions of uncertainty, and the capacity to know whether or not the information needed to solve some problem is stored in memory. The first-order explanations advanced all assume that beliefs and desires come in various different strengths, or degrees. (Section 1) [T]here are good reasons for thinking that meta-cognition should be significantly more complex and demanding than regular first-order cognitive processes of the sort that I shall appeal to in my explanations, as I shall now briefly explain. The first point is simple: by their very nature, meta-cognitive processes contain an extra layer of representational complexity. A creature that is capable of meta-representing some of its own cognitive processes must first, of course, have the wherewithal to undergo the first-order processes in question. Then to this must be added whatever is necessary for the creature to represent, and come to believe, that it is undergoing those events. Put differently, a creature that is capable of thinking about its own thought that P must be capable of representing thoughts, in addition to representing whatever is represented by P. The second point is that in the decades that have elapsed since Premack and Woodruff (1978) first raised the question whether chimpanzees have a 'theory of mind', a general (but admittedly not universal) consensus has emerged that meta-cognitive processes concerning the thoughts, goals, and likely behavior of others is cognitively extremely demanding (Wellman, 1990; Baron-Cohen, 1995; Gopnik and Meltzoff, 1997; Nichols and Stich, 2003), and some maintain that it may even be confined to human beings (Povinelli, 2000). 
For what it requires is a theory (either explicitly formulated, or implicit in the rules and inferential procedures of a domain-specific mental faculty) of the nature, genesis, and characteristic modes of causal interaction of the various different kinds of mental state. There is no reason at all to think that this theory should be easy to come by, evolutionarily speaking. And then on the assumption that the same or a similar theory is implicated in meta-cognition about one’s own mental states, we surely shouldn’t expect meta-cognitive processes to be very widely distributed in the animal kingdom. Nor should we expect to find meta-cognition in animals that are incapable of mindreading.
Let me add that as far as I'm aware, Penn, Povinelli and Carruthers are all materialists. I conclude that the claim that the distinction between humans and other animals is a purely quantitative one is an assertion that rests on poor scientific evidence. vjtorley
From "Evolution of the brain and intelligence" (Gerhard Rotha and Ursula Dicke, Trends in Cognitive Sciences, Volume 9, Issue 5, 250-257, 1 May 2005.)
Abstract: Intelligence has evolved many times independently among vertebrates. Primates, elephants and cetaceans are assumed to be more intelligent than ‘lower’ mammals, the great apes and humans more than monkeys, and humans more than the great apes. Brain properties assumed to be relevant for intelligence are the (absolute or relative) size of the brain, cortex, prefrontal cortex and degree of encephalization. However, factors that correlate better with intelligence are the number of cortical neurons and conduction velocity, as the basis for information-processing capacity. Humans have more cortical neurons than other mammals, although only marginally more than whales and elephants. The outstanding intelligence of humans appears to result from a combination and enhancement of properties found in non-human primates, such as theory of mind, imitation and language, rather than from ‘unique’ properties.
Kantian Naturalist
So there you have it. A microprocessor with around 1 billion transistors is in the same mental ballpark as … a worm. Rather an underwhelming result, don’t you think?
Underwhelming? Are you kidding? That's fantastic! It might be underwhelming from the point of view of science fiction, but if we're talking about doing real science, that's an extremely cool result -- now we know something we didn't know before, which is that it might be within the limits of existing technology to model a nematode. I find that fascinating, and I don't know anyone who wouldn't. Kantian Naturalist
Good, eh?! A nice surprise. I'd thought it might be that they can't have 'holy days', by definition. Axel
OT: This should bring a smile: Judge sets atheist holiday day - Oct. 2012 In a small town in East Texas, an atheist filed a case against Easter and Passover Holy days. He hired an attorney from up North to bring a discrimination case against Christians and Jews and observances of their holy days. The argument was that it was unfair that atheists had no such recognized days. The case was brought before a judge, a lifelong resident of East Texas. After listening to the passionate presentation by the lawyer, the judge banged his gavel declaring, "Case dismissed!" The lawyer immediately stood and objecting to the ruling said, "Your honor! How can you possibly dismiss this case? The Christians have Christmas, Easter and other religious holidays. “The Jews have Passover, Yom Kippur and Hanukkah, yet my client and other atheists have no such holidays,” the attorney argued. The judge leaned forward in his chair and slowly said, "But you do. Your client, counselor, is woefully ignorant." The lawyer said," Your Honor, we are unaware of any special observance or holiday for atheists." The judge said, “Psalms 14:1 states, 'The fool hath said in his heart, there is no God.' Thus, it is the opinion of this court, that, if your client says there is no God, then he is a fool. Therefore, April 1st is his holiday. Court is adjourned." You gotta love an East Texas judge who knows his scripture. http://palestineherald.com/opinion/x688424722/LIFE-BEHIND-THE-PINE-CURTAIN-Judge-sets-atheist-holiday-day bornagain77
It's odd that the materialists should invoke 'counter-intuitiveness', rather than 'counter-rationality'. They are comfortable with the idea that they find paradoxes counter-intuitive, but feel personally offended that their ability to reason should be called into question.

It seems that our intuition ranges from just above our autonomic intelligence to plainly preternatural insights, while the further one advances towards the latter pole, the more profound the level of intuition. Unsurprisingly, not least in view of the divine persons of the Holy Trinity, as the late Jesuit palaeontologist, Teilhard de Chardin, had discovered, the deepest truths are personal.

Even the discoveries of the great geniuses of quantum physics of the last century pale before the psychic intuitions of my sister's late mother-in-law, who was not academically-oriented, or particularly interested in the psychic world. In terms of the profundity and acuity of their respective intuitions, hers was as high above theirs as the heavens are above the earth, and this thread illustrates the gulf in a much more dramatic way, indeed, in a discreditable way.

I jokingly alluded, above, to this fondness of the materialist advocates of the notion of 'living computers' for the notion of counter-intuitiveness, in lieu of counter-rationality; but it really does seem most telling that the failure of their intuition goes 'right the way down'.

This also tallies with another article on here today or yesterday, describing the pusillanimity of the modus operandi of today's scientific community. No conceptual leaps, just dogged, pedantic drudgery. Epistemic 'genocide' by a cannibalistic 'scientific method'. Is it any wonder, having sold their souls 'for a mess of pottage'?

No matter for wonderment at all, since The Origin of Species amounts to such a classic primer in pervasively fallacious intuition, now routinely disproved, week by week, despite its Orwellian grip on the prevailing scientific 'orthodoxy'. 
It's a madhouse. Axel
'Garbage out, garbage in', Granville. You've just reverse-engineered some risibly perverse 'engineering'. Axel
Another point of interest worth drawing out is that, as Dr. Torley pointed out,,,,
Human brain has more switches than all computers on Earth - November 2010 Excerpt: They found that the brain's complexity is beyond anything they'd imagined, almost to the point of being beyond belief, says Stephen Smith, a professor of molecular and cellular physiology and senior author of the paper describing the study: ...One synapse, by itself, is more like a microprocessor--with both memory-storage and information-processing elements--than a mere on/off switch. In fact, one synapse may contain on the order of 1,000 molecular-scale switches. A single human brain has more switches than all the computers and routers and Internet connections on Earth. http://news.cnet.com/8301-27083_3-20023112-247.html
And computers with many switches have a huge problem with heat,,,
Supercomputer architecture Excerpt: Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers.[4][5][6] The large amount of heat generated by a system may also have other effects, such as reducing the lifetime of other system components.[7] There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures. http://en.wikipedia.org/wiki/Supercomputer_architecture
yet the brain, though it has as many switches as all the computers on earth, does not have a problem with heat,,,
Appraising the brain's energy budget: Excerpt: In the average adult human, the brain represents about 2% of the body weight. Remarkably, despite its relatively small size, the brain accounts for about 20% of the oxygen and, hence, calories consumed by the body. This high rate of metabolism is remarkably constant despite widely varying mental and motoric activity. The metabolic activity of the brain is remarkably constant over time. http://www.pnas.org/content/99/16/10237.full THE EFFECT OF MENTAL ARITHMETIC ON CEREBRAL CIRCULATION AND METABOLISM Excerpt: Although Lennox considered the performance of mental arithmetic as "mental work", it is not immediately apparent what the nature of that work in the physical sense might be if, indeed, there be any. If no work or energy transformation is involved in the process of thought, then it is not surprising that cerebral oxygen consumption is unaltered during mental arithmetic. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC438861/pdf/jcinvest00624-0127.pdf Does Thinking Really Hard Burn More Calories? - By Ferris Jabr - July 2012 Excerpt: So a typical adult human brain runs on around 12 watts—a fifth of the power required by a standard 60 watt lightbulb. Compared with most other organs, the brain is greedy; pitted against man-made electronics, it is astoundingly efficient. http://www.scientificamerican.com/article.cfm?id=thinking-hard-calories
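The 12-watt figure in the Scientific American quote above can be roughly reproduced from the calorie numbers. Here is a back-of-the-envelope sketch in Python; note that the 1300 kcal/day resting metabolic rate is an assumed round figure for illustration, not a number taken from the quoted sources:

```python
KCAL_TO_JOULES = 4184.0     # 1 kilocalorie in joules
SECONDS_PER_DAY = 86400.0

# Assumed figures: a resting metabolic rate of ~1300 kcal/day,
# and the ~20% brain share of the energy budget cited above.
resting_kcal_per_day = 1300.0
brain_share = 0.20

# Convert kcal/day into a continuous power draw in watts (J/s)
body_watts = resting_kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY
brain_watts = body_watts * brain_share

print(f"Whole body at rest: {body_watts:.1f} W")
print(f"Brain (20% share): {brain_watts:.1f} W")
```

At roughly 63 watts for the whole body at rest, a 20% share works out to about 12-13 watts for the brain, which is in line with the figure quoted above.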
Moreover, the heat generated by computers arises primarily from the erasure of information:
Landauer's principle Of Note: "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase ,,, Specifically, each bit of lost information will lead to the release of an (specific) amount (at least kT ln 2) of heat.,,, Landauer’s Principle has also been used as the foundation for a new theory of dark energy, proposed by Gough (2008). http://en.wikipedia.org/wiki/Landauer%27s_principle
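Landauer's bound is simple to put numbers on. A minimal illustrative sketch (assuming roughly body temperature, 310 K, and treating a gigabyte as 8×10^9 bits):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # approximate body temperature, K

# Landauer limit: minimum heat released per bit irreversibly erased
e_bit = k_B * T * math.log(2)    # joules per bit

# Heat released at this theoretical floor by erasing one gigabyte
e_gigabyte = e_bit * 8e9

print(e_bit)        # ~2.97e-21 J per bit
print(e_gigabyte)   # ~2.4e-11 J per gigabyte erased
```

Real chips dissipate many orders of magnitude more heat than this floor; the point of Landauer's principle is only that even an ideal irreversible computer must pay at least this cost.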
Thus the brain is either operating on reversible computation principles that no computer can come close to emulating (Charles Bennett), or, as is much more likely, the brain is not erasing information from its memory as a material computer is required to do, because our memories are stored on the 'spiritual' level rather than on a material level:
A Reply to Shermer Medical Evidence for NDEs (Near Death Experiences) – Pim van Lommel Excerpt: For decades, extensive research has been done to localize memories (information) inside the brain, so far without success.,,,,So we need a functioning brain to receive our consciousness into our waking consciousness. And as soon as the function of brain has been lost, like in clinical death or in brain death, with iso-electricity on the EEG, memories and consciousness do still exist, but the reception ability is lost. People can experience their consciousness outside their body, with the possibility of perception out and above their body, with identity, and with heightened awareness, attention, well-structured thought processes, memories and emotions. And they also can experience their consciousness in a dimension where past, present and future exist at the same moment, without time and space, and can be experienced as soon as attention has been directed to it (life review and preview), and even sometimes they come in contact with the “fields of consciousness” of deceased relatives. And later they can experience their conscious return into their body. http://www.nderf.org/vonlommel_skeptic_response.htm
To support this view that 'memory/information' is not stored in the brain, one of the most common features of extremely deep near death experiences is the 'life review' where every minute detail of a person's life is reviewed:
Near Death Experience – The Tunnel, The Light, The Life Review – video http://www.metacafe.com/watch/4200200/
I think one of the main places this fallacious idea arises, the idea that computers are 'intelligent' and are on their way to becoming conscious, is from the chess programs that have advanced to the point of beating Grand Masters (as well as the Jeopardy computer program that could recall trivia better than humans). Yet Gil Dodgen shows why this assumption is wrong:
Epicycling Through The Materialist Meta-Paradigm Of Consciousness GilDodgen: One of my AI (artificial intelligence) specialties is games of perfect knowledge. See here: worldchampionshipcheckers.com In both checkers and chess humans are no longer competitive against computer programs, because tree-searching techniques have been developed to the point where a human cannot overlook even a single tactical mistake when playing against a state-of-the-art computer program in these games. On the other hand, in the game of Go, played on a 19×19 board with a nominal search space of 19×19 factorial (1.4e+768), the best computer programs are utterly incompetent when playing against even an amateur Go player.,,, https://uncommondesc.wpengine.com/intelligent-design/epicycling-through-the-materialist-meta-paradigm-of-consciousness/
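The "19×19 factorial" figure Dodgen cites checks out; a quick sketch confirms the order of magnitude (nothing more is claimed here):

```python
import math

# Nominal Go search space cited above: (19*19)! = 361!
search_space = math.factorial(361)

print(len(str(search_space)))  # 769 digits, i.e. on the order of 1.4e+768
```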
Related note:
Another reason why the human mind is not like a computer - June 2012 Excerpt: In computer chess, there is something called the “horizon effect”. It is an effect innate in the algorithms that underpin it. Due to the mathematically staggering number of possibilities, a computer by force has to restrict itself, to establish a fixed search depth. Otherwise the calculations would never end. This fixed search depth means that a ‘horizon’ comes into play, a horizon beyond which the software engine cannot peer. Anand has shown time and again that he can see beyond this algorithm-imposed barrier, to find new ways, methods of changing the game. Just when every successive wave of peers and rivals thinks they have got his number, Anand sees that one, all important, absolute move.” https://uncommondesc.wpengine.com/computer-science/another-reason-why-the-human-mind-is-not-like-a-computer/
Of related interest to 'human intuition' (Godel) vs. chess computer programs:
A chess prodigy explains how his mind works – video Excerpt: What’s the secret to Magnus’ magic? Once an opponent makes a move, Magnus "instantaneously" knows his own next move. http://www.cbsnews.com/8301-504803_162-57380913-10391709/a-chess-prodigy-explains-how-his-mind-works/?tag=segementExtraScroller;housing

Magnus Carlsen becomes game’s ‘highest-rated player of all time’ - December 9, 2012 http://zeenews.india.com/sports/others/magnus-carlsen-becomes-games-highest-rated-player-of-all-time_752842.html

Mozart of Chess: Magnus Carlsen – video http://www.cbsnews.com/video/watch/?id=7399370n&tag=contentMain;contentAux
Dr. Torley, perhaps another line of evidence that would bring some sobriety to the thought 'that the Internet is showing signs of intelligence and may already be conscious' would be to show the immense challenge that a single 'simple' protein presents to computers:
Confronting Science’s Logical Limits - John L. Casti - 1996 Excerpt: It has been estimated that a supercomputer applying plausible rules for protein folding would need 10^127 years to find the final folded form for even a very short sequence consisting of just 100 amino acids. (The universe is 13.7 x 10^9 years old). In fact, in 1993 Aviezri S. Fraenkel of the University of Pennsylvania showed that the mathematical formulation of the protein-folding problem is computationally “hard” in the same way that the traveling-salesman problem is hard. http://www.cs.virginia.edu/~robins/Confronting_Sciences_Logical_Limits.pdf "Blue Gene's final product, due in four or five years, will be able to "fold" a protein made of 300 amino acids, but that job will take an entire year of full-time computing." Paul Horn, senior vice president of IBM research, September 21, 2000 http://www.news.com/2100-1001-233954.html
Networking a few hundred thousand computers together has reduced the time to a few weeks for simulating the folding of a single, relatively short, protein molecule:
A Few Hundred Thousand Computers vs. A Single Protein Molecule - video http://www.metacafe.com/watch/4018233
Interestingly, scientists have found many complex protein folding problems that have resisted the brute number-crunching power of supercomputers, but, 'surprisingly', some of these problems have been solved with the addition of 'human intuition' (Godel):
So Much For Random Searches - PaV - September 2011 Excerpt: There’s an article in Discover Magazine about how gamers have been able to solve a problem in HIV research in only three weeks (!) that had remained outside of researcher’s powerful computer tools for years.,,, Thus,, Random search by powerful computer: 10 years and No Success Intelligent Agents guiding powerful computing: 3 weeks and Success. https://uncommondesc.wpengine.com/intelligent-design/so-much-for-random-searches/
And that is just the problem computers face when trying to simulate the folding of a single protein. As somewhat alluded to before in post #1 (the complexity brake), the problem gets exponentially worse once one tries to simulate proteins interacting with each other:
The Humpty-Dumpty Effect: A Revolutionary Paper with Far-Reaching Implications - Paul Nelson October 23, 2012 Excerpt: The Levinthal Paradox, Old and New Versions Anyone who has studied the protein folding problem will have met the famous Levinthal paradox, formulated in 1969 by the molecular biologist Cyrus Levinthal. Put simply, the Levinthal paradox states that when one calculates the number of possible topological (rotational) configurations for the amino acids in even a small (say, 100 residue) unfolded protein, random search could never find the final folded conformation of that same protein during the lifetime of the physical universe. Therefore, concluded Levinthal, given that proteins obviously do fold, they are doing so, not by random search, but by following favored pathways. The challenge of the protein folding problem is to learn what those pathways are. That's the classical version of the paradox. But now consider the origin of an entire cell. All cells possess what has been called an "interactome," namely, "a complex network" comprising "a host of cellular constituents" -- proteins, nucleic acids, lipids, metal ion cofactors, and so on. If the Levinthal paradox (old version) arises from the difficulty of searching the space of possible configurations for a single protein, the new version of the paradox, formulated by Tompa and Rose, asks the same question for the possible arrangements of the cell's interactome, an enormously larger collection of objects with a correspondingly greater search space.,,, http://www.evolutionnews.org/2012/10/a_revolutionary065521.html
As well, despite some very optimistic claims, it seems future 'quantum computers' will not fare much better at finding functional proteins in sequence space than even an idealized 'material' supercomputer of today can do:
The Limits of Quantum Computers – March 2008 Excerpt: "Quantum computers would be exceptionally fast at a few specific tasks, but it appears that for most problems they would outclass today’s computers only modestly. This realization may lead to a new fundamental physical principle" http://www.scientificamerican.com/article.cfm?id=the-limits-of-quantum-computers Shtetl-Optimized - Scott Aaronson Excerpt: Quantum computers are not known to be able to solve NP-complete problems in polynomial time. http://scottaaronson.com/blog/?p=456
Protein folding has been found to be an 'intractable NP-complete problem' by several different methods. Thus protein folding will not be able to take advantage of the speed-ups that quantum computation may offer on those problems of computation that can be solved in polynomial time:
Combinatorial Algorithms for Protein Folding in Lattice Models: A Survey of Mathematical Results – 2009 Excerpt: Protein Folding: Computational Complexity 4.1 NP-completeness: from 10^300 to 2 Amino Acid Types 4.2 NP-completeness: Protein Folding in Ad-Hoc Models 4.3 NP-completeness: Protein Folding in the HP-Model http://www.cs.brown.edu/~sorin/pdfs/pfoldingsurvey.pdf
Related note:
Physicists Discover Quantum Law of Protein Folding – February 22, 2011 Quantum mechanics finally explains why protein folding depends on temperature in such a strange way. Excerpt: First, a little background on protein folding. Proteins are long chains of amino acids that become biologically active only when they fold into specific, highly complex shapes. The puzzle is how proteins do this so quickly when they have so many possible configurations to choose from. To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^100 different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.,,, Their astonishing result is that this quantum transition model fits the folding curves of 15 different proteins and even explains the difference in folding and unfolding rates of the same proteins. That’s a significant breakthrough. Luo and Lo’s equations amount to the first universal laws of protein folding. That’s the equivalent in biology to something like the thermodynamic laws in physics. http://www.technologyreview.com/view/423087/physicists-discover-quantum-law-of-protein-folding/
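The arithmetic in that excerpt is easy to check. A small sketch using the excerpt's own figures (10^100 configurations for a 100-residue protein, sampled at 100 billion per second):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

configs = 10**100           # possible configurations, per the excerpt
rate = 100e9                # 100 billion configurations tried per second
age_universe_years = 13.7e9

years = configs / rate / SECONDS_PER_YEAR
print(years)                        # ~3.2e81 years for an exhaustive search
print(years / age_universe_years)   # ~2.3e71 times the age of the universe
```

So "longer than the age of the universe" in the excerpt is, if anything, an enormous understatement.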
Hi bornagain77, Thanks very much for the quotes from Godel. I didn't know that he rejected a Darwinian account of the evolution of the human brain. Very interesting! I'd also like to recommend to readers another article which you linked to: If Modern Humans Are So Smart, Why Are Our Brains Shrinking? It's very informative, and it makes for fascinating reading. Thanks once again. vjtorley
Of course if your assumption is that human consciousness is nothing more than the natural outcome of increasing complexity, the conclusion that the Internet could become conscious follows logically.
By that logic, I can ask the following questions: When did the universe first become aware of itself? How complex did it have to be? Mung
Of course if your assumption is that human consciousness is nothing more than the natural outcome of increasing complexity, the conclusion that the Internet could become conscious follows logically. By showing how absurd the logical consequences of their assumptions are, these people are showing us how absurd their assumptions are. Granville Sewell
I've been trying to figure out what's missing from that photo of Christof. and I've finally hit upon it: a baseball cap, on back-to-front! Axel
Yeah! That's it! Meaningless information. Kind of not too 'intuitive' is it, really? It'd be jist kinda more anecdotal. Axel
Meaningless information? Mung
Otherwise, it will be meaningless. Axel
'I say we excise all the information from the Internet and then try to have a conversation with it.' - mung Mung, make sure you log its GPS coordinates, once you've collated it all. We must exercise the most rigorous consistency in our research. Axel
And yet the US Congress has just banned the use of the L-word at a time when it is most needed. Mung
I don't think it's tendentious, but plainly factual, to consider this an example - I was going to say 'egregious example', but it's common enough to be the 'meat and drink' of the bloggers here to feast on - of the truly outlandish, incomprehensible folly that a person with the highest academic accreditations, but signally bereft of a substrate of wisdom, is capable of, without being subjected to any kind of psychological duress or torture. When a brilliant man's assumptions are barmy, he's barmy in a wonderfully exotic-seeming way. Axel
If the Internet is conscious, where did it come from? Does it reside with the hardware or the software? If we could delete all the software from all the PCs, servers, routers, etc., would the Internet still be conscious? I think most would answer no. That implies that the consciousness resides with the software. So what exactly is software in material terms? jpg564
The title of the OP should not be "Could the Internet ever be conscious," but rather the title should be "When will the Internet become Aware of Itself." Mung
Neil Rickert @2: Well said. Eric Anderson
The working hypothesis ain't working. First, our intelligence comes from being made in God's image and so thinking like him. These computers are just memory machines. Anything they do is a mere function of memory. Memory is not intelligent. It's just stored information. It doesn't have a single thought of its own. Robert Byers
Kurt Godel was well aware of the deep implications of his theorem, as the following quotes from Godel make clear:

Quotes by Kurt Godel:

"The brain is a computing machine connected with a spirit." [6.1.19]

"Consciousness is connected with one unity. A machine is composed of parts." [6.1.21]

"I don’t think the brain came in the Darwinian manner. In fact, it is disprovable. Simple mechanism can’t yield the brain. I think the basic elements of the universe are simple. Life force is a primitive element of the universe and it obeys certain laws of action. These laws are not simple, and they are not mechanical." [6.2.12]

"The world in which we live is not the only one in which we shall live,,,."

"Materialism is false."

quotes taken from - Hao Wang’s supplemental biography of Gödel, A Logical Journey, MIT Press, 1996 http://kevincarmody.com/math/goedel.html bornagain77
Neil Rickert:
There is some ambiguity on what it would mean for the Internet to be conscious.
There is some ambiguity on what it would mean for the anti-IDist to be conscious. Joe
The hubris of it all reminds me of this clip:

Frankenstein - 1931 - "It's Alive!" http://www.youtube.com/watch?v=rSCBvu_kijo

,,,Call me back when man creates a single living cell from dead chemicals, then perhaps I will take such unrestrained imagination a bit more seriously for man even entertaining the thought that he may 'accidentally' create consciousness.,,

"To grasp the reality of life as it has been revealed by molecular biology, we must first magnify a cell a thousand million times until it is 20 kilometers in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would see then would be an object of unparalleled complexity,...we would find ourselves in a world of supreme technology and bewildering complexity." Geneticist Michael Denton PhD., Evolution: A Theory In Crisis, pg.328

At last, a Darwinist mathematician tells the truth about evolution - VJT - November 2011 Excerpt: In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.” https://uncommondesc.wpengine.com/intelligent-design/at-last-a-darwinist-mathematician-tells-the-truth-about-evolution/

Alan Turing and Kurt Godel - Incompleteness Theorem and Human Intuition - video (notes in video description) http://www.metacafe.com/watch/8516356/

Verse and music:

John 1:4 In him was life, and that life was the light of all mankind.

Evanescence - Bring Me To Life http://www.youtube.com/watch?v=3YxaaGgTQYM&ob=av2e Lyric from song: "Only you are life among the dead"

New song: Evanescence - The Other Side (Lyric Video) http://www.vevo.com/watch/evanescence/the-other-side-lyric-video/USWV41200024?source=instantsearch bornagain77
Count on NEVER! The complexity being dealt with in molecular biology is so over the top extreme.. But the internet nodes include humans and all their complexity. In fact the research and discussions about internet on internet is a self-awareness of the internet. Its consciousness is to "you" what your consciousness is to that of one of your neurons (a form of panpsychism). nightlight
I say we excise all the information from the Internet and then try to have a conversation with it. Mung
There is some ambiguity on what it would mean for the Internet to be conscious. If "the Internet" includes all of the connected people, then that could be talk of a kind of group consciousness. This is plausible in a vague sense, but it would be hard to narrow down what it is supposed to mean. If Internet consciousness is referring only to the machine and wiring, then it isn't going to happen. Neil Rickert
As to:
Moore’s law: definitely no Internet consciousness before 2115, and probably never
Count on NEVER! The complexity being dealt with in molecular biology is so over-the-top extreme that even when factoring in future advances in computer technology (Moore's law), man will NEVER be able to completely understand the complexity being dealt with in molecular biology:
"Complexity Brake" Defies Evolution - August 2012 Excerpt: In a recent Perspective piece called "Modular Biological Complexity" in Science, Christof Koch (Allen Institute for Brain Science, Seattle; Division of Biology, Caltech) explained why we won't be simulating brains on computers any time soon: "Although such predictions excite the imagination, they are not based on a sound assessment of the complexity of living systems. Such systems are characterized by large numbers of highly heterogeneous components, be they genes, proteins, or cells. These components interact causally in myriad ways across a very large spectrum of space-time, from nanometers to meters and from microseconds to years. A complete understanding of these systems demands that a large fraction of these interactions be experimentally or computationally probed. This is very difficult." Physicists can use statistics to describe a homogeneous system like an ideal gas, because one can assume all the member particles interact the same. Not so with life. When describing heterogeneous systems each with a myriad of possible interactions, the number of discrete interactions grows faster than exponentially. Koch showed how Bell's number (the number of ways a system can be partitioned) requires a comparable number of measurements to exhaustively describe a system. Even if human computational ability were to rise exponentially into the future (somewhat like Moore's law for computers), there is no hope for describing the human "interactome" -- the set of all interactions in life. "This is bad news. Consider a neuronal synapse -- the presynaptic terminal has an estimated 1000 distinct proteins. Fully analyzing their possible interactions would take about 2000 years. Or consider the task of fully characterizing the visual cortex of the mouse -- about 2 million neurons. 
Under the extreme assumption that the neurons in these systems can all interact with each other, analyzing the various combinations will take about 10 million years..., even though it is assumed that the underlying technology speeds up by an order of magnitude each year. " Even with shortcuts like averaging, "any possible technological advance is overwhelmed by the relentless growth of interactions among all components of the system," Koch said. "It is not feasible to understand evolved organisms by exhaustively cataloging all interactions in a comprehensive, bottom-up manner." He described the concept of the Complexity Brake: "Allen and Greaves recently introduced the metaphor of a "complexity brake" for the observation that fields as diverse as neuroscience and cancer biology have proven resistant to facile predictions about imminent practical applications. Improved technologies for observing and probing biological systems has only led to discoveries of further levels of complexity that need to be dealt with. This process has not yet run its course. We are far away from understanding cell biology, genomes, or brains, and turning this understanding into practical knowledge." Why can't we use the same principles that describe technological systems? Koch explained that in an airplane or computer, the parts are "purposefully built in such a manner to limit the interactions among the parts to a small number." The limited interactome of human-designed systems avoids the complexity brake. "None of this is true for nervous systems.",,, to read more go here: http://www.evolutionnews.org/2012/08/complexity_brak062961.html
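Koch's appeal to Bell's number (the count of ways a set of interacting components can be partitioned) can be made concrete. A minimal sketch computing Bell numbers via the Bell triangle, just to show the faster-than-exponential growth he describes:

```python
def bell_numbers(n):
    """Return the Bell numbers B_0 .. B_n via the Bell triangle."""
    row = [1]
    bells = [1]
    for _ in range(n):
        # Each new row starts with the last entry of the previous row;
        # each subsequent entry adds the entry above it.
        new_row = [row[-1]]
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
        bells.append(row[0])
    return bells

b = bell_numbers(15)
print(b[:8])   # [1, 1, 2, 5, 15, 52, 203, 877]
print(b[15])   # 1382958545: over a billion partitions of just 15 components
print(2**15)   # 32768: plain exponential growth is left far behind
```

With a thousand distinct proteins at a synapse, as in Koch's example, the corresponding Bell number is astronomically beyond any conceivable measurement program.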
Can a Computer Think? - Michael Egnor - March 31, 2011 Excerpt: The Turing test isn't a test of a computer. Computers can't take tests, because computers can't think. The Turing test is a test of us. If a computer "passes" it, we fail it. We fail because of our hubris, a delusion that seems to be something original in us. The Turing test is a test of whether human beings have succumbed to the astonishingly naive hubris that we can create souls. It's such irony that the first personal computer was an Apple. http://www.evolutionnews.org/2011/03/failing_the_turing_test045141.html

Read Your References Carefully: Paul McBride's Prized Citation on Skull-Sizes Supports My Thesis, Not His - Casey Luskin - August 31, 2012 Excerpt of Conclusion: This has been a long article, but I hope it is instructive in showing how evolutionists deal with the fossil hominin evidence. As we've seen, multiple authorities recognize that our genus Homo appears in the fossil record abruptly with a complex suite of characteristics never-before-seen in any hominin. And that suite of characteristics has remained remarkably constant from the time Homo appears until the present day with you, me, and the rest of modern humanity. The one possible exception to this is brain size, where there are some skulls of intermediate cranial capacity, and there is some increase over time. But even there, when Homo appears, it does so with an abrupt increase in skull-size. ,,, The complex suite of traits associated with our genus Homo appears abruptly, and is distinctly different from the australopithecines which were supposedly our ancestors. There are no transitional fossils linking us to that group.,,, http://www.evolutionnews.org/2012/08/read_your_refer_1063841.html

McBride Misstates My Arguments in Science and Human Origins - Casey Luskin - September 5, 2012 Excerpt: At the end of the day, I leave this exchange more confident than before that the evidence supports the abrupt appearance of our genus Homo. http://www.evolutionnews.org/2012/09/mcbride_misstat063931.html

If Modern Humans Are So Smart, Why Are Our Brains Shrinking? - January 20, 2011 Excerpt: John Hawks is in the middle of explaining his research on human evolution when he drops a bombshell. Running down a list of changes that have occurred in our skeleton and skull since the Stone Age, the University of Wisconsin anthropologist nonchalantly adds, “And it’s also clear the brain has been shrinking.” “Shrinking?” I ask. “I thought it was getting larger.” The whole ascent-of-man thing.,,, He rattles off some dismaying numbers: Over the past 20,000 years, the average volume of the human male brain has decreased from 1,500 cubic centimeters to 1,350 cc, losing a chunk the size of a tennis ball. The female brain has shrunk by about the same proportion. “I’d call that major downsizing in an evolutionary eyeblink,” he says. “This happened in China, Europe, Africa—everywhere we look.” http://discovermagazine.com/2010/sep/25-modern-humans-smart-why-brain-shrinking
