Uncommon Descent Serving The Intelligent Design Community

IPCC Caught in Another Lie


The chairman of the leading climate change watchdog was informed that claims about melting Himalayan glaciers were false before the Copenhagen summit, The Times has learnt.
Rajendra Pachauri was told that the Intergovernmental Panel on Climate Change assessment that the glaciers would disappear by 2035 was wrong, but he waited two months to correct it. He failed to act despite learning that the claim had been refuted by several leading glaciologists.
The IPCC’s report underpinned the proposals at Copenhagen for drastic cuts in global emissions.
Dr Pachauri, who played a leading role at the summit, corrected the error last week after coming under media pressure. He told The Times on January 22 that he had only known about the error for a few days. He said: “I became aware of this when it was reported in the media about ten days ago. Before that, it was really not made known. Nobody brought it to my attention. There were statements, but we never looked at this 2035 number.” Asked whether he had deliberately kept silent about the error to avoid embarrassment at Copenhagen, he said: “That’s ridiculous. It never came to my attention before the Copenhagen summit. It wasn’t in the public sphere.” 

Continue reading at the Times of London.

nullasalus, It's good to see you back on UD. Upright BiPed
Really, the biggest lesson of the robot study is that "Darwinian" mechanisms, systems, and outcomes are demonstrably capable of being themselves designed by intelligent agents. In other words, yet again: that evolution took place is no argument or evidence against a designer. That evolutionary systems are demonstrably used by intelligent and purposeful agents to achieve goals is more a problem for a critic of design than for a proponent. nullasalus
Hi Endoplasmic, this work is very interesting and original. However, it subtly sidelines the question of design rather than answering it, because each of these experiments was designed to work by human experimenters: (a) there is no comparison of their version of evolution to the effects of random sampling, and (b) we don't know how the outcome has been subtly directed by the experimenters.

To abstract this out a bit: these robots use neural networks, and neural networks need to be trained. The training algorithm is a kind of search algorithm, and all known neural-network training algorithms are among those that obey the No Free Lunch theorems. Roughly, this means they put out no more information than is contained in the starting system and training algorithm combined, plus what pure chance alone would hit upon. So there are two reasons why this might work: (a) the problems are so simplified that they are trivial to solve, or (b) there exists problem-specific information somewhere in the design of the experiment that could not be relied upon under naturalistic conditions. Most likely it is a combination of the two.

You conclude that these results "verify the power of evolution by mutation, recombination, and natural selection", but I conclude they verify the power of intelligence to design interesting and adaptive systems. It is the principle of Conservation of Information that you need to beat, and this paper does not even ask that question. Read Dembski's recent work on Avida to see how he would set about deconstructing this experiment. http://marksmannet.com/EILab/Publications/Evita.html http://marksmannet.com/EILab/Publications/CostOfSuccess.html andyjones
OFF TOPIC: Robots are now evolving via Darwinian mechanisms: Robots Display Predator-Prey Co-Evolution, Evolve Better Homing Techniques (with video) And the paper at PLoS Biology: Evolution of Adaptive Behaviour in Robots by Means of Darwinian Selection
Conclusions These examples of experimental evolution with robots verify the power of evolution by mutation, recombination, and natural selection. In all cases, robots initially exhibited completely uncoordinated behaviour because their genomes had random values. However, a few hundreds of generations of random mutations and selective reproduction were sufficient to promote the evolution of efficient behaviours in a wide range of environmental conditions. The ability of robots to orientate, escape predators, and even cooperate is particularly remarkable given that they had deliberately simple genotypes directly mapped into the connection weights of neural networks comprising only a few dozen neurons.
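For readers unfamiliar with the mechanics being debated here, the following is a minimal sketch of the kind of evolutionary loop the paper describes: a genome is a flat list of neural-network weights, and populations of genomes undergo random mutation and selective reproduction over generations. Everything specific in it is an illustrative assumption, not the paper's actual setup — the tiny tanh network, the XOR-style fitness function, and all parameter values are invented for the sketch. A pure random-sampling baseline with a comparable evaluation budget is included, since the comments above note the paper offers no such comparison.

```python
import math
import random

random.seed(0)

# Illustrative stand-in for the paper's setup: a genome is a flat list of
# weights and biases for a tiny 2-input, 2-hidden, 1-output tanh network.
N_IN, N_HID = 2, 2
GENOME_LEN = N_IN * N_HID + N_HID + N_HID + 1  # w_ih, b_h, w_ho, b_o

def activate(genome, inputs):
    """Feed inputs through the network encoded by the genome."""
    w_ih = genome[: N_IN * N_HID]
    b_h = genome[N_IN * N_HID : N_IN * N_HID + N_HID]
    w_ho = genome[N_IN * N_HID + N_HID : N_IN * N_HID + 2 * N_HID]
    b_o = genome[-1]
    hidden = [math.tanh(sum(w_ih[h * N_IN + i] * inputs[i]
                            for i in range(N_IN)) + b_h[h])
              for h in range(N_HID)]
    return math.tanh(sum(w_ho[h] * hidden[h] for h in range(N_HID)) + b_o)

# Hypothetical fitness: reward XOR-like behaviour, standing in for the
# "efficient behaviours" the experimenters score robots on.
CASES = [((-1, -1), -1), ((-1, 1), 1), ((1, -1), 1), ((1, 1), -1)]

def fitness(genome):
    return -sum((activate(genome, x) - y) ** 2 for x, y in CASES)

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1, sigma=0.5):
    """Perturb each weight with a small probability (random mutation)."""
    return [w + random.gauss(0, sigma) if random.random() < rate else w
            for w in genome]

def evolve(pop_size=50, generations=200):
    """Truncation selection with elitism; returns best genome and history."""
    pop = [random_genome() for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))
        elite = pop[: pop_size // 5]  # top 20% reproduce
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness), history

def random_search(samples=10_000):
    """Pure random sampling with a comparable evaluation budget."""
    return max(fitness(random_genome()) for _ in range(samples))

best, history = evolve()
print("first-generation best:", round(history[0], 3))
print("evolved best:         ", round(fitness(best), 3))
print("random-search best:   ", round(random_search(), 3))
```

The evolutionary run and the random-sampling baseline each consume roughly 10,000 fitness evaluations, so any gap between the two scores reflects the selection-plus-mutation loop itself rather than a bigger search budget.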

Leave a Reply