Recently, we discussed well-known chemist and proponent of atheism Peter Atkins’s claim that science, not philosophy, answers the Big Questions:
One class consists of invented questions that are often based on unwarranted extrapolations of human experience. They typically include questions of purpose and worries about the annihilation of the self, such as Why are we here? and What are the attributes of the soul? They are not real questions, because they are not based on evidence. Thus, as there is no evidence for the Universe having a purpose, there is no point in trying to establish its purpose or to explore the consequences of that purported purpose. As there is no evidence for the existence of a soul (except in a metaphorical sense), there is no point in spending time wondering what the properties of that soul might be should the concept ever be substantiated.
Having dismissed all this in favor of the study of physics and chemistry, he went on to say that he thought AI could be built that would understand the universe and consciousness better than we do, given the impasse we are in:
Of course, foothills have given way to mountains, and rapid progress cannot be expected in the final push. Maybe effort will take us, at least temporarily, down blind alleys (string theory perhaps) but then the blindness of that alley might suddenly be opened and there is a surge of achievement. Perhaps whole revised paradigms of thought, such as those a century or so ago when relativity and quantum mechanics emerged, will take comprehension in currently unimaginable directions. Maybe we shall find that the cosmos is just mathematics rendered substantial. Maybe our comprehension of consciousness will have to be left to the artificial device that we thought was merely a machine for simulating it. Maybe, indeed, circularity again, only the artificial consciousness we shall have built will have the capacity to understand the emergence of something from nothing.
What do you think?
Also new at Mind Matters Today:
Imagining life after Google: Reviewers of George Gilder’s new book weigh in
If we have simply taken for granted the big software, hardware, and social media companies that dominate our lives, the reactions from the business world to Life after Google: The Fall of Big Data and the Rise of the Blockchain Economy should give us a lot to think about.
Silicon Valley grew old before it grew up
By April of this year, 100 employees were complaining about Google groupthink. A quip making the rounds: Would you trust a self-driving car from Google? Answer: Sure, if I needed a car that decided for me where I should go and then just drove me there.
Our anonymity may be an illusion
Because we talk about ourselves so much online, only a few leaked pieces of data may be required to identify us. Dr. Dinerstein: In what is now a classic study, researchers used de-identified credit card data for 1.1 million people in 10,000 stores over a three-month period. Using just four pieces of “outside” data, they could identify 90% of the shoppers.
Karl Marx’s eerie AI prediction
He felt that capitalism would fall when machines replaced human labor. Because Marx held that the value of goods resided in the labor required to produce them, if goods were produced by automatons, without human labor, the economy would fall apart and capitalism would fail.
and:
Slaughterbots
Is it ethical to develop a swarm of killer AI drones? For threats like slaughterbots, the answer is the development of newer technology. Like it or not, history is replete with accounts of new military technology replacing old. Evil, seeking influence, demands a response, so the technology to provide one must be developed. (Robert J. Marks)
vs.
Slaughterbots: How far is too far? And how will we know if we have crossed a line? A greater focus should be placed on restoring the foundations of our nation than on building superweapons. And the key foundation is every human being’s right to life. (Eric Holloway)