Uncommon Descent | Serving The Intelligent Design Community

Category: Artificial Intelligence

Finance prof: Artificial intelligence does not threaten complex jobs

Not any time soon, according to an analyst at Bloomberg: It’s important to note that machine learning hasn’t yet made its mark on the economy — to paraphrase economist Robert Solow, you can see the machine learning age everywhere but in the economic statistics. Employment levels have returned to healthy levels, and there’s no evidence that machines are taking many of our jobs yet. … The authors generally don’t envision a world of full automation, with machines replacing humans at every step of the production process. Instead, they see machine learning being deployed selectively at some nodes of the value chain where data is plentiful, leaving human judgment to focus on the rest. Though “judgment” is a fuzzy word, Agrawal Read More ›

Tech sector guru (and ID sympathizer) says life after Google will be okay

People will take ownership of their own data, cutting out the giant “middle man.” George Gilder is an early sympathizer of intelligent design and has taken his lumps for that. It wasn’t how the media wanted to perceive a tech sector guru. His most recent book, Life after Google: The Fall of Big Data and the Rise of the Blockchain Economy, offers a more hopeful view of the world after AI, based on the creativity of uniquely individual human beings. Gilder thinks that the enormous tech companies will be replaced by flattened hierarchies in which people take ownership of their own data, cutting out the giant “middle man.” He calls the successor era he envisions the “cryptocosm,” referring to the private encryption Read More ›

Could one single machine invent everything?

The king was pleased with Schmedrik’s proposal. But just as he was about to hand over the requested amount, his wise advisor Previsio pulled him aside and whispered, “Dear king, before we pay Schmedrik his fee, do you not think it prudent to first determine if the Innovator works?” Read More ›

Philosopher suggests another reason why machines can’t think as we do

As philosopher Michael Polanyi has noted, much that we know is hard to codify or automate. From Denyse O’Leary at Mind Matters Today: We have all encountered that problem. It’s common in healthcare and personal counseling. Some knowledge simply cannot be conveyed—or understood or accepted—in a propositional form. For example, a nurse counselor may see clearly that her elderly post-operative patient would thrive better in a retirement home than in his rundown private home with several staircases. The analysis, as such, is straightforward. But that is not the challenge the nurse faces. Her challenge is to convey to the patient, not the information itself, but her tacit knowledge that the proposed move would liberate, rather than restrict, him. More. Reality Read More ›

Polanyi’s Paradox: Why machines can’t think as we do

From Mind Matters Today: Recently, we looked at Moravec’s Paradox, the fact that it is hard to teach machines to do things that are easy for most humans (walking, for example) but comparatively easy to teach them things that are challenging for most humans (chess comes to mind). Another paradox worth noting is Polanyi’s Paradox, named in honor of philosopher Michael Polanyi (1891-1976), who developed the concept of “tacit knowledge” … … Here’s [Polanyi’s] Paradox, as formulated by law professor John Danaher, who studies emerging technologies, at his blog Philosophical Disquisitions: We can know more than we can tell, i.e. many of the tasks we perform rely on tacit, intuitive knowledge that is difficult to codify and automate. We have Read More ›

Coffee!! Is a Politically Correct chatbot as bad as Twitter? Worse?

From Mind Matters Today: Many tweaks later, is Zo correct enough? Is everyone pleased? Well, maybe the digital teen is too Correct now. From Quartz, where Chloe Rose Stuart-Ulin has been checking in with Zo for over a year and finds her “sort of convincing,” speaking “fluent meme”: But there’s a catch. In typical sibling style, Zo won’t be caught dead making the same mistakes as her sister. No politics, no Jews, no red-pill paranoia. Zo is politically correct to the worst possible extreme; mention any of her triggers, and she transforms into a judgmental little brat. One wonders, what is the market potential for judgmental little brats? More. See also: GIGO alert: AI can be racist and sexist, researchers Read More ›

AI That Can Read Minds? Deconstructing AI Hype

From computer engineering prof Robert J. Marks at Mind Matters Today: Fake and misleading AI news is everywhere today. Here’s an example I ran across recently: A headline from a large-circulation daily’s web page screams: “No more secrets! New mind-reading machine can translate your thoughts and display them as text INSTANTLY!” Not just “instantly,” notice, but “INSTANTLY!” The Daily Mail is the United Kingdom’s second-biggest-selling daily newspaper. … As with all hype, there is some truth in the piece. A headline like “New AI outperforms humans by a factor of a BILLION!” could be written about a calculator that computes specific values of trig functions. Calculating the cosine of 27.3 degrees from scratch to six significant places is a laborious Read More ›
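Marks’s calculator example is easy to make concrete. Below is a minimal Python sketch (my illustration, not code from his article) that approximates the cosine of 27.3 degrees “from scratch” using the first few terms of the Taylor series, exactly the kind of laborious hand calculation a dumb machine finishes instantly:

```python
import math

def cos_taylor(x_radians, terms=8):
    """Approximate cos(x) with the first `terms` terms of its Taylor series:
    cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ..."""
    return sum((-1) ** n * x_radians ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

x = math.radians(27.3)
print(f"{cos_taylor(x):.6f}")  # 0.888617, agreeing with math.cos(x) to six places
```

Blinding speed at a well-specified numerical task like this says nothing about mind reading, which is the point of the hype warning.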

Is Bitcoin Safe? Why the human side of security is critical

From Jonathan Bartlett at Mind Matters Today: Bitcoin solves a lot of tough problems in very ingenious ways. Unfortunately, however, those benefits don’t tend to translate well for end users, who are not nearly as ingenious as the people developing the system. More. Readers will recognize Jonathan Bartlett, Research and Education Director of the Blyth Institute, as a longtime author here.

At Mind Matters Today: AI is not (yet) an intelligent cause

From Mind Matters Today: A recent conference raises concerns, according to Science Magazine, that our machines may never be able to get wise to human deviancy. So-called “white hat” hackers who test the security of AI have found it surprisingly easy to fool. Matthew Hutson reports, Last week, here at the International Conference on Machine Learning (ICML), a group of researchers described a turtle they had 3D printed. Most people would say it looks just like a turtle, but an artificial intelligence (AI) algorithm saw it differently. Most of the time, the AI thought the turtle looked like a rifle. Similarly, it saw a 3D-printed baseball as an espresso. These are examples of “adversarial attacks”—subtly altered images, objects, or sounds Read More ›
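For readers curious how researchers generate such “adversarial attacks,” the sketch below shows one standard recipe, the fast gradient sign method. It is a generic illustration only, not the 3D-printed-turtle attack reported at ICML; `model`, `image`, and `label` are assumed placeholders for any differentiable image classifier, a batched input tensor, and its true class index.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` using the
    fast gradient sign method (a sketch, not the ICML attack)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong is the model now?
    loss.backward()                              # gradient of loss w.r.t. pixels
    # Nudge every pixel a tiny step in the direction that increases the loss;
    # the change is typically invisible to a human but can flip the label.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Because epsilon is kept small, the altered image looks unchanged to a person, which is exactly why a turtle being classified as a rifle is so unsettling.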

Bill Dembski on how AI can solve our problems…

… maybe by changing the landscape in ways we might not like. Referring to a mathematical concept discussed by Bertrand Russell, he calls it “theft” vs. “honest toil.” From Bill Dembski at Mind Matters Today: AI (artificial intelligence) poses a challenge to human work, threatening to usurp many human jobs in coming years. But a related question that’s too often ignored and needs to be addressed is whether this challenge will come from AI in fact being able to match and exceed human capabilities in the environments in which humans currently exercise those capabilities, or whether it will come from AI also manipulating our environments so that machines thrive where otherwise they could not. AI never operates in a vacuum. Read More ›

Transhumanism is a curious blip…

… in a science and technology culture in which it is otherwise axiomatic that humans are merely evolved animals. So, can we cheat death by uploading ourselves as virtual AI entities? Cheating death is a serious goal of some transhumanists. Futurist Ray Kurzweil (now a Google innovator) calls such a digital fate the Singularity, as in his book, The Singularity Is Near: When Humans Transcend Biology. Published in 2006, it is still in the top ten in artificial intelligence and biotechnology. In 2017, he announced, 2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence Read More ›

Neurosurgeon outlines why machines can’t think

From Michael Egnor at Mind Matters Today: The hallmark of human thought is meaning, and the hallmark of computation is indifference to meaning. That is, in fact, what makes thought so remarkable and also what makes computation so useful. You can think about anything, and you can use the same computer to express your entire range of thoughts because computation is blind to meaning. Thought is not merely not computation. Thought is the antithesis of computation. Thought is precisely what computation is not. Thought is intentional. Computation is not intentional. A reader may object at this point that the output of computation seems to have meaning. After all, the essay was typed on a computer. Yes, but all of the Read More ›

Bill Dembski: Descartes (1596-1650) could tell you why “smart machines” are stalled

From design theorist William Dembski at Mind Matters Today: The computational literature on No Free Lunch theorems and Conservation of Information (see the work of David Wolpert and Bill Macready on the former as well as that of Robert J. Marks and myself on the latter) implies that all problem-solving algorithms, including such a master algorithm, must be adapted to specific problems. Yet a master algorithm must also be perfectly general, transforming AI into a universal problem solver. The No Free Lunch theorem and Conservation of Information demonstrate that such universal problem solvers do not exist. Yet what algorithms can’t do, humans can. True intelligence, as exhibited by humans, is a general faculty for taking wide-ranging, diverse abilities for solving Read More ›
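For reference, the optimization form of the No Free Lunch theorem that Wolpert and Macready proved can be paraphrased as below; the notation is theirs (d_m^y is the sequence of cost values seen after m evaluations), and this is my summary, not a quotation from Dembski’s article.

```latex
% No Free Lunch (Wolpert & Macready, 1997), paraphrased:
% summed over all possible objective functions f, any two search
% algorithms a_1 and a_2 yield the same distribution of cost histories.
\[
  \sum_{f} P\!\left(d_m^{y} \mid f, m, a_1\right)
  = \sum_{f} P\!\left(d_m^{y} \mid f, m, a_2\right)
\]
```

Averaged over every possible problem, no search algorithm outperforms any other, which is the sense in which a perfectly general “master algorithm” gains nothing for free.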

Bill Dembski: Machines will never supersede humans!

On July 11, the Walter Bradley Center for Natural and Artificial Intelligence officially launched and design theorist William Dembski offered some thoughts: The Walter Bradley Center, to the degree that it succeeds, will not merely demonstrate a qualitative difference between human and machine intelligence; more so, it will chart how humans can thrive in a world of increasing automation. … Yet the Walter Bradley Center exists not merely to argue that we are not machines. Yes, singularity theorists and advocates of strong AI continue to vociferate, inflating the prospects and accomplishments of artificial intelligence. They need a response, if only to ensure that silence is not interpreted as complicity or tacit assent. But if arguing, even persuasively, with a Ray Read More ›