Uncommon Descent Serving The Intelligent Design Community

Henry Kissinger: The End of the Enlightenment dawns, due to artificial intelligence

Henry Kissinger (1923-)

Readers may remember Henry Kissinger, a 70s-era American diplomat (“U.S. secretary of state under Richard Nixon, winning the 1973 Nobel Peace Prize for the Vietnam War accords”). From Kissinger at The Atlantic:

How the Enlightenment Ends: Philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence.

As the internet and increased computing power have facilitated the accumulation and analysis of vast data, unprecedented vistas for human understanding have emerged. Perhaps most significant is the project of producing artificial intelligence—a technology capable of inventing and solving complex, seemingly abstract problems by processes that seem to replicate those of the human mind.

This goes far beyond automation as we have known it. Automation deals with means; it achieves prescribed objectives by rationalizing or mechanizing instruments for reaching them. AI, by contrast, deals with ends; it establishes its own objectives. To the extent that its achievements are in part shaped by itself, AI is inherently unstable. AI systems, through their very operations, are in constant flux as they acquire and instantly analyze new data, then seek to improve themselves on the basis of that analysis. Through this process, artificial intelligence develops an ability previously thought to be reserved for human beings. It makes strategic judgments about the future, some based on data received as code (for example, the rules of a game), and some based on data it gathers itself (for example, by playing 1 million iterations of a game).

Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions. In certain fields—pattern recognition, big-data analysis, gaming—AI’s capacities already may exceed those of humans. If its computational power continues to compound rapidly, AI may soon be able to optimize situations in ways that are at least marginally different, and probably significantly different, from how humans would optimize them. But at that point, will AI be able to explain, in a way that humans can understand, why its actions are optimal? Or will AI’s decision making surpass the explanatory powers of human language and reason? More.

This all sounds way overblown. It’s not clear that AI will have motives at all, other than those of the programmers, or that algorithms can produce vast amounts of new information that no human can understand. Granted, it is fashionable to freak out about such things, as Elon Musk and the late Stephen Hawking have done. When Kissinger was a household name, the computer was a dust collector in some eccentric’s basement and the Big Freakout was the Population Bomb. We wish that the freakout industry would organize itself efficiently, the way high fashion does, so that mavens can get a handle on what’s coming down the runway this season. Helps when doing research.

We’d suggest running this kind of stuff past computer engineering prof Robert Marks II, an author of Evolutionary Informatics, for an information theory perspective on the situation.

See also: Robert Marks on the Turing Test vs the Lovelace Test for computer intelligence


And you thought they were kidding?: First Church of AI

Kind of disappointing to read stuff from this important figure of the last century who is wandering out of his depth. I mean, how can he come up with this knee-slapper: "AI, by contrast, deals with ends; it establishes its own objectives. To the extent that its achievements …" blah blah. It makes me tired. The naïve reader of that mag probably thinks there is something profound here coming out of this smart guy. groovamos
