
AI, intelligent agency and the intersection with ID

This is a theme of increasing significance for the ID debate, but it also has overtones for an era in which AI technologies may be driving the next economic long wave.

Which is of immediate, global importance; hence the Perez idealised long-wave illustration:

[Figure: the Perez idealised technological long-wave diagram]

However, this is not about economics (save as a context for major trends) but about AI, intelligent agents as conceived under AI, and the intersection with ID, Intelligent Design.

Where, it is important to recognise that the concepts of intelligence and of agency we will increasingly encounter will be shaped by the dogmas of what is often termed Strong AI.

[Figure: the Derek Smith two-tier controller cybernetic model]

Techopedia summarises:

>>Strong artificial intelligence (strong AI) is an artificial intelligence construct that has mental capabilities and functions that mimic the human brain. In the philosophy of strong AI, there is no essential difference between the piece of software, which is the AI, exactly emulating the actions of the human brain, and actions of a human being, including its power of understanding and even its consciousness. >>

Already, this bristles with a whole worldview and exhibits some highly questionable radical secularist dogmas posing as matter-of-fact knowledge, duly dressed up in the computer scientist's version of the lab coat.

Now, Wikipedia is of course a handy reference for the conventional wisdom of our day, driven by the typically progressivist, radically secularist, scientism-based evolutionary materialistic perspectives we are already noticing. Here, then, is the opening of its article on AI:

>>Artificial intelligence (AI, also machine intelligence, MI) is intelligence displayed by machines, in contrast with the natural intelligence (NI) displayed by humans and other animals. In computer science AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.[2] See glossary of artificial intelligence.

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI as of 2017 include successfully understanding human speech,[5] competing at a high level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data, including images and videos.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[7][8] followed by disappointment and the loss of funding . . . >>

Notice the highlighted definition?

In computer science AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal . . .

Thus, Intelligent Design and the design inference are issues that are inextricably entangled with AI concerns regarding intelligence and agency.
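To make concrete just how thin that definition is, here is a minimal sketch of the "intelligent agent" abstraction as computer science uses it. The class and method names are merely illustrative (they are not drawn from Wikipedia or any cited text): an agent is anything that maps percepts to actions in pursuit of some externally supplied goal measure.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """The bare 'intelligent agent' abstraction: whatever perceives its
    environment and selects actions aimed at some goal measure."""

    @abstractmethod
    def perceive(self, environment):
        """Reduce the environment to the agent's internal percept."""

    @abstractmethod
    def act(self, percept):
        """Select an action given the current percept."""

def run_agent(agent, environment, steps=10):
    """Drive the perceive/act loop; 'success' is simply whatever score
    the environment's (illustrative) goal measure reports at the end."""
    for _ in range(steps):
        percept = agent.perceive(environment)
        environment.apply(agent.act(percept))   # illustrative environment API
    return environment.goal_measure()
```

Nothing in this loop requires understanding, intention or self-moved initiative; anything that closes the loop counts as an "agent" on this definition.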

[Figure: a “reflex agent” intelligent agent, per Wikipedia, reflecting conventional wisdom on AI]

However, in looking at Intelligent Agents, Wiki observes:

A reflex machine, such as a thermostat, is considered an example of an intelligent agent.

Let us now look at a fair-use clipping or two from a current text by Poole and Mackworth, which I headline from the ongoing Voynich Manuscript thread:

KF, 7: >>I notice a discussion of agency, from the AI paradigm:

It is important to distinguish between the knowledge in the mind of the designer and the knowledge in the mind of the agent. Consider the extreme cases:

* At one extreme is a highly specialized agent that works well in the environment for which it was designed, but is helpless outside of this niche. The designer may have done considerable work in building the agent, but the agent may not need to do very much to operate well. An example is a thermostat. It may be difficult to design a thermostat so that it turns on and off at exactly the right temperatures, but the thermostat itself does not have to do much computation. Another example is a car painting robot that always paints the same parts in an automobile factory. There may be much design time or offline computation to get it to work perfectly, but the painting robot can paint parts with little online computation; it senses that there is a part in position, but then it carries out its predefined actions. These very specialized agents do not adapt well to different environments or to changing goals. The painting robot would not notice if a different sort of part were present and, even if it did, it would not know what to do with it. It would have to be redesigned or reprogrammed to paint different parts or to change into a sanding machine or a dog washing machine.

* At the other extreme is a very flexible agent that can survive in arbitrary environments and accept new tasks at run time. Simple biological agents such as insects can adapt to complex changing environments, but they cannot carry out arbitrary tasks. Designing an agent that can adapt to complex environments and changing goals is a major challenge. The agent will know much more about the particulars of a situation than the designer. Even biology has not produced many such agents. Humans may be the only extant example, but even humans need time to adapt to new environments.

Even if the flexible agent is our ultimate dream, researchers have to reach this goal via more mundane goals. Rather than building a universal agent, which can adapt to any environment and solve any task, they have built particular agents for particular environmental niches.

The mind-set is clear.

It would seem obvious that a thermostat is a simple regulator using negative feedback and well-known control-loop dynamics, not an agent in any sense worth talking about.
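[Insert: to make this concrete, a household thermostat reduces to a few lines of on/off (bang-bang) control with hysteresis. A minimal sketch, with purely illustrative set-points:

```python
def thermostat_step(temperature_c, heater_on, set_point_c=20.0, band_c=0.5):
    """One pass of a bang-bang negative-feedback regulator: switch the
    heater on below the lower threshold, off above the upper threshold,
    and otherwise leave it alone (the dead band prevents chatter)."""
    if temperature_c < set_point_c - band_c:
        return True           # too cold: heat
    if temperature_c > set_point_c + band_c:
        return False          # too warm: stop heating
    return heater_on          # inside the dead band: no change

print(thermostat_step(19.2, heater_on=False))  # True: the loop switches the heater on
```

By the broad definition above this closes a perceive/act loop and so counts as an "agent," yet it is nothing but a predefined negative-feedback rule.]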

As to simply slipping in the word “mind,” that is itself suggestive of anthropomorphising.

Going further, a robot is a fairly complex cybernetic system, but it is in the end an extension of numerical control of machines and of automation, though there is some inherent flexibility in developing a reprogrammable manipulator-arm that can use various tool-tips.

The complaint about want of adaptability points to the root cause of the performance: programming. Where, obviously, programming a detailed, step-by-step response to an indefinitely wide array of often unforeseen circumstances is a futile supertask.

Programming in common sense, deep understanding of language and of visual-spatial environments also seems to be difficult. So instead, there has been a shift towards so-called learning machines, which is where the AI approach comes in. The idea is to put enough in for the machine to teach itself the rest. But is it really teaching itself, so that it understands, forms responsible goals, makes free and rational decisions, then supervises its interactions towards its goal?
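[Insert: mechanically, such "learning" is rule-driven parameter adjustment. A minimal sketch of the classic perceptron update, run on made-up, illustrative data, shows what the machine is actually doing:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron rule: whenever the current weights misclassify
    a sample, nudge them toward the correct label. The 'learning' is
    nothing over and above this mechanical update."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):               # y is +1 or -1
            score = sum(w * xi for w, xi in zip(weights, x)) + bias
            if (1 if score >= 0 else -1) != y:          # error-driven update
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                bias += lr * y
    return weights, bias

# Toy, linearly separable data: points with x1 + x2 > 1 are labelled +1.
print(train_perceptron([(0.0, 0.0), (1.0, 1.0), (0.2, 0.3), (0.9, 0.8)],
                       [-1, 1, -1, 1]))
```

Whether that amounts to teaching itself, understanding, or forming goals is precisely the question at issue.]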

And, doubtless, more . . . .

The same authors (Poole and Mackworth) define and expand:

Artificial intelligence, or AI, is the field that studies the synthesis and analysis of computational agents that act intelligently. [–> instantly, of high relevance to ID] Let us examine each part of this definition.

An agent is something that acts in an environment; it does something. [–> far too broad] Agents include worms, dogs, thermostats [–> that’s a negative f/b loop regulator not a self-moved initiating causal entity], airplanes, robots, humans, companies, and countries.

We are interested in what an agent does; that is, how it acts. We judge an agent by its actions.

An agent acts intelligently when

* what it does is appropriate for its circumstances and its goals, taking into account the short-term and long-term consequences of its actions [–> for agency, goals must be freely chosen, not preprogrammed or controlled]

* it is flexible to changing environments and changing goals

* it learns from experience [–> what is learning without understanding?]

* it makes appropriate choices [–> is pre-programmed branching actual choice?] given its perceptual and computational limitations

A computational agent is an agent whose decisions about its actions can be explained in terms of computation. [–> is computation equivalent to rational contemplation?]

[Pardon an insert:]

That is, the decision can be broken down into primitive operations that can be implemented in a physical device. [–> stepwise signal processing based action per functional organisation and algorithm-driven programming] This computation can take many forms. In humans this computation is carried out in “wetware”; [–> huge assumption just put down as though it were established fact] in computers it is carried out in “hardware.” Although there are some agents that are arguably not computational, such as the wind and rain eroding a landscape [–> agency just lost any definite meaning if this is taken literally: agent = entity, structure or phenomenon with dynamic processes], it is an open question whether all intelligent agents are computational.

All agents are limited. No agents are omniscient or omnipotent. [–> huge worldview level questions not needed for an AI course] Agents can only observe everything about the world in very specialized domains, where “the world” is very constrained. Agents have finite memory. Agents in the real world do not have unlimited time to act. [–> implicit physicalism]

The central scientific goal of AI is to understand the principles that make intelligent behavior possible in natural or artificial systems [–> so, a necessary intersection with ID] . . . >>

Remember, this is not just Wikipedia at this point; we have here a college-level AI textbook, published in its 2nd edition by Cambridge University Press. This is what students of computer science will be taught as more or less established knowledge, given that AI is embedded in programme requirements for accreditation.
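Before turning to a classical alternative, it is worth making concrete what a "decision that can be explained in terms of computation" looks like in practice. Here is a minimal, purely illustrative sketch (the percepts and actions are invented for the example), which reduces the "choice" to preprogrammed branching over primitive operations:

```python
def choose_action(percept):
    """A 'decision' in the computational-agent sense: a fixed table of
    condition -> action branches, executed as primitive operations."""
    if percept == "obstacle_ahead":
        return "turn_left"
    if percept == "low_battery":
        return "return_to_dock"
    return "move_forward"

for p in ("obstacle_ahead", "low_battery", "clear_path"):
    print(p, "->", choose_action(p))
```

Whether such branching is "choice" in any sense relevant to responsible, rational agency is exactly the question flagged in the annotations above.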

As a first, classical pointer to an alternative, let us pause to see Plato in The Laws Bk X (yes, when we go in a given direction we find Plato, Socrates and Aristotle on the way back). Here is his Athenian Stranger in action:

>>Ath. . . . when one thing changes another, and that another, of such will there be any primary changing element? How can a thing which is moved by another ever be the beginning of change? Impossible. But when the self-moved changes other, and that again other, and thus thousands upon tens of thousands of bodies are set in motion, must not the beginning of all this motion be the change of the self-moving principle? . . . . self-motion being the origin of all motions, and the first which arises among things at rest as well as among things in motion, is the eldest and mightiest principle of change, and that which is changed by another and yet moves other is second.

[ . . . .]

Ath. If we were to see this power existing in any earthy, watery, or fiery substance, simple or compound-how should we describe it?

Cle. You mean to ask whether we should call such a self-moving power life?

Ath. I do.

Cle. Certainly we should.

Ath. And when we see soul in anything, must we not do the same-must we not admit that this is life?[ . . . . ]

Cle. You mean to say that the essence which is defined as the self-moved is the same with that which has the name soul?

Ath. Yes; and if this is true, do we still maintain that there is anything wanting in the proof that the soul is the first origin and moving power of all that is, or has become, or will be, and their contraries, when she has been clearly shown to be the source of change and motion in all things?

Cle. Certainly not; the soul as being the source of motion, has been most satisfactorily shown to be the oldest of all things.

Ath. And is not that motion which is produced in another, by reason of another, but never has any self-moving power at all, being in truth the change of an inanimate body, to be reckoned second, or by any lower number which you may prefer?  

Cle. Exactly.  

Ath. Then we are right, and speak the most perfect and absolute truth, when we say that the soul is prior to the body, and that the body is second and comes afterwards, and is born to obey the soul, which is the ruler?

[ . . . . ]

Ath. If, my friend, we say that the whole path and movement of heaven, and of all that is therein, is by nature akin to the movement and revolution and calculation of mind, and proceeds by kindred laws, then, as is plain, we must say that the best soul takes care of the world and guides it along the good path. [Plato here explicitly sets up an inference to design (by a good soul) from the intelligible order of the cosmos.]>>

A beginning of a very different approach, and food for thought. END

Comments
Followed up: https://uncommondescent.com/informatics/ai-state-configuration-space-search-and-the-id-search-challenge/ (kairosfocus, February 2, 2018, 01:52 AM PDT)
Well worth a chuckle, Latemarch! (kairosfocus, February 1, 2018, 05:50 AM PDT)
Excellent post KF. It brought to mind the evolution of AI. It all began with lightning (electrons) striking rocks (silicon) for billions of years (might a nearby warm pond be helpful?) until now we have the delicate motions of electrons thru silicon that we know of as computers. The software is the result of random noise in the bits and bytes of the operating system (we're still working out how that originated. Any day now!) that were duplicated as a separate file and eventually, driven by natural selection, resulting in the wonderful programs we enjoy today. At the furious rate of evolution we see today I expect to be able to salute our machine overlords any day now. (Latemarch, February 1, 2018, 05:45 AM PDT)
AI, intelligent agency and the intersection with ID (kairosfocus, February 1, 2018, 02:46 AM PDT)
