Uncommon Descent Serving The Intelligent Design Community

Logic & First Principles, 21: Insightful intelligence vs. computationalism


One of the challenges of our day is the commonplace reduction of intelligent, insightful action to computation on a substrate. That’s not just Sci-Fi; it is a challenge in the academy and on the street, especially as AI grabs more and more headlines.

A good stimulus for thought is John Searle as he further discusses his famous Chinese Room example:

The Failures of Computationalism
John R. Searle
Department of Philosophy
University of California
Berkeley CA

The Power in the Chinese Room.

Harnad and I agree that the Chinese Room Argument deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let’s begin by pondering the implications of the Chinese Room.

The Chinese Room shows that a system, me for example, could pass the Turing Test for understanding Chinese, for example, and could implement any program you like and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese Room do not have?

The answer is obvious. I, in the Chinese room, am manipulating a bunch of formal symbols; but the Chinese speaker has more than symbols, he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.

But, once again, why?

Why can’t I in the Chinese room also have a semantics? Because all I have is a program and a bunch of symbols, and programs are defined syntactically in terms of the manipulation of the symbols.

The Chinese room shows what we should have known all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)

Why did the old time computationalists make such an obvious mistake? Part of the answer is that they were confusing epistemology with ontology, they were confusing “How do we know?” with “What it is that we know when we know?”

This mistake is enshrined in the Turing Test (TT). Indeed this mistake has dogged the history of cognitive science, but it is important to get clear that the essential foundational question for cognitive science is the ontological one: “In what does cognition consist?” and not the epistemological other minds problem: “How do you know of another system that it has cognition?”

What is the Chinese Room about? Searle, again:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else. [Cf. Jay Richards here.]
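The bare mechanics Searle describes can be sketched in a few lines. The toy “room” below (its rule-book entries are invented purely for illustration, not real conversational data) handles a crude question-and-answer exchange by shape-matching alone; nothing in it touches what any symbol means:

```python
# A toy "Chinese room": replies are produced by pure symbol lookup.
# The rule book is a made-up illustration.
RULE_BOOK = {
    "你好吗": "我很好",      # "How are you?" -> "I am fine" (unknown to the room)
    "你是谁": "我是学生",    # "Who are you?" -> "I am a student"
}

def room(symbols: str) -> str:
    # Purely syntactic: match shapes in, emit shapes out. No semantics anywhere.
    return RULE_BOOK.get(symbols, "请再说一遍")  # default: "please say that again"
```

However large the rule book grows, the operator’s situation is unchanged: the lookup never consults meaning, which is exactly Searle’s point.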

What is “strong AI”? Techopedia:

Strong artificial intelligence (strong AI) is an artificial intelligence construct that has mental capabilities and functions that mimic the human brain. In the philosophy of strong AI, there is no essential difference between the piece of software, which is the AI, exactly emulating the actions of the human brain, and actions of a human being, including its power of understanding and even its consciousness.

Strong artificial intelligence is also known as full AI.

In short, Reppert has a serious point:

. . . let us suppose that brain state A [–> notice, state of a wetware, electrochemically operated computational substrate], which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief [–> conscious, perceptual state or disposition] that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

This brings up the challenge that computation [on refined rocks] is not rational, insightful, self-aware, semantically based, understanding-driven contemplation:

While this is directly about digital computers — oops, let’s see how they work —

. . . but it also extends to analogue computers (which use smoothly varying signals):

. . . or a neural network:

A neural network is essentially an interconnected array of weighted-sum gates; it is not an exception to the GIGO principle.
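To make the point concrete, here is a minimal sketch (weights and inputs are arbitrary illustrative values) of what one such gate actually does: a weighted sum of input signals pushed through a threshold, i.e. mechanical signal manipulation and nothing more:

```python
def neuron(inputs, weights, bias=0.0):
    # Weighted sum of input signals, then a hard threshold: that is all.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0
```

Feed it garbage weights or garbage inputs and it will mechanically emit garbage, which is the GIGO principle at work.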

A similar approach uses memristors, creating an analogue weighted sum vector-matrix operation:
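What such a crossbar computes can be sketched digitally (the conductance values below are arbitrary illustrations; in hardware, Ohm’s law and Kirchhoff’s current law perform this in a single analogue step):

```python
def crossbar(G, v):
    # Column current I_j = sum_i G[i][j] * v[i]: an analogue
    # matrix-vector product realised as conductances times voltages.
    return [sum(G[i][j] * v[i] for i in range(len(v)))
            for j in range(len(G[0]))]
```

Again, this is signal manipulation through physical interactions, with no semantics in view.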

As we can see, these entities are about manipulating signals through physical interactions, not essentially different from Leibniz’s grinding mill wheels in Monadology 17:

It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception [i.e. abstract conception]. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought . . .

In short, computationalism falls short.

I add [Fri May 31] that computational substrates are forms of general dynamic-stochastic systems and are subject to their limitations:

The alternative is, a supervisory oracle-controlled, significantly free, intelligent and designing bio-cybernetic agent:

As context (HT Wiki) I add [June 10] a diagram of a Model Identification Adaptive Controller . . . which, yes, identifies a model for the plant and updates it as it goes:

MIAC action: notice supervisory control, with observation of “visible” outputs fed back both to the in-loop controller and to the system-identification block, which creates and updates a model of the plant being controlled. Parallels to the Smith model are obvious.
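A minimal numerical sketch of the MIAC idea (plant gain, learning rate and setpoint are invented for illustration): the system-identification block refines a model of an unknown plant from observed outputs, and the in-loop controller computes its next input from the current model:

```python
# Toy Model Identification Adaptive Control loop. The plant gain 'a'
# is hidden from the controller; the system-ID block estimates it
# online from observed outputs.
def run_miac(a=2.5, r=1.0, steps=100, lr=0.5):
    a_hat = 1.0                # initial plant-model estimate
    y = 0.0
    for _ in range(steps):
        u = r / a_hat          # control law based on the identified model
        y = a * u              # true (hidden) plant response
        # system ID: nudge the model toward the observed output
        a_hat += lr * (y - a_hat * u) * u
    return a_hat, y
```

After enough iterations the model estimate converges on the true plant gain and the output tracks the setpoint, illustrating the identify-and-update loop in the diagram.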

As I summarised recently:

What we actually observe is:

A: [material computational substrates] –X –> [rational inference]
B: [material computational substrates] —-> [mechanically and/or stochastically governed computation]
C: [intelligent agents] —-> [rational, freely chosen, morally governed inference]
D: [embodied intelligent agents] —-> [rational, freely chosen, morally governed inference]

The set of observations A through D implies that intelligent agency transcends computation, as the characteristics and capabilities of intelligent agents are not reducible to:

– components and their device physics,
– organisation as circuits and networks [e.g. gates, flip-flops, registers, operational amplifiers (especially integrators), ball-disk integrators, neuron-gates and networks, etc],
– organisation/architecture forming computational circuits, systems and cybernetic entities,
– input signals,
– stored information,
– processing/algorithm execution,
– outputs

It may be useful to add here a simplified Smith model, with an in-the-loop computational controller and an out-of-the-loop supervisory oracle, so that there may be room for pondering the bio-cybernetic system in light of the interface between the computational entity and the oracular entity:

The Derek Smith two-tier controller cybernetic model

In more detail, per Eng. Derek Smith:
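As a very rough sketch of the two-tier idea (all numbers invented for illustration): a lower-tier, in-the-loop controller tracks a goal, while an upper, supervisory tier observes performance and retunes the lower tier — standing in, crudely, for the oracle’s oversight. This is not Smith’s own formulation, just a toy analogue:

```python
# Two-tier controller sketch: lower tier runs a proportional loop;
# upper tier supervises and retunes a sluggish lower tier.
def two_tier(setpoint=1.0, steps=60):
    gain, y = 0.2, 0.0            # lower-tier gain, plant state
    for t in range(steps):
        err = setpoint - y
        y += gain * err           # lower tier: in-loop proportional step
        # upper tier: supervisory check every 10 steps
        if t % 10 == 9 and abs(err) > 0.1:
            gain = min(gain * 1.5, 0.9)   # retune the lower loop
    return y
```

The point of the sketch is architectural: the supervisory tier does not push signals through the plant itself; it observes and adjusts the in-loop tier.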

So too, we have to face the implication that rationality requires freedom. That is, our minds are governed by known, inescapable duties to truth, right reason, prudence (so, warrant), fairness, justice, etc. Rationality is morally governed; it inherently exists on both sides of the IS-OUGHT gap.

That means — on pain of reducing rationality to nihilistic chaos and absurdity — that the gap must be bridged. Post Hume, it is known that that can only be done in the root of reality. Arguably, that points to an inherently good necessary being with capability to found a cosmos. If you doubt, provide a serious alternative under comparative difficulties: ____________

So, as we consider debates on intelligent design, we need to reflect on what intelligence is, especially in an era when computationalism is a dominant school of thought. Yes, we may come to various views, but the above are serious factors we need to take into account. END

PS: As a secondary exchange developed on quantum issues, I take the step of posting a screenshot from a relevant Wikipedia clip on the 1999 delayed-choice quantum eraser experiment by Kim et al:

Wiki clip on Kim et al

The layout in a larger scale:

Gaasbeek adds:

Weird, but that’s what we see. Notice, especially, Gaasbeek’s observation on his analysis, that “the experimental outcome (encoded in the combined measurement outcomes) is bound to be the same even if we would measure the idler photon earlier, i.e. before the signal photon by shortening the optical path length of the downwards configuration.” This is the point made in a recent SEP discussion on retrocausality.

PPS: Let me also add, on radio halos:

and, Fraunhofer spectra:

These document natural detection of quantised phenomena.


Comments
Now I have a hypothetical for you, KF. Under the premise of computational innovations, I was wondering what you thought, or how you felt, about these possibilities. In a way it almost seems like there is a drive to prove that the human mind is nothing more than a meat machine. Therefore any purely physical system capable of reproducing exactly what we do would show that many of the faculties that we think are uniquely human, or products of the soul, are nothing more than neurological processes in our brain. I sadly feel that that is their goal when they’re trying to create neural networks and software programs that mimic our intelligence. The other thing that they’re trying to do is train a neural network to predict 100% of our choices and brain activity. Now even though I am sure 100% neural prediction is impossible to do for everyone as a whole, it is still possible for a neural network to be trained to predict at least one human being 100%, or that’s at least what they suggest. I think both of these can be viewed as blows towards the possibility of an immaterial soul or an immaterial mind, and also a major blow against free will. So what do you think? I would like to hear your input on this.
AaronS1978
May 29, 2019, 10:16 PM PDT
There is a stark contrast between a silicon-based machine created with the sole purpose of emulating what we think and what we do. Comically, we ended up creating a machine with the duality of software and hardware. Nobody ever makes that distinction or points out that the very thing, our computers, didn’t exist until about 70 years ago, but is currently being used as an analogy for our brain and our mind. We’re literally putting the cart before the horse, as it is our brain that created the computer and our mind that drives it. We often make these analogies because they do similar things. But they do similar things because we made them do similar things, based off of our understanding of ourselves and what we needed. Our brain isn’t just a computer, it’s a living organism. A computer is not a living organism, by any means. There are a myriad of differences between the brain and a computer. But there is one string of logic I would like to point out, which was a statement made by Christof Koch: “A software program can faithfully simulate every droplet of water in a storm but that program will never be wet.” Now Christof might have been quoting someone else, but I love that quote because of the fact that it does show a fundamental difference between plastic, metal and electricity and our carbon-based, beautiful brains that rewire themselves when they need to. We might be able to simulate every single thought that we have, but it will never be anything more than a simulation, and that’s it. It will never be wet, much like a program created to simulate every drop of a storm and the storm itself will never produce moisture. The program can never produce wetness, much like it will probably never produce consciousness, unless we made it “alive”.
AaronS1978
May 29, 2019, 07:40 PM PDT
F/N: More on Computationalism: >>Computationalism in the Philosophy of Mind Gualtiero Piccinini University of Missouri – St. Louis Computationalism has been the mainstream view of cognition for decades. There are periodic reports of its demise, but they are greatly exaggerated. This essay surveys some recent literature on computationalism and reaches the following conclusions. Computationalism is a family of theories about the mechanisms of cognition. The main relevant evidence for testing computational theories comes from neuroscience, though psychology and AI are relevant too. Computationalism comes in many versions, which continue to guide competing research programs in philosophy of mind as well as psychology and neuroscience. Although our understanding of computationalism has deepened in recent years, much work in this area remains to be done . . . . Computationalism is the view that intelligent behavior is causally explained by computations performed by the agent’s cognitive system (or brain). 1 In roughly equivalent terms, computationalism says that cognition is computation. Computationalism has been mainstream in philosophy of mind – as well as psychology and neuroscience – for several decades. Many aspects of computationalism have been investigated and debated in recent years. Several lessons are being learned: (1) computationalism is consistent with different metaphysical views about the mind, (2) computationalism must be grounded in an adequate account of computation, (3) computationalism provides a mechanistic explanation of behavior, (4) computationalism was originally introduced on the grounds of neurological evidence, (5) all computationalists (yes, even classicists) are connectionists in the most general sense, although not all connectionists are computationalists, and (6) which, if any, variety of computationalism is correct depends on how the brain works.>> Also: >>Computationalism, Connectionism, and the Philosophy of Mind Brian P.
McLaughlin The central questions of the philosophy of mind are the nature of mental phenomena, and how mental phenomena fit into the causal structure of reality. The computational theory of mind aims to answer these questions. The central tenet of the theory is that a mind is a computer. According to the theory, mental states and events enter into causal relations via operations of the computer. The main aim of the theory is to say what kind of computer – what kind of computational mechanism – a mind is. The answer is still unknown. Pursuing it is the main research program of the theory. In the most general sense, a computer is, roughly, a system of structures functionally organized in such a way as to be able to compute. The structures, their functional organization, and the basic modes of operation of the system when it computes comprise the functional architecture of the computer. The two tasks of the computational theory of mind are: (1) to identify the functional architecture of the computing system that grounds our mental abilities and (2) to explain how those abilities are exercised via operations of the system. The tasks are related. The explanation of how operations of the system constitute exercises of our mental abilities will justify the claim that our possession of those abilities consists in our being at least partly constituted by the system. Computationalists hold that the functional architecture of the computing system that grounds our mental abilities resides in our brains. There is, however, no consensus as to what even the general character of that architecture is. The symbols-system paradigm and the connectionist paradigm are the two dominant research paradigms within the computational theory of mind. They differ primarily in what kind of computer the mind is assumed to be, and thus in the kinds of functional architectures explored.
The symbol-system paradigm presupposes that the mind is a kind of automatic formal system, while the connectionist paradigm presupposes that it is a system of connectionist networks.>> Showing the general lie of the land. KF
kairosfocus
May 29, 2019, 11:42 AM PDT
F/N: As I began my discussion on mind in IOSE, this is where I began: >>The first and most directly evident fact of “man-nishness” is that we are individual, conscious, intelligent, purposeful, designing, minded beings; with consciences. Thus, Aristotle long ago observed that rational animality is the essence of being human. So, not only must we be able to credibly account for our anatomical similarity to the mammalian primates (including the commonly made claim that our genes show a "98%" overlap with those of the chimpanzees), but also for the things that seem to make us unique: that pattern of conscious, language-using abstract reasoning, intuitiveness and sense of obligation to the truth and the right that embraces both the intellectual and the moral. (a) What is “mind”? We have always wondered about where we came from, and why we so obviously share bodily existence with the broad world of animals, but simultaneously seem to be ever so distinctively different from what some have called "dumb animals." The word "dumb" offers a key clue: man is the user of words, those symbolic sounds and pen-strokes that are so important in both practical survival and abstract thought. So, no scientific account of man can be correct or credible, if it cannot coherently and satisfactorily account for not just the bodily facts of man, but the common evidence of our inner life. For, we are only aware of and can only analyse and argue about our bodily existence and the external world through the instrumentality of our inner life of the mind. In this sense, Descartes' "I think, so I exist," is undeniably and self-evidently true. This leads to what David Chalmers (1995) called The Hard Problem of Consciousness. As one might outline: >The term . . . refers to the difficult problem of explaining why we have qualitative phenomenal experiences. 
It is contrasted with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, etc. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomen[a]. Hard problems are distinct from this set because they "persist even when the performance of all the relevant functions is explained."> Let us note a key phrase: "the modern materialistic conception of natural phenomen[a]." That is, we again see the evolutionary materialistic presumption at work. As the University of California's Center for Evolutionary Psychology at Santa Barbara posits: >Evolutionary psychology is based on the recognition that the human brain consists of a large collection of functionally specialized computational devices that evolved to solve the adaptive problems regularly encountered by our hunter-gatherer ancestors. Because humans share a universal evolved architecture, all ordinary individuals reliably develop a distinctively human set of preferences, motives, shared conceptual frameworks, emotion programs, content-specific reasoning procedures, and specialized interpretation systems--programs that operate beneath the surface of expressed cultural variability, and whose designs constitute a precise definition of human nature.> But, the matter is not so simple as that, for as another generic source aptly summarises, there are many issues on mind and its relation to body, issues that (whether labelled science or not) are plainly directly relevant to any origins science project to account for the origin of man: [NWE, Mind:] >Mind is a concept developed by self-conscious humans trying to understand what is the self that is conscious and how does that self relate to its perceived world .
. . Aspects of mind are also attributed to complex animals, which are commonly considered to be conscious. Studies in recent decades suggest strongly that the great apes have a level of self-consciousness as well. Philosophers have long sought to understand what is mind and its relationship to matter and the body . . . Based on his world model that the perceived world is only a shadow of the real world of ideal Forms, Plato, a dualist, conceived of mind (or reason) as the facet of the tripartite soul that can know the Forms. The soul existed independent of the body, and its highest aspect, mind, was immortal. Aristotle, apparently both a monist and a dualist, insisted in The Soul that soul was unitary, that soul and body are aspects of one living thing, and that soul extends into all living things. Yet in other writings from another period of his life, Aristotle expressed the dualistic view that the knowing function of the human soul, the mind, is distinctively immaterial and eternal. Saint Augustine adapted from the Neoplatonism of his time the dualist view of soul as being immaterial but acting through the body. He linked mind and soul closely in meaning. Some 900 years later, in an era of recovering the wisdom of Aristotle, Saint Thomas Aquinas identified the species, man, as being the composite substance of body and soul (or mind), with soul giving form to body, a monistic position somewhat similar to Aristotle's. Yet Aquinas also adopted a dualism regarding the rational soul, which he considered to be immortal. Christian views after Aquinas have diverged to cover a wide spectrum, but generally they tend to focus on soul instead of mind, with soul referring to an immaterial essence and core of human identity and to the seat of reason, will, conscience, and higher emotions. Rene Descartes established the clear mind-body dualism that has dominated the thought of the modern West. 
He introduced two assertions: First, that mind and soul are the same and that henceforth he would use the term mind and dispense with the term soul; Second, that mind and body were two distinct substances, one immaterial and one material, and the two existed independent of each other except for one point of interaction in the human brain. In the East, quite different theories related to mind were discussed and developed by Adi Shankara, Siddhārtha Gautama, and other ancient Indian philosophers, as well as by Chinese scholars. As psychology became a science starting in the late nineteenth century and blossomed into a major scientific discipline in the twentieth century, the prevailing view in the scientific community came to be variants of physicalism with the assumption that all the functions attributed to mind are in one way or another derivative from activities of the brain. Countering this mainstream view, a small group of neuroscientists has persisted in searching for evidence suggesting the possibility of a human mind existing and operating apart from the brain. In the late twentieth century as diverse technologies related to studying the mind and body have been steadily improved, evidence has emerged suggesting such radical concepts as: the mind should be associated not only with the brain but with the whole body; and the heart may be a center of consciousness complementing the brain. [New World Enc., article, Mind]> So, we may pose a cluster of challenges in seeking a scientific account of our human-ness. 1 --> While the evolutionary materialists plainly dominate institutional science, it faces the hard -- and plainly unsolved -- problem of consciousness. 2 --> Available philosophical resources and the history of ideas suggest that alternative explanatory models will raise the issue of the reality of an immaterial mind.
3 --> A central challenge for any such alternative model, is whether it can produce empirically testable hypotheses, a key touchstone of science. 4 --> Equally, the materialistic approach must face the challenge as to whether its favoured methodological naturalism imposes an undue censorship that hobbles science from being able to be an unfettered (but intellectually and ethically responsible) pursuit of the truth about our world. 5 --> Similarly, we now must ask: what does (or should) "empirical" mean? And, does thoughtful reflection on our common inner life experience count as empirical evidence? Why, or why not? 6 --> If not, how can we then use the deliverances of said inner life as we undertake scientific activities, which, are plainly an intellectual – i.e. minded – exercise?>> KFkairosfocus
May 29, 2019, 11:23 AM PDT
F/N: More from Wikipedia on intelligence: >>The definition of intelligence is controversial.[5] Some groups of psychologists have suggested the following definitions: From "Mainstream Science on Intelligence" (1994), an op-ed statement in the Wall Street Journal signed by fifty-two researchers (out of 131 total invited to sign):[6] A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do.[7] From Intelligence: Knowns and Unknowns (1995), a report published by the Board of Scientific Affairs of the American Psychological Association: Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.[8] >> KFkairosfocus
May 29, 2019, 11:16 AM PDT
laying out issues
kairosfocus
May 29, 2019, 10:24 AM PDT
Are you feeling ignored over here? :)
Brother Brian
May 29, 2019, 05:12 AM PDT
PPS: Observe Wikipedia further: >>In philosophy, the computational theory of mind (CTM) refers to a family of views that hold that the human mind is an information processing system and that cognition and consciousness together are a form of computation. Warren McCulloch and Walter Pitts (1943) were the first to suggest that neural activity is computational. They argued that neural computations explain cognition.[1] The theory was proposed in its modern form by Hilary Putnam in 1967, and developed by his PhD student, philosopher and cognitive scientist Jerry Fodor in the 1960s, 1970s and 1980s.[2][3] Despite being vigorously disputed in analytic philosophy in the 1990s due to work by Putnam himself, John Searle, and others, the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology.[citation needed] In the 2000s and 2010s the view has resurfaced in analytic philosophy (Scheutz 2003, Edelman 2008).[citation needed] The computational theory of mind holds that the mind is a computational system that is realized (i.e. physically implemented) by neural activity in the brain. The theory can be elaborated in many ways and varies largely based on how the term computation is understood. Computation is commonly understood in terms of Turing machines which manipulate symbols according to a rule, in combination with the internal state of the machine. The critical aspect of such a computational model is that we can abstract away from particular physical details of the machine that is implementing the computation.[3] This is to say that computation can be implemented by silicon chips or neural networks, so long as there is a series of outputs based on manipulations of inputs and internal states, performed according to a rule. 
CTM, therefore holds that the mind is not simply analogous to a computer program, but that it is literally a computational system.[3] Computational theories of mind are often said to require mental representation because 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object, but must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that they both require that mental states are representations. However, the representational theory of mind shifts the focus to the symbols being manipulated. This approach better accounts for systematicity and productivity.[3] In Fodor's original views, the computational theory of mind is also related to the language of thought. The language of thought theory allows the mind to process more complex representations with the help of semantics. (See below in semantics of mental states). Recent work has suggested that we make a distinction between the mind and cognition. Building from the tradition of McCulloch and Pitts, the Computational Theory of Cognition (CTC) states that neural computations explain cognition.[1] The Computational Theory of Mind asserts that not only cognition, but also phenomenal consciousness or qualia, are computational. That is to say, CTM entails CTC. While phenomenal consciousness could fulfill some other functional role, computational theory of cognition leaves open the possibility that some aspects of the mind could be non-computational. CTC therefore provides an important explanatory framework for understanding neural networks, while avoiding counter-arguments that center around phenomenal consciousness. >>kairosfocus
May 29, 2019 at 05:00 AM PDT
F/N: TFD cluster on Intelligence: >> intelligence Also found in: Thesaurus, Medical, Legal, Financial, Acronyms, Idioms, Encyclopedia, Wikipedia. Related to intelligence: intelligence test, military intelligence

in·tel·li·gence (ĭn-tĕl′ə-jəns) n. 1. The ability to acquire, understand, and use knowledge: a person of extraordinary intelligence. 2. a. Information, especially secret information gathered about an actual or potential enemy or adversary. b. The gathering of such information: "Corporate intelligence relies on a slew of tools, some sophisticated, many quite basic" (Neil King and Jess Bravin). c. An agency or organization whose purpose is to gather such information: an officer from military intelligence. 3. An intelligent, incorporeal being, especially an angel. American Heritage® Dictionary of the English Language, Fifth Edition. Copyright © 2016 by Houghton Mifflin Harcourt Publishing Company. Published by Houghton Mifflin Harcourt Publishing Company. All rights reserved.

intelligence (ɪnˈtɛlɪdʒəns) n 1. (Psychology) the capacity for understanding; ability to perceive and comprehend meaning 2. good mental capacity: a person of intelligence. 3. old-fashioned news; information 4. (Military) military information about enemies, spies, etc 5. (Military) a group or department that gathers or deals with such information 6. (often capital) an intelligent being, esp one that is not embodied 7. (Military) (modifier) of or relating to intelligence: an intelligence network. [C14: from Latin intellegentia, from intellegere to discern, comprehend, literally: choose between, from inter- + legere to choose] inˌtelliˈgential adj Collins English Dictionary – Complete and Unabridged, 12th Edition 2014 © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003, 2006, 2007, 2009, 2011, 2014

in•tel•li•gence (ɪnˈtɛl ɪ dʒəns) n. 1. capacity for learning, reasoning, and understanding; aptitude in grasping truths, relationships, facts, meanings, etc. 2. mental alertness or quickness of understanding. 3. manifestation of a high mental capacity. 4. the faculty or act of understanding. 5. information received or imparted; news. 6. a. secret information, esp. about an enemy or potential enemy. b. the gathering or distribution of such information. c. the evaluated conclusions drawn from such information. d. an organization engaged in gathering such information: military intelligence. 7. (often cap.) an intelligent being or spirit, esp. an incorporeal one. [1350–1400; Middle English …] >> KF

PS: Observe Wikipedia: >>Intelligence has been defined in many ways, including: the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem solving. More generally, it can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context. Intelligence is most often studied in humans but has also been observed in both non-human animals and in plants. Human intelligence research belongs to the field of psychology. Intelligence in machines is called artificial intelligence, which is commonly implemented in computer systems using programs and, sometimes, appropriate hardware. >>

And again: >>In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals.
Colloquially, the term "artificial intelligence" is used to describe machines that mimic "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".[1] As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[2] A quip in Tesler's Theorem says "AI is whatever hasn't been done yet."[3] For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.[4] Modern machine capabilities generally classified as AI include successfully understanding human speech,[5] competing at the highest level in strategic game systems (such as chess and Go),[6] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

Artificial intelligence can be classified into three different types of systems: analytical, human-inspired, and humanized artificial intelligence.[7] Analytical AI has only characteristics consistent with cognitive intelligence; generating a cognitive representation of the world and using learning based on past experience to inform future decisions. Human-inspired AI has elements from cognitive and emotional intelligence; understanding human emotions, in addition to cognitive elements, and considering them in their decision making. Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence), is able to be self-conscious and is self-aware in interactions with others. >> kairosfocus
May 29, 2019 at 04:57 AM PDT
Logic & First Principles, 21: Insightful intelligence vs. computationalism (kairosfocus)
May 28, 2019 at 08:37 AM PDT