Uncommon Descent Serving The Intelligent Design Community

Does ID ASSUME “contra-causal free will” and “intelligence” (and so injects questionable “assumptions”)?


Those who have been following recent exchanges at UD will recognise that the headline summarises the current objection highlighted by objector RDFish, an AI advocate and researcher.

A bit of background will be useful; a clip from Luke Muehlhauser at the blog "Common Sense Atheism" will aid us in understanding the claim and its context:

Contra-causal free will is the power to do something without yourself being fully caused to do it. This is what most people mean by “free will.” Contra-causal free will is distinct from what you might call caused free will, which is the type of free will compatibilists like Frankfurt and Dennett accept. Those with caused free will are able to do what they want. But this doesn’t mean that their actions are somehow free from causal determination. What you want, and therefore how you act, are totally determined by the causal chain of past events (neurons firing, atoms moving, etc.) Basically, if humans have only caused free will, then we are yet another species of animal. If humans have contra-causal free will, then we have a very special ability to transcend the causal chain to which the rest of nature is subject.

This obviously reflects the underlying view expressed by William Provine in his well known 1998 U Tenn Darwin Day keynote address:

Naturalistic evolution has clear consequences that Charles Darwin understood perfectly. 1) No gods worth having exist; 2) no life after death exists; 3) no ultimate foundation for ethics exists; 4) no ultimate meaning in life exists; and 5) human free will is nonexistent . . . .  The first 4 implications are so obvious to modern naturalistic evolutionists that I will spend little time defending them. Human free will, however, is another matter. Even evolutionists have trouble swallowing that implication. I will argue that humans are locally determined systems that make choices. They have, however, no free will . . .

However, it is hard to see how such views — while seemingly plausible in a day dominated by a priori evolutionary Materialism and Scientism — can escape the stricture made by J B S Haldane at the turn of the 1930s:

“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.” [“When I am dead,” in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209.]

It is not helpful to saw off the branch on which we all must sit: in order to do science, as well as to think, reason and know we must be sufficiently free and responsible to be self-moved by insight into meanings and associated ground-consequent relationships not blindly programmed and controlled by mechanical necessity and/or chance, directly or indirectly. (It does not help, too, that the only empirically known, adequate cause of functionally specific, complex organisation and associated information — FSCO/I — is design.)

That is, we must never forget the GIGO-driven limitations of blindly mechanical cause-effect chains in computers:

[Figure: MPU (microprocessor unit) model]

. . . and in neural networks alike:

A neural network is essentially a weighted-sum interconnected gate array; it is not an exception to the GIGO principle
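The caption's point can be made concrete with a toy example (an illustrative sketch, not code from the post): a "neuron" is just a weighted sum pushed through a fixed squashing function, so identical inputs over identical weights always yield identical outputs.

```python
# Toy illustration: a single "neuron" is a weighted sum plus a fixed
# activation -- pure, repeatable cause-effect with no insight in the loop.
import math

def neuron(inputs, weights, bias):
    """Weighted sum followed by a logistic squashing function."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Same stimulus, same wiring -> same response, every time (GIGO applies).
out1 = neuron([0.5, -1.0], [0.8, 0.3], bias=0.1)
out2 = neuron([0.5, -1.0], [0.8, 0.3], bias=0.1)
assert out1 == out2   # deterministic computation, not contemplation
```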

That is, it is quite evident that for cause, we can reasonably conclude that mechanical cause-effect chain based computation is categorically distinct from self-aware, self-moved responsible, rational contemplation.

[U/D Aug. 21:] It will help to note the classic structured programming constructs, which — even if they incorporate a stochastic, chance-based process — are not examples of freely made, insight-based decisions (save those of the programmer) but instead are cases of blind, GIGO-limited computation based on programmed cause-effect sequences:

The classic programming structures, which are able to carry out any algorithmic procedure
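A minimal sketch of those constructs (sequence, selection, iteration) with a stochastic step thrown in; the function and seed here are illustrative assumptions, and the fixed seed shows that even the "chance" branch is mechanically reproducible:

```python
# Sequence, selection and iteration, plus a seeded "chance" step: every
# path is programmed cause-effect, including the stochastic branch.
import random

def process(data, seed=42):
    random.seed(seed)            # chance, but mechanically reproducible
    total = 0                    # sequence
    for x in data:               # iteration
        if x % 2 == 0:           # selection
            total += x
        else:
            total += random.randint(0, 9)   # stochastic, yet seed-fixed
    return total

# Identical seed and data give identical output: blind, GIGO-limited
# computation, with the only "decisions" being the programmer's.
assert process([1, 2, 3, 4]) == process([1, 2, 3, 4])
```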

In turn, that points to intelligence, an observed and measurable phenomenon.

This, too, is being stridently dismissed as a dubious metaphysically driven assumption; so let us note from an Educational Psychology 101 site:

E. G. Boring, a well-known Harvard psychologist in the 1920′s defined intelligence as whatever intelligence tests measure. Wechsler, one of the most influential researchers in the area of intelligence defined it as the global capacity of a person to act purposefully, to think rationally, and to deal effectively with his/her environment. Notice that there is a conative aspect to this definition. [–> AmHD: co·na·tion (k-nshn) n. Psychology The aspect of mental processes or behavior directed toward action or change and including impulse, desire, volition, and striving.] Many modern psychology textbooks would accept a working definition of intelligence as the general ability to perform cognitive tasks. Others might favor a more behaviorally-oriented definition such as the capacity to learn from experience or the capacity to adapt to one’s environment. Sternberg has combined these two viewpoints into the following: Intelligence is the cognitive ability of an individual to learn from experience, to reason well, to remember important information, and to cope with the demands of daily living.

That is, we have an empirically founded, measurable concept. One that sees major application in science and daily life.

Where, further, design can then be understood as intelligently, purposefully directed contingency — that is, design (and its characteristic outputs such as FSCO/I) will be manifestations of intelligent action. So, it is unsurprising to see leading ID researcher William Dembski remarking:

We know from experience that intelligent agents build intricate machines that need all their parts to function [–> i.e. he is specifically discussing “irreducibly complex” objects, structures or processes for which there is a core group of parts all of which must be present and properly arranged for the entity to function (cf. here, here and here)], things like mousetraps and motors. And we know how they do it — by looking to a future goal and then purposefully assembling a set of parts until they’re a working whole. Intelligent agents, in fact, are the one and only type of thing we have ever seen doing this sort of thing from scratch. In other words, our common experience provides positive evidence of only one kind of cause able to assemble such machines. It’s not electricity. It’s not magnetism. It’s not natural selection working on random variation. It’s not any purely mindless process. It’s intelligence  . . . . 

When we attribute intelligent design to complex biological machines that need all of their parts to work, we’re doing what historical scientists do generally. Think of it as a three-step process: (1) locate a type of cause active in the present that routinely produces the thing in question; (2) make a thorough search to determine if it is the only known cause of this type of thing; and (3) if it is, offer it as the best explanation for the thing in question. 

[William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, pp. 20-21, 53 (InterVarsity Press, 2010). HT, CL of ENV & DI.]

But, one may ask, why is it that FSCO/I and the like are observed as characteristic products of intelligence? Is that a mere matter of coincidence?

No.

Because of the blind, needle- in- haystack challenge (similar to that which grounds the second law of thermodynamics in its statistical form) faced by a solar system of 10^57 atoms or an observed cosmos of some 10^80 atoms, a 10^17 s blind chance and mechanical necessity driven search process faces empirically insuperable odds:

[Figure: CSI definition]

So, even the notion that our brains have been composed and programmed by a blind chance and necessity search process over 4 bn years of life on earth is dubious, once we see that FSCO/I beyond 500 – 1,000 bits faces a super-search challenge.
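The scale of that search challenge can be checked with a few lines of arithmetic; the 10^14 events-per-second rate is an assumed fast chemical-interaction rate, while the other figures are the post's own:

```python
# Rough resource-to-search-space arithmetic for the 500-bit threshold.
from math import log10

atoms = 10**57           # atoms in our solar system (post's figure)
seconds = 10**17         # time available in seconds (post's figure)
events_per_sec = 10**14  # assumed fast chemical-event rate per atom

trials = atoms * seconds * events_per_sec   # generous upper bound on searches
space = 2**500                              # configurations in 500 bits

# ~10^88 possible trials vs ~10^150 configurations: at most about
# 1 in 10^62 of the configuration space could ever be sampled.
print(log10(trials), 500 * log10(2))
```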

As for the notion that blind chance and mechanical necessity adequately account for the origin and diversification across major body plans, of cell based life, let the advocates of such adequately account — on observed evidence not a priori materialist impositions dressed up in lab coats — for something like protein synthesis (HT, VJT, onward thanks Wiki Media):

Protein Synthesis (HT: Wiki Media)

That is the context in which, on Sunday, I responded to RDF at 235 in the Do We Need a Context thread, as follows — only to be studiously ignored (as is his common tactic):

______________

>>I find it important to speak for record:

[RDF to SB:] . . . ID rests on the assumption of libertarianism, an unprovable metaphysical assumption

This characterisation of SB’s reasoning is false to the full set of options he puts on the table, but I leave answering that to SB.

What is more interesting is how you [–> RDF] switch from an empirical inference to projection of a philosophical assumption you reject, while ignoring something that is easily empirically and analytically verifiable. This strongly suggests that the root problem we face is ideological, driven and/or influenced by a priori evolutionary materialism [perhaps by the back door of methodological impositions] and/or its fellow travellers.

First, intelligence is a summary term for the underlying capacity of certain observed beings to emit characteristic behaviours, most notably to generate FSCO/I in its various forms.

For example, as your posts in this thread demonstrate, you understand and express yourself in textual language in accord with well known specifications of written English. It can be shown that it is extremely implausible for blind chance and/or mechanical necessity to stumble upon zones of FSCO/I in the sea of possible configurations, once we pass 500 – 1,000 bits of complexity. Where, as 3-d descriptions of complex functional objects can easily be reduced to strings [cf. AutoCAD etc], discussion on strings is WLOG (without loss of generality).

At no point in years of discussion have you ever satisfactorily addressed this easily shown point. (Cf. here.)

Despite your skepticism, the above is sufficient to responsibly accept the significance of intelligence per a basic description and/or examples such as humans and dam-building beavers or even flint-knapping fire-using omelette-cooking chimps — there is reportedly at least one such. Then there was a certain bear who was a private in the Polish Army during WW II. Etc.

Being human is obviously neither necessary to nor sufficient for being intelligent.

Nor, for that matter — given the significance of fine tuning of our observed cosmos from its origin — would it be wise to demand embodiment in a material form. Where also, it has been sufficiently pointed out — whether or not you are inclined to accept such — that a computational material substrate is not enough to account for insightful, self-aware rational contemplation.

We should not ideologically lock out possibilities.

Where also, the notion of "proof" — as opposed to warrant per inference to best explanation — is material. In both science and serious worldviews discussion, IBE is more reasonable as a criterion of reasonableness than demonstrative proof on premises acceptable to all rational individuals. The projection of such a demand, while one implicitly clings to a set of a prioris that are at least as subject to comparative-difficulties challenge, is selective hyperskepticism.

So, already we see a functional framework for identifying the attribute intelligence and using it as an empirically founded concept. One that is in fact a generally acknowledged commonplace. Let me again cite Wiki, via the UD WACs and Glossary as at 206 above . . . which of course you ignored:

Intelligence – Wikipedia aptly and succinctly defines: “capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.” . . . .

Chance – undirected contingency. That is, events that come from a cluster of possible outcomes, but for which there is no decisive evidence that they are directed; especially where sampled or observed outcomes follow mathematical distributions tied to statistical models of randomness. (E.g. which side of a fair die is uppermost on tossing and tumbling then settling.)

Contingency – here, possible outcomes that (by contrast with those of necessity) may vary significantly from case to case under reasonably similar initial conditions. (E.g. which side of a die is uppermost, whether it has been loaded or not, upon tossing, tumbling and settling.). Contingent [as opposed to necessary] beings begin to exist (and so are caused), need not exist in all possible worlds, and may/do go out of existence.

Necessity — here, events that are triggered and controlled by mechanical forces that (together with initial conditions) reliably lead to given – sometimes simple (an unsupported heavy object falls) but also perhaps complicated — outcomes. (Newtonian dynamics is the classical model of such necessity.) In some cases, sensitive dependence on initial conditions may lead to unpredictability of outcomes, due to cumulative amplification of the effects of noise or small, random/ accidental differences between initial and intervening conditions, or simply inevitable rounding errors in calculation. This is called "chaos."

Design — purposefully directed contingency. That is, the intelligent, creative manipulation of possible outcomes (and usually of objects, forces, materials, processes and trends) towards goals. (E.g. 1: writing a meaningful sentence or a functional computer program. E.g. 2: loading of a die to produce biased, often advantageous, outcomes. E.g. 3: the creation of a complex object such as a statue, or a stone arrow-head, or a computer, or a pocket knife.) . . . .

Intelligent design [ID] – Dr William A Dembski, a leading design theorist, has defined ID as "the science that studies signs of intelligence." That is, as we ourselves instantiate [thus exemplify as opposed to "exhaust"], intelligent designers act into the world, and create artifacts. When such agents act, there are certain characteristics that commonly appear, and that – per massive experience — reliably mark such artifacts. It is therefore a reasonable and useful scientific project to study such signs and identify how we may credibly and reliably infer from empirical sign to the signified causal factor: purposefully directed contingency or intelligent design . . .

Indeed, on just this it is you who have a burden of warranting dismissal of the concept.

Where also, design can be summed up as intelligently directed contingency that evidently targets a goal, which may be functional, communicative etc. We easily see this from text strings in this thread and the PCs etc we are using to interact.

Again, empirically well founded.

So, the concept of intelligent design is a reasonable one, and FSCO/I as reliable sign thereof is also reasonable.

In that context the sort of rhetorical resorts now being championed by objectors actually indicate the strength of the design inference argument. Had it been empirically poorly founded, it would long since have been decisively undermined on those grounds. The resort instead to debating meanings of widely understood terms and the like is inadvertently revealing.

But also, this is clearly also a worldviews level issue.

So, I again highlight from Reppert (cf. here on) why it is highly reasonable to point to a sharp distinction between ground-consequent rational inference and the blindly mechanical cause-effect chains involved in the operation of a computational substrate such as a brain and CNS:

. . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [[But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [[so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.

Unless we are sufficiently intelligent to understand and infer based on meanings, and unless we are also free enough to follow rational implications or inferences rather than simply carry out GIGO-limited computational cause-effect chains, rationality itself collapses. So, any system of thought that undermines rationality through computational reductionism, or through dismissing responsible rational freedom is delusional and self referentially incoherent.

You may wish to dismissively label responsible freedom as “contra-causal free will,” or the like and dismiss such as “unprovable.” That is of no effective consequence to the fact of responsible rational freedom that is not plausibly explained on blindly mechanical and/or stochastic computation. Which last is a condition of even participating in a real discussion — I dare to say, a meeting of minds.

That is, we again see the fallacy of trying to get North by heading due West.

It is time to reform and renew our thinking again in our civilisation, given the patent self-refutation of the ever so dominant evolutionary materialism. As Haldane pointed out so long ago now:

"It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter." ["When I am dead," in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209.]

It is time for fresh, sound thinking.  >>

______________

I actually think this is a good sign. In the 1980s and '90s, as Marxism gradually crumbled, many Marxists redoubled their efforts, until the ship went down under them. So, the trend whereby objections to the design inference are now commonly rooted in hyperskeptical challenges to common-sense, empirically warranted concepts such as design, intelligence and functionally specific, quantifiably complex organisation and associated information points to the gradual crumbling of the objector case on the actual empirical and analytical merits. END

Comments
*sigh* A simple switch. Push it up, one effect. Push it down, the opposite effect. This idea is at the root of all electronics. It's what allows people to communicate over the internet. And yet no one believes their communications are determined. Take a common light switch. Push it up, the lights come on; push it down, the lights go off. Unscrew the wall plate, carefully pull out the switch, rotate it 180 degrees, reseat it and reattach the wall plate. There has been NO CHANGE to the wiring. Yet now pushing the switch down turns the light on and pushing it up turns the light off. ON and OFF are INFORMATIONAL terms. Apparently being a Popperian entails groping around in the dark.

Mung
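Mung's rotated-switch observation can be sketched in a few lines (an illustrative toy, not from the thread): the wiring fixes the physics, but which lever position counts as "ON" is a convention layered on top.

```python
# The circuit closes in one lever position; rotating the switch 180 degrees
# flips which position that is, without any change to the wiring. "ON" and
# "OFF" live in the mapping (information), not in the copper.
def circuit_closed(lever_up, inverted=False):
    """Physical fact: the contacts close in exactly one lever position."""
    return lever_up != inverted   # rotation flips the position-to-state map

# Same wiring, switch rotated: up now means the lights are OFF.
assert circuit_closed(lever_up=True, inverted=False) is True
assert circuit_closed(lever_up=True, inverted=True) is False
```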
September 12, 2014, 10:23 PM PDT
franklin wins the prize. almost.

Mung
September 9, 2014, 4:10 PM PDT
Mung:

Have you figured out yet whether a light switch must be in the up position or in the down position in order to turn on the lights?

It can be in either position, given what little information you've provided. For example, I'm looking at a light switch which turned the lights on this morning in the up position and this evening will turn the light on in the down position. The only answer possible, given the lack of detail provided, is 'it depends'.

franklin
September 7, 2014, 5:45 PM PDT
Mung, Since Daniel's response is technically accurate, yet supposedly wrong, you must be working with some kind of theory to interpret those observations. What theory that seems problematic are you criticizing? That's our starting point. Here's another response: the orientation the switch was in when installed in the wall. Is that wrong too? If so, why?

Popperian
September 7, 2014, 4:17 PM PDT
The position of the switch.

Daniel King
September 7, 2014, 12:54 PM PDT
DK, Wrong. Try again.

Mung
September 7, 2014, 10:48 AM PDT
What determines that pushing the switch to the up position will turn the lights on rather than turning the lights off? The wiring. Next question?

Daniel King
September 7, 2014, 10:32 AM PDT
P: Correct me if I'm wrong, but you appear to assume we would have first intended to design a computer to create explanations. And since we didn't, it can't.
Here's another example. A coffee shop I frequent was out of knives this morning, so they gave me a fork with my bagel and cream cheese. A fork was not designed to be a knife, yet I was still able to use it to solve the same problem: spreading cream cheese on a bagel. Was it as effective as a knife? No. Was I able to use the fork's handle as a knife? Yes. These are the observations that ID proponents seem to ignore, which are better explained by conjecture and criticism. Just as we cannot guarantee any solution we conjecture to solve a specific problem will be successful at solving that problem, we cannot guarantee that same solution will not be successful in solving some other problem we did not intend to solve, either. This is known as the law of unintended consequences. For example, the sinking of ships in shallow waters in wartime has resulted in many artificial coral reefs, which are beneficial for the environment, have attracted divers to the area, etc. Yet ships were designed with the explicit (and polar opposite) purpose of staying afloat. On a side note: when I mentioned I could use the handle of the fork as a knife, the barista said "you must be an engineer."

Popperian
September 7, 2014, 7:59 AM PDT
Mung, I'm quite capable of understanding the simple task of bringing light into a room. Again, how is that relevant? Observations of a light switch in operation are just that - observations. They don't imply anything outside of a theory. So, what problem are we trying to solve? What theory are we criticizing? IOW, I'm asking you to make explicit some implicit theory you're applying to those observations.

Popperian
September 7, 2014, 7:08 AM PDT
Popperian, You appear incapable of understanding the simple task of bringing light to an unlit room. So why should anyone here give any regard to anything you write? A light switch. What determines that pushing the switch to the up position will turn the lights on rather than turning the lights off?

Mung
September 6, 2014, 7:15 PM PDT
P: You have provided no substantiation.
No substantiation of what? You seem to be agreeing with me, yet disagreeing with me. Again, it's not clear that we're talking about the same thing. For example... I have no doubt that computational systems, suitably programmed, can do various (and often wonderful) things as designed, but that is worlds different from exhibiting self aware, insightful, reasoned contemplation and independent creative thought, decision and action. I agree that a computer will never exhibit Artificial General Intelligence following the same conception of knowledge you mentioned in [232]: justified true belief. That was the point of the article. We need a major breakthrough in philosophy. And the field is held back due to the lack of wider adoption of an existing philosophical breakthrough already made by Popper. Furthermore, your objection illustrates a key problem with Intelligent Design. Correct me if I'm wrong, but you appear to assume we would have first intended to design a computer to create explanations. And since we didn't, it can't. But that's a theory about how the world works, which comes first. That's how you're interpreting your observations. However, Charles Babbage didn't intend to design what would have been the first Universal Turing Machine (UTM), had he actually managed to build it. He stumbled upon it while trying to build a way to perform calculations more accurately and efficiently. It was only much later that Alan Turing realized the great importance of what Babbage had stumbled upon. Computation does not depend on a particular implementation such as transistors, or even cogs in the case of Babbage. Computers "work" because of a deeper principle: the universal principle of computation. For example, no one specially designed an IBM PowerPC processor so it could emulate an Intel 386 processor using Parallels virtualization software - which is something I did regularly while I was running Windows XP on my PPC Mac to test websites on Internet Explorer.
This universality emerges from a particular repertoire of computations and allows any UTM to emulate any other UTM. This is known as the Church-Turing principle. A stronger version of this principle, the Church-Turing-Deutsch (CTD) principle, has been developed based on developments in quantum computation. To summarize the principle:
‘every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means’
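The emulation idea in that principle can be sketched with a toy interpreter (an illustrative example, not from the thread): a universal machine runs a program that defines another machine's behaviour, independent of what the host is built from.

```python
# A tiny interpreter for a made-up instruction set, written in Python,
# which itself runs on yet another machine: universality means any
# universal computer can emulate any other, per the Church-Turing thesis.
def run(program, x):
    """Interpret a toy instruction set; each step rewrites the value x."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

# The "emulated machine" computes 3 * (x + 2); the host neither knows
# nor cares what substrate it runs on.
prog = [("add", 2), ("mul", 3)]
assert run(prog, 4) == 18
```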
So, the very same, deepest theory of why computers work, which explains the observations we've both experienced and the universality of computers, implies that AGI is possible. It explains why we would expect computers to be able to do things we've never observed them doing before.

Popperian
September 6, 2014, 6:55 PM PDT
M: It would seem then that inference would be essential to the sciences.
Are you genuinely asking a question? I'm asking because I've already made a distinction in regards to empirical observations in [175].
In the history of science, empiricism was an improvement in that it helped promote the importance of empirical observations in science. However, it got the role those observations play backwards. Theories are tested by observations, not derived from them.
Criticism, in some form, is essential to the universal creation of knowledge. In the case of scientific knowledge, rational criticism includes devising and performing tests via empirical observations. But the contents of theories do not come from observations themselves. We take theories seriously, as if they were true in reality, for the purpose of criticism, in that all of our observations should conform to them. But, for this to occur, we must also take into account all of our other, current best theories as well. So, inference comes into play as we infer what observations we should expect if our theories about how the world worked, in reality, really were true. That's criticism. For example, take the claim that inductive reasoning is the underlying basis for ID in this very thread. For the purpose of criticism, we can take that theory seriously, as if it were true in reality, and that all observations should conform to it. Specifically, entities with complex, material brains are the one and only type of thing we've seen exhibiting intelligence. So, if one really used inductive reasoning, we would infer that one would also claim we supposedly "know from experience" that all intelligence requires complex material brains. Yet I'm guessing that doesn't seem to be the case. So, this theory doesn't survive rational criticism. The same can be said about other criticism I've presented about inductive reasoning. We can take the idea that we use inductive reasoning seriously, as if it were true in reality. Specifically, if we could use evidence to prove any theory is more probable, then a piece of evidence would need to be compatible with only one theory. However, this isn't the case, as evidence is compatible with a number of theories, including an infinite number we haven't even conceived of yet. So this idea doesn't withstand rational criticism. Any probability calculus can only be based on an explanation that tells us where the probability comes from.
Those numbers cannot be the probability of that explanation itself. They are only applicable in an intra-theory context, in which we assume the theory is true for the purpose of criticism.

Popperian
September 6, 2014, 6:53 PM PDT
Popperian:
My perspective is that science isn’t primarily about stuff you can observe, yet empirical observations play an important role.
It would seem then that inference would be essential to the sciences.

Mung
September 4, 2014, 10:12 AM PDT
P: You have provided no substantiation. I have no doubt that computational systems, suitably programmed, can do various (and often wonderful) things as designed, but that is worlds different from exhibiting self aware, insightful, reasoned contemplation and independent creative thought, decision and action. As for the issues on inductive reasoning, you are the one who stated that induction is "impossible." I have simply pointed out what induction is [you put up an outdated view], and pointed out that to reject induction is to surrender to a general delusion fallacy, with reasons. And, as an empirically controlled cluster of disciplines that are subject to correction and adjustment, the sciences are inductive. KF

kairosfocus
September 4, 2014, 10:11 AM PDT
Hi Popperian, Have you figured out yet whether a light switch must be in the up position or in the down position in order to turn on the lights?

Mung
September 4, 2014, 10:10 AM PDT
I put it to you that your difference of perspective has nothing to do with the actual empirical and analytical facts, which fully substantiates my point that mechanical computation, whether analogue or digital, is a blind, non insight based cause effect processing of signals per the underlying dynamics that some designer has harnessed, not at all a matter of rational insight, understanding of meaning, and inference based on that contemplation. To pretend or to actually imagine otherwise, is patently without merit. KF
My perspective is that science isn't primarily about stuff you can observe, yet empirical observations play an important role. I've said that explicitly. And I've explained that I hold that view based on rational criticism. So, apparently, you're not actually reading what I've written. Or perhaps you're unable to conceive of a non-foundationalist epistemology?

Popperian
September 4, 2014 09:18 AM PDT
Do you see anything in the fetch decode execute cycle of a micro, whether hard wired or micro-coded that escapes the premise of blindly mechanical processing with possibilities of glitches [PSU glitches being a fav prob], thus giving a combination of blind mechanical necessity and chance? If you do so, kindly explain it.
Did you actually read the referenced article? There are two positions it criticizes. 01. The "Artificial General Intelligence (AGI) is impossible" camp, which appears to be your position.
Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation. This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory. The first people to guess this and to grapple with its ramifications were the 19th-century mathematician Charles Babbage and his assistant Ada, Countess of Lovelace. It remained a guess until the 1980s, when I proved it using the quantum theory of computation.
02. The "AGI is imminent" camp: we just need faster computers with more memory, along with solving an "integration problem."
For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible.
The article suggests that both camps are wrong. It explains why the hardware you described doesn't result in AGI. More important to the subject of inductive reasoning, any AGI would represent the ability to create explanatory knowledge. That's the key point. Such a process isn't a function of its inputs and outputs. You get more out than you put in. From the article...
Traditionally, discussions of AGI have evaded that issue by imagining only a test of the program, not its specification — the traditional test having been proposed by Turing himself. It was that (human) judges be unable to detect whether the program is human or not, when interacting with it via some purely textual medium so that only its cognitive abilities would affect the outcome. But that test, being purely behavioural, gives no clue for how to meet the criterion. Nor can it be met by the technique of ‘evolutionary algorithms’: the Turing test cannot itself be automated without first knowing how to write an AGI program, since the ‘judges’ of a program need to have the target ability themselves. (For how I think biological evolution gave us the ability in the first place, see my book The Beginning of Infinity.) And in any case, AGI cannot possibly be defined purely behaviourally. In the classic ‘brain in a vat’ thought experiment, the brain, when temporarily disconnected from its input and output channels, is thinking, feeling, creating explanations — it has all the cognitive attributes of an AGI. So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs. The upshot is that, unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.
However "evolutionary algorithms" can create non-explantory knowledge, which is the sort of knowledge that biological Darwinism creates. That's a different thread. See [142] or [153].Popperian
September 4, 2014 09:07 AM PDT
P: I don't have a lot of time just now, but I will ask you this. Do you see anything in the fetch decode execute cycle of a micro, whether hard wired or micro-coded that escapes the premise of blindly mechanical processing with possibilities of glitches [PSU glitches being a fav prob], thus giving a combination of blind mechanical necessity and chance? If you do so, kindly explain it. Then also, do the same for something like a Thompson Ball and disk integrator, or an op amp ckt, or an artificial neural network based on summing gates. Compare the latter with the electrochemical neural networks in CNSes and explain to me where in this there is any empirically or analytically warranted basis for saying that any of these amounts to more than blind mechanism and chance processes [often, noise]. I put it to you that your difference of perspective has nothing to do with the actual empirical and analytical facts, which fully substantiates my point that mechanical computation, whether analogue or digital, is a blind, non insight based cause effect processing of signals per the underlying dynamics that some designer has harnessed, not at all a matter of rational insight, understanding of meaning, and inference based on that contemplation. To pretend or to actually imagine otherwise, is patently without merit. KFkairosfocus
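For readers who have not built one, the fetch-decode-execute cycle KF invokes can be sketched in a few lines of Python. This is a toy instruction set invented purely for illustration (not any real micro), but it captures the point under dispute: every step of the cycle is a blind, table-driven state change, with no understanding anywhere in the loop.

```python
# Toy fetch-decode-execute loop. The instruction set (LOAD/ADD/STORE/HALT)
# is invented for illustration; a real processor does the same kind of
# thing in hardware. Each cycle is a blind, mechanical state change.

def run(program, memory):
    acc = 0   # accumulator
    pc = 0    # program counter
    while True:
        op, arg = program[pc]     # fetch
        pc += 1
        if op == "LOAD":          # decode + execute by rote lookup
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# adds memory[0] and memory[1], stores the sum in memory[2]
result = run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)],
             {0: 2, 1: 3, 2: 0})
print(result)  # {0: 2, 1: 3, 2: 5}
```

Nothing in the loop depends on what the program means; it is cause-effect signal processing throughout, which is precisely the feature the two commenters are arguing over.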
September 4, 2014 08:21 AM PDT
None of this is novel or even significantly controversial, it is the stuff of basic real world science and of science education from grade school up, with elaborate techniques being overlaid on the basic inductive logic involved and outlined above.
First, you're again pointing to definitions, not making an argument that refutes the criticisms presented. "Everyone knows we use inductive reasoning" isn't an argument. Nor do I (or did Popper) claim to be a conventionalist. Second, by claiming it's not "even significantly controversial", you're denying that we've made progress, via the field of epistemology and philosophy of science, beyond even what you just described. Namely, we can be more specific about the roles the cast of characters you presented (Observations, Hypotheses, Inference / Prediction and Empirical Tests) actually play:
We start out with a problem, conjecture an explanatory theory about how the world works, in reality, that we hope will solve it, then criticize that theory and discard errors we might find. In the case of science, criticism takes the form of empirical observations. Then, the process starts again when new observations indicate there is a problem in one of our ideas.
For example, we do not know where to start looking without first having some kind of existing theory which has become problematic. Your description doesn't even address this. Again, science isn't primarily about stuff you can observe. From [177]:
For example, are dinosaurs merely an interpretation of our best explanation of fossils? Or are they *the* explanation for fossils? We never speak of the existence of dinosaurs, millions of years ago, as an interpretation of our best theories of fossils. Rather, we say that dinosaurs are *the* explanation for fossils. Nor is the theory primarily about fossils, but about dinosaurs, in that they are assumed to actually exist as part of the explanation. And we do so despite the fact that there are an infinite number of rival interpretations of the same data that make all the same predictions, yet say the dinosaurs were not there, millions of years ago, in reality.
Trying to portray me as suggesting facts are not important is a strawman. The role they play is where we disagree.
“All the ‘facts’ Darwin used as evidence for his theory of evolution were known before he used them … What Darwin contributed was a profoundly radical way of rearranging these materials” (p38). - Hughes, 1990
Popperian
September 4, 2014 04:40 AM PDT
KF: Perhaps it has not registered, that I have studied and even designed and built computer processors from hardware ground up, that I worked in Electronics for years, and as such have a firm understanding of how such processing by refined rock — analogue or digital — occurs.
I've built simple microprocessors with IO ports as part of a microcomputer repair program and board level repaired computers for 3 years. Currently, I program mobile devices as an independent consultant. Yet, we seem to have different outlooks on the subject.
The attempt to lift oneself out of the swamp of cause-effect bond, blind GIGO-limited processing to attain to the contemplative, insightful processes required for actual inferences on real thought and decision, is futile.
Again, the article's criticism was that it's futile to think we can create artificial general intelligence under the very assumption you hold: knowledge is justified true belief. So, it comes as no surprise that you'd think AGI futile. That assumption is the theory you use to interpret the same observations I've made about computers. Deutsch's point is that we don't have AGI because we're not asking the right questions, not because it's impossible.
In fact it is shutting one's eye to a fact that is as obvious as the sun: processors have designers, and programmers, analogue computers ditto, and there is no good reason to see the FSCO/I in neural networks that makes them do their processing, is any different. FSCO/I simply is not credibly going to come about by blind chance and necessity.
You're not taking into account (shutting one's eye to) the distinction I've made between non-explanatory and explanatory knowledge. An error correcting system of biological Darwinism (variation and selection) is not merely random. Rather, it's random with respect to any particular problem to solve. Nor is it clear what credibility has to do with it. What we want is the contents of theories, not their provenance.
KF:Which may well go a long way towards explaining the puzzling hostility to inductive, evidence controlled reasoning in your remarks.
Did you induce that from observations? I'm asking because it's wrong. Before you could induce a false conclusion, you had to first hold a false theory, that I was hostile to inductive reasoning. But I'm not "hostile". I'm merely presenting criticisms of inductive reasoning.Popperian
September 4, 2014 04:26 AM PDT
KF: I give not one whit for Popper’s views as such, insofar as there is a reasonable basis to accept a major domain of reasoning, inductive reasoning and insofar as Popper’s views as represented by you try to deem such reasoning “impossible.”
Ok, then how is it reasonable? In the absence of a "principle of induction" that works, in practice, it's unclear how it's possible. That's the criticism. Nor is "Everyone knows we use inductive reasoning" an argument. Furthermore, the idea that anything "is obvious", let alone inductivism, is a philosophical position. Again, if you want to call conjecture and criticism "inductive reasoning", that's fine. But that only serves to confuse the issue, as you have now. Also, are you suggesting you're not aware of the outstanding problems with justificationism? Or perhaps you're aware of them, but simply cannot conceive of any other way we might know things. However, incredulity is not an argument.
KF: In short, I will always reject the grand delusion fallacy type spectacular assertion, which seems to be a hubris of modern scholarship.
So, you have a no-concession policy on inductive reasoning? Also, you accept intelligent design, which rejects the modern scholarship of evolution? Or perhaps the scholarship you accept or reject is actually dependent on something else?
Any time one asserts a grand delusion, one decisively undermines thought and rationality including one’s own — an own-goal. So, any species of grand delusion can be dismissed at once as self-refuting. So, your assertion that inductive reasoning — abundantly in evidence all around us — is “impossible,” is patently absurd.
The idea that we know by experience that induction works is circular reasoning: induction must work because we've always experienced it working in the past. But, again, the same evidence is also compatible with multiple theories, including conjecture and criticism. You're presenting a false dilemma by assuming I must be a disappointed justificationist and, therefore, must undermine rationality. From this paper on "Justificationism and the Abuse of Reason"....
3. Responses to the dilemma of the infinite regress versus dogmatism

In the light of the dilemma of the infinite regress versus dogmatism, we can discern three attitudes towards positions: relativism, "true belief" and critical rationalism [Note 3]. Relativists tend to be disappointed justificationists who realise that positive justification cannot be achieved. From this premise they proceed to the conclusion that all positions are pretty much the same and none can really claim to be better than any other. There is no such thing as the truth, no way to get nearer to the truth and there is no such thing as a rational position.

True believers embrace justificationism. They insist that some positions are better than others though they accept that there is no logical way to establish a positive justification for a belief. They accept that we make our choice regardless of reason: "Here I stand!". Most forms of rationalism up to date have, at rock bottom, shared this attitude with the irrationalists and other dogmatists because they share the theory of justificationism.

According to the critical rationalists, the exponents of critical preference, no position can be positively justified but it is quite likely that one (or more) will turn out to be better than others in the light of critical discussion and tests. This type of rationality holds all its positions and propositions open to criticism, and a standard objection to this stance is that it is empty; just holding our positions open to criticism provides no guidance as to what position we should adopt in any particular situation. This criticism misses its mark for two reasons. First, critical rationalism is not a position. It is not directed at solving the kind of problems that are solved by fixing on a position. It is concerned with the way that such positions are adopted, criticised, defended and relinquished.
Second, Bartley did provide guidance on adopting positions; we may adopt the position that to this moment has stood up to criticism most effectively. Of course this is no help for people who seek stronger reasons for belief, but that is a problem for them, and it does not undermine the logic of critical preference.
IOW, you're projecting your problem, as a justificationist, on me.
And if Popper is on the same ground, he is just as wrong.
Ok, then what points do you disagree with? I haven't seen any response that actually addresses the arguments I've presented. "No it's not" isn't an argument.
I find it astonishing just how often objectors to the design inference score such own goals by way of grand delusion claims or implications. If, to reject design inferences, one is forced into absurdities, that is not a reason to cling to the absurdities, but evidence that the rejections are in grave error. KF
For the umpteenth time, suggesting one is confused about how knowledge grows is not the same as suggesting there is no knowledge. Nor am I suggesting that empirical observations do not play an important role. It's as if no one is actually reading my comments. (tap - tap - tap. Is this thing on?)Popperian
September 4, 2014 04:24 AM PDT
Mung: Ah yes. Popper, the authoritative source.
You're disingenuously conflating pointing out that Popper has rejoinders for these objections with an appeal to authority.
Mung: I’m sorry Popperian, but in order to show that induction is impossible you’ll need to visit every instance of induction that has ever been and then draw an inference from those instances to your conclusion.
You're funny. Wait. That's a joke, right? I mean, no number of single observations can justify a universal. That's the crux of the issue. For example, take the universal theory that gravity operates the same way everywhere in the universe. While we have an overwhelming number of observations of gravity in our local vicinity, that's not even a drop in the bucket compared to the astronomical number of places in the universe where we haven't measured it. So, using that "logic", one could claim the uniformity of gravity is astronomically unlikely. See how that works (or should I say, doesn't work)? The idea that gravity is uniform is a conjectured theory. All of the overwhelming number of local observations represent an overwhelming amount of criticism, which the theory has survived. So, we've adopted it. But we know GR is wrong, because it conflicts with quantum mechanics. Furthermore, our explanation for space-time itself includes gravity. Einstein predicted gravitational lensing, something we had never observed before, via theory. It's an implication of our current best theories about how the world works, in reality.Popperian
September 4, 2014 04:22 AM PDT
F/N: It seems we need to go back to grade school science basics, informed by Newton's classic remarks in Opticks Query 31 as already cited: >>Updating the language and simplifying how Newton described the generic methods of modern science, we may use the acronym, O, HI PET:
O – Observations (as accurate as we can get) are the anchor for science, allowing us to spot patterns, make measurements and test explanations.
H – Hypotheses (educated guesses) are made to explain the patterns we observe, and are then compared to see which is the best current explanation.
I, P – Inference and Prediction (based on logic and mathematics) allow us to see the expected consequences of hypotheses in new situations.
ET – Empirical Tests (through experiments we set up and carry out, and/or further observations we make in new situations) allow us to compare hypotheses to see which is best supported, and so choose the best explanations.
Bodies of more or less well-supported explanations form scientific models and theories. Such models and theories always have strengths, limitations and weaknesses, so scientific research is an ongoing exercise.
Of course, as “best current explanation” implies [and as Newton emphasised], scientific knowledge claims are provisional, subject to correction and development, or even replacement based on further work. This comes out repeatedly in the survey above, as we see how new work repeatedly partly builds on and partly corrects or replaces old thought.

When one carries out a scientific investigation, then s/he first needs to clarify what s/he is trying to do, in light of the classic sequence of scientific work: describe, explain, predict, control. Typically, one may try to: explore, observe and accurately describe or measure facts or quantities; or explain observed patterns and test the reliability of such models; or use the ability to predict to influence or control the way a situation plays out.

This naturally leads to the design of an investigation. Exploratory exercises focus on getting a balanced, accurate view of what is “there,” and so emphasise fieldwork and recording of accurate facts and measurements where measurement is possible. Explanatory hypotheses or models that show patterns and perhaps the driving forces that cause them may be suggested based on “known” observations and measurements, and techniques for testing them may be recommended for “further work.” Experiments or observation studies may be designed and carried out, to test such models against further real world observations and measurements. Once an empirically reliable model – one that accurately predicts outcomes in new situations – has been developed, it can be used to set up and control further situations. But, always, it remains provisional . . . >> None of this is novel or even significantly controversial, it is the stuff of basic real world science and of science education from grade school up, with elaborate techniques being overlaid on the basic inductive logic involved and outlined above. KFkairosfocus
September 2, 2014 07:47 AM PDT
PS: Perhaps it has not registered, that I have studied and even designed and built computer processors from hardware ground up, that I worked in Electronics for years, and as such have a firm understanding of how such processing by refined rock -- analogue or digital -- occurs. Analogue processors are similar, as are neural networks, which are weighted sum gates. The attempt to lift oneself out of the swamp of cause-effect bond, blind GIGO-limited processing to attain to the contemplative, insightful processes required for actual inferences on real thought and decision, is futile. It is a case of trying to get North by going due West. In fact it is shutting one's eye to a fact that is as obvious as the sun: processors have designers, and programmers, analogue computers ditto, and there is no good reason to see the FSCO/I in neural networks that makes them do their processing, is any different. FSCO/I simply is not credibly going to come about by blind chance and necessity. Not on the gamut of solar system or observed cosmos. But if one insists on clinging to substituting imagination and unfounded wishes for actual empirical evidence grounding, one ends up in uncontrolled speculation. Which may well go a long way towards explaining the puzzling hostility to inductive, evidence controlled reasoning in your remarks. KFkairosfocus
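The "weighted sum gates" KF mentions can be illustrated with a single threshold unit in Python. The weights and bias below are hypothetical values, chosen by hand so that the unit happens to compute logical AND; the unit itself only does arithmetic.

```python
# One "summing gate" neuron: a weighted sum of inputs pushed through a
# hard threshold. The weights/bias are illustrative values picked by a
# designer so that the unit computes logical AND.

def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
# 0 0 -> 0
# 0 1 -> 0
# 1 0 -> 0
# 1 1 -> 1
```

The arithmetic is identical whether realised in op amps, digital logic, or a software library; whatever meaning the output has was fixed by whoever chose the weights, which is the role both sides of this exchange assign to the designer or programmer.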
September 2, 2014 05:56 AM PDT
P: I give not one whit for Popper's views as such, insofar as there is a reasonable basis to accept a major domain of reasoning, inductive reasoning, and insofar as Popper's views as represented by you try to deem such reasoning "impossible." In short, I will always reject the grand delusion fallacy type spectacular assertion, which seems to be a hubris of modern scholarship. Any time one asserts a grand delusion, one decisively undermines thought and rationality including one's own -- an own-goal. So, any species of grand delusion can be dismissed at once as self-refuting. So, your assertion that inductive reasoning -- abundantly in evidence all around us -- is "impossible," is patently absurd. And if Popper is on the same ground, he is just as wrong. I find it astonishing just how often objectors to the design inference score such own goals by way of grand delusion claims or implications. If, to reject design inferences, one is forced into absurdities, that is not a reason to cling to the absurdities, but evidence that the rejections are in grave error. KFkairosfocus
September 2, 2014 05:45 AM PDT
I'm sorry Popperian, but in order to show that induction is impossible you'll need to visit every instance of induction that has ever been and then draw an inference from those instances to your conclusion.Mung
September 1, 2014 04:41 PM PDT
I long since laid out the foundations for inductive reasoning (which is a major aspect of reasoning), showing why Popper’s project is ill-founded;
Again, you’ve presented misconceptions of Popper’s views, which have already been addressed. For example, critical preference takes into account empirical observations. However, it does so in a non-justificationist way. This suggests something significantly different is going on in reality. Good theories make prohibitions. The more prohibitions a theory contains, the more ways it can be found wrong. However, ID’s designer is abstract and has no defined limitations. It’s a bad explanation because it’s easily varied.
The computer does just that, it computes, it cannot understand or explain. It is irrelevant to inductive reasoning. It simply calculates out the step by step sequence programmed in it, blindly and mechanically, per GIGO.
Again, we seem to be arguing past each other. Did you actually read the article or even the entire excerpt? The article does not suggest creating artificial general intelligence (AGI) is impossible, which you seem to suggest in your response. Rather, it argues the field of AGI is being held back by an approach based on your description of knowledge: justified true belief.
For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible.
Again, my objection is the idea that we use observations to form the contents of theories. This idea has been appealed to multiple times in this thread to exclude alternative explanations for the appearance of design. Specifically, any piece of evidence is compatible with multiple theories. This includes an infinite number that have yet to be proposed. As such, evidence cannot be used to prove a theory or make it more probable. Furthermore, what counts as repetition is not a sensory experience. Rather, it’s based on theory. In addition, probabilities can only be assigned based on an explanation that tells us where the probability comes from. Those numbers cannot be the probability of that explanation itself. They are only applicable in an intra-theory context, in which we assume the theory is true for the purpose of criticism. You seem to have missed or chosen to ignore this subtle but important difference. Are these not claims based on what you consider “inductive reasoning” and therefore relevant? Given the above, if you want to retreat even further and call conjecture and criticism “inductive reasoning”, that’s fine. But that only serves to confuse the issue. Nor does it reflect the sort of objections made here to evolutionary theory.Popperian
September 1, 2014 09:18 AM PDT
P: In a pause during a web hunt. The issue is not the corpus of Popper's papers and views -- slip sliding away to a new subject again. I trust you will at least acknowledge that appealing to brain processing etc is off the table, in response to 182. The focal issue for this side discussion is your dismissal -- in the name of Popper, I note [cf your handle] -- that claims that induction is "impossible," and that there is no principle of induction, compounded by an outdated strawmannish caricature of what inductive reasoning is understood to be. All of which were documented above. In reply, I have shown why there is a longstanding principle of provisional universality, and how a sufficient sample of experience or observation can reveal a pattern, allowing a best explanation to be developed and tested for reliability. Under those circumstances, we can have support for a conclusion that in certain cases is morally certain. In others -- notably scientific theorising -- the degree of provisionality is higher. Especially as regards grand narratives concerning the unobserved past of origins. Which is too often presented as fact fact FACT. Your problem is to back your hand on how induction is impossible -- your term -- and the answer so far is, you have no "backative" . . . as they say in Ja. KFkairosfocus
August 31, 2014 02:58 PM PDT
IOW, your criticism of Popper suggests you are unfamiliar with his books and papers.
Ah yes. Popper, the authoritative source.Mung
August 31, 2014 02:29 PM PDT
Popperian:
I’m not quite sure what you mean by “translated” into a physical effect. Can you elaborate?
UB:
The arrangement of an informational medium evokes an effect within a system capable of producing that effect. The arrangement of the medium evokes the effect, but a second arrangement within the system determines what the effect will be. In order for the system to function, the organization of the system must preserve the physical discontinuity between the two. That is how information is translated into a physical effect.
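UB's two-arrangement picture can be sketched in code. In this toy model (the lookup table is a hypothetical fragment of the genetic code, used only as an illustration), the arrangement of the medium evokes an effect, while a second, physically independent arrangement, the table, determines what that effect is:

```python
# Sketch of the two arrangements UB describes. The medium (a codon
# sequence) evokes effects only via a second arrangement (the code
# table); change the table and the same medium yields different effects.

CODE_TABLE = {  # second arrangement: determines WHAT the effect is
    "AUG": "Met",
    "UUU": "Phe",
    "GGC": "Gly",
}

def translate(medium):
    # the medium's arrangement selects effects but does not contain them
    return [CODE_TABLE[codon] for codon in medium]

print(translate(["AUG", "UUU", "GGC"]))  # ['Met', 'Phe', 'Gly']
```

The "physical discontinuity" UB mentions shows up here as the fact that nothing in the medium's arrangement determines the table's contents, or vice versa; only the organization of the whole system ties the two together.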
Popperian:
I’m still not clear as to what you mean by translated into physical effect.
Mung:
If you want the light to come on, do you flip the light switch up or do you flip the light switch down? (c.f. WJM @ 220)
Popperian:
How is that relevant?
Does the context help you see the relevance? Is it that the effect is too vague and mysterious? Have you never turned on a light before, or turned one off? Or do you not see how it is a physical effect? Perhaps you call on spirits to make the light shine rather than flipping a light switch? Do you understand the concept of electricity? Of the electric light? Of the relationship between light switches and lights going on and off? Help me out here. What concepts are you having difficulty with?Mung
August 31, 2014 02:26 PM PDT