First, a happy thanksgiving.
Then, while digesting turkey etc, here is something to ponder.
One of the underlying issues surrounding the debates over the design inference is the question of responsible, rational freedom as a key facet of intelligent action, as opposed to blind chance and/or mechanical necessity. It has surfaced again, e.g. the WD400 thread.
Some time back, this is part of how I posed the issue, emphasising the difference between self-aware responsible freedom and blindly mechanical causal chains used in computing:
Even if deluded about circumstances, a self-aware being is just that: self-evidently, incorrigibly self-aware. And a key facet of that self-awareness is awareness of responsible, rational freedom. Without it, we could not choose to follow and accept a rational case; we would just be mechanically grinding out our programming and/or hard wiring.
Like, say, a full adder circuit:
Wire it right, designate the correct voltages as 1 and 0, and the outputs will add one bit with carry in and out. Indeed, more consistently correctly than we do.
Mis-wire, and it won’t, just as if the voltage-state assignments are wrong. But the circuits neither know nor care that they are performing arithmetic, they simply respond to inputs per the mechanical performance of the given circuits.
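To make the point concrete, here is a minimal sketch of that full adder in Python (standing in for the actual logic gates): sum and carry-out are pure Boolean functions of the inputs, with no "awareness" anywhere in the mechanism.

```python
def full_adder(a, b, carry_in):
    """One-bit full adder: pure Boolean logic, no understanding required.

    a, b, carry_in are 0 or 1; returns (sum_bit, carry_out).
    """
    sum_bit = a ^ b ^ carry_in                             # XOR of the three inputs
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)  # majority function
    return sum_bit, carry_out

def ripple_add(x, y, width=8):
    """Chain full adders bit by bit, carry rippling upward, to add integers."""
    carry = 0
    result = 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

Wired correctly, the outputs are arithmetic to us; to the circuit they are just voltage levels propagating through gates.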
That is the context of my comment at 79 in the thread:
mohammadnursyamsu: All current programs on computers work in a forced way, there is no freedom in it, the flexibility does not increase the freedom one bit.
[Z:] All you have done is introduce yet another term, “freedom”, which is not well-defined in this context.
Absent responsible, rational freedom — exactly what a priori evolutionary materialist scientism cannot account for — you could not actually compose comment 73 above.
In short, freedom is always there once the mind is brought to bear, and without it we cannot be rationally creative.
And per observation, computation is a blind, mechanical cause effect process imposed on suitably organised substrates by mind. In fact, a fair summary of decision node based processing is that coded algorithms reduced to machine code act on suitably coded inputs and stored data by means of a carefully designed and developed . . . troubleshooting in a multi-fault environment required . . . physical machine, to generate desired outputs. At least, once debugging is sufficiently complete. (Which is itself an extremely complex, highly intuitive, non algorithmic procedure critically dependent on creative, responsible, rational freedom. [Where, this crucial aspect tends to get overlooked in discussions of finished product programs and processing.])
There really is a wizard behind the curtain.
Freedom, responsible rational freedom, is not to be dismissed as a vague, unnecessary and suspect addition to the discussion; it is the basis on which we can think at all, and ground and accept conclusions on their merits instead of being a glorified full adder circuit.
Where, of course, inserting decision nodes amounts to this: set up some operation, which throws an intermediate result, a test condition. In turn, that feeds a flag bit in a flag register. On one alternative, go to chain A of onward instructions, on the other go to chain B. And this can be set up as a loop.
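That flag-and-branch mechanism can be modelled in a few lines (a toy Python model with illustrative names; real flag bits live in a hardware flag register):

```python
def run_countdown(n):
    """Toy model of a decision node: an operation throws an intermediate
    result, a test condition sets a 'zero flag', and that flag bit alone
    selects which chain of instructions runs next -- set up as a loop."""
    acc = n          # accumulator register
    trace = []
    while True:
        acc -= 1                 # some operation, throwing an intermediate result
        zero_flag = (acc == 0)   # the test condition feeds a flag bit
        if zero_flag:
            trace.append("chain A: exit loop")   # branch taken when flag set
            break
        trace.append("chain B: loop again")      # branch taken when flag clear
    return acc, trace
```

No weighing of reasons occurs anywhere in the loop; the flag bit mechanically routes control to chain A or chain B.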
First, the classic 6800 MPU as an example:
Let me add [Nov 28] a more elaborate diagram of a generalised microprocessor and its peripheral components, noting that an adder is a key component of an Arithmetic and Logic Unit, ALU . . . laying out the mechanisms and machinery that, properly organised, will execute algorithms:
Next, the structured programming patterns that can implement any computing task:
It should be clear that no actual decisions are being made, only pre-programmed sequences of mechanical steps are taken based on the designer’s prior intent. (Of course one problem is that a computer will do exactly what it is programmed to, whether or not it makes sense.)
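The three classic structured patterns, sequence, selection and iteration, suffice for any computable task; a minimal sketch (a hypothetical routine, chosen only to show all three patterns in one place):

```python
def sum_of_even_digits(n):
    """Sequence, selection and iteration in one small routine."""
    n = abs(n)               # sequence: steps executed in order
    total = 0
    while n > 0:             # iteration: repeat while a condition holds
        digit = n % 10
        if digit % 2 == 0:   # selection: branch on a test condition
            total += digit
        n //= 10
    return total
```

Every run is a pre-programmed cascade of these three patterns; nothing in the routine "decides" anything.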
As a related point, trying to derive rational, contemplative self aware mindedness from computation is similar to trying to get North by heading due West.
Samuel Johnson, reportedly responding to the enthusiasm for mechanistic thinking in his day, is apt: All theory is against the freedom of the will; all experience for it. (Nor does this materially change if we inject chance processes, as such noise is no closer to being responsible and rational.)
If we are wise, we will go with the experience. END
95 Replies to “The freedom/mind issue surfaces again”
Good as usual!
Maybe if Samuel Johnson lived today, and knew the advances of physics and of scientific thought, he would be more optimistic about “theory”. 🙂
Maybe one day theory and experience will go hand in hand.
Ah GP, will we or our grand kids live to see that day? KF
ID’s case for the existence of intelligent causation is firmly based on the best knowledge we have.
One can doubt everything external to the "I" (consciousness), but for each "I" it holds that it is a logical impossibility for the "I" to doubt its own existence.
In other words, the existence of consciousness is the most certain knowledge one has. All other knowledge must be in compliance with the extraordinary status of “I exist”.
Next, reflecting on knowledge and rationality, one invariably comes to the conclusion that for these things to exist and have value there has to be an “I” in control of thoughts.
If thoughts are produced by blind non-rational forces and events — beyond our control — then there can be no rationality. There can be no rationality simply because the entire process, from start to finish, is controlled by non-rational forces.
Those who refuse to give up on rationality are forced to reject naturalism.
Maybe our grand kids! 🙂
And we are happily working to help make that future possible.
“In other words, the existence of consciousness is the most certain knowledge one has. All other knowledge must be in compliance with the extraordinary status of “I exist”.”
Very well said!
I often say that consciousness is “the fact of all facts”. Building an empirical map of reality ignoring consciousness as the primordial fact is complete folly.
And yet, it’s what “scientists” have been doing for decades, in the recent past. Very sad indeed.
Box: In other words, the existence of consciousness is the most certain knowledge one has.
It’s not clear that the experience of consciousness is an entirely accurate depiction of what is happening in the mind.
Zachriel: All you have done is introduce yet another term, “freedom”, which is not well-defined in this context.
kairosfocus: Actually, not.
You forgot the definition suitable to the context.
“It’s not clear that the experience of consciousness is an entirely accurate depiction of what is happening in the mind.”
It’s not clear what you mean!
However you may like to define “mind”, it is obvious that consciousness is the seat of all those events which can be called “conscious representations”. In all conscious representations, and that includes all states of consciousness, including subconsciousness, dreams, and so on, it is the “I” which perceives different forms. The I is the connection between all mental contents which are represented. It is the common subject to a variety of mental objects.
Conscious things happen in consciousness, not in the mind. The I perceives mental objects.
gpuccio: it is obvious that consciousness is the seat of all those events which can be called “conscious representations”.
It’s been known for a long time that many mental representations occur in the subconscious mind.
Can’t you read?
“In all conscious representations, and that includes all states of consciousness, including subconsciousness, dreams, and so on,”
Where’s the problem?
Happy Thanksgiving to all!
‘Am I doing the thinking or is something else — beyond my control — doing the thinking?’
If the latter is true, then “I” am outside the domain of rationality.
If I don’t think, don’t ask me anything. I’m lost in irrationality. Don’t make me part of any rational inquiry. Address your questions to whatever it is that is thinking.
gpuccio: “In all conscious representations, and that includes all states of consciousness, including subconsciousness, dreams, and so on,”
That’s a very odd definition of consciousness, which usually refers to awareness. Nor is it consistent with the discussion you responded to above, which has repeatedly referred to the “I”, or ego. The claim that we can be certain of the existence of consciousness seemingly must be referring to the awareness of the mind, not the parts that are subconscious, or functions that have only recently been uncovered.
You probably mean mind, the consciousness being the surface awareness of the mind.
You speak of “the experience of consciousness” happening in the “mind”. Not sure what you mean here. Are you saying that the mind is having this “experience of consciousness”? If so, that would be fundamentally incoherent…
Do you not understand that it is incoherent to suggest that something other than consciousness is having “the experience of consciousness”? The experience of consciousness presupposes consciousness.
Consciousness is the ultimate starting point — the primordial datum — , there is not something else, which is more basic, that is having “consciousness” as an experience. The only thing that can have “consciousness as an experience” is consciousness itself.
No. We are using a completely different language here.
I will try to clarify.
Consciousness is any subjective experience. Conscious events require a subject which experiences them. That’s what I call the I.
The I is not what you seem to mean by “ego”. Of course, it’s a question of how one uses words. You seem to refer to self-awareness, or to some more complex structure.
No. The I, in my context, is simply the subject.
Therefore, all conscious experiences, including subconscious processes and dreams, are perceived by the I.
We can define the ego as some more complex super-structure, usually the sum of mental features which prevails during the wakeful state. In that sense, we could say that that ego is less present during subconscious processes and dreams. But the perceiving I is always there.
That I is simple and transcendental. It is not a mental structure. It perceives all mental structures, and can more or less be identified with them.
You must not confuse "subconscious" with "unconscious".
Subconscious processes are conscious representations which are represented at a different level. Think of it as the difference between macular vision and peripheral vision. Peripheral vision is represented in consciousness, but in a different way compared to macular vision.
The wakeful state and the “conscious” ego are only part of our consciousness. They are the tip of the iceberg, not the whole thing.
Z, freedom is in the context of the crucial difference between a non-rational, mechanical computational entity and the agent who must actually be able to reason, following ground and consequent by choice as opposed to GIGO-limited mechanical computational steps perhaps with some injection of randomness. If we in particular — and among other things the designers of hardware and programmers of code — are not significantly, responsibly free the project of rationality collapses. Or more realistically worldv views that would squeeze such out end in self referential absurdity. With evolutionary materialistic scientism as exhibit A. KF
After dinner thought exercise.
If one did have a soul/free will then , for deniers of this, how would that be different from the way we are now?
What would deniers notice about themselves different if they had a soul independent of chance etc?
I doubt that I have an eternal soul, but I do believe I have free will to some degree. At least it seems that way.
To answer your question, I don’t know if I would notice any difference. That’s what I conclude from life experience, reading, and discussion with theists, including my wife of a number of years.
Ponder the computational challenge.
The computer — whether digital, analogue or neural net — is strictly non-rational, it only has significance because of complex, functionally specific organisation and associated information (in codes and/or signals), which allows signal/info processing in ways that are based on mechanical causal chains but produce useful outputs. (And of course, it “works” is no proper substitute for truth or right.)
Reppert puts the matter thusly:
See the category-jump problem and the inescapable need for the perceiving, aware I if one is to move to insightful, rational contemplation rather than blindly mechanical computation?
Now, ponder what it would be like if you are actually ensouled, enconscienced and under actual moral government and ultimate accountability before the truth and the right. However, just for argument, ponder the further case that you have been immersed in a culture and in institutions that indoctrinate us with the message that only the material, concrete and physically manifest causal chains of chance and necessity are real and have objective character; the subjective is illusory. It is not scientific, as you cannot put it in a test tube or encode it in signals and process it. It is thus dubious, spooky, superstitious, even nonsense and fairy tales. And if you go there, your intelligence and acceptability in circles that count will be degraded.
Would you not be inclined to lock out or dismiss lines of reflection that go beyond the circle of such scientism dominated by an implicit evolutionary materialism?
As was once noted in a famous sermon, the eye is the lamp of the body. If your eyes are good, your body will be full of light. But if your eyes are bad your body will be full of darkness. And if therefore what you imagine to be light is in reality darkness — think here the shadow-shows in Plato’s cave — how great is your darkness.
So, maybe the issue is that there is need for fresh, transformational insights.
Then, in light of such, there would be a whole new way of seeing the world.
As a start, I again highlight that on an evolutionary materialist basis, there is an inherent, self-falsifying incoherence by way of self-reference. Haldane put it this way:
Does this come across to you with any force?
Do you sense in it the force of self-reference?
Is there rising up some deflection, some dismissal that is not cogently answering the point without smuggled in assumptions that one standing on evolutionary materialism has no right to?
As in, on what ground logic, meaning, insight, warrant, knowledge — these are not mV potentials or micro Amp currents in neural networks.
Is there not a better base, to start from the evident facts of conscience and mind and recognise that access to the observed world is through the workings of the conscious self, at whatever levels are relevant?
Could it not be that at quantum influence levels, the mental and material could interact, so that there is no lock-in of a Laplace’s Demon world of material determinism?
That, chance stochastic noise etc cannot be equated to rational insight and choice, with responsible freedom?
That, just perhaps, dear Horatio, there are more things in heaven and on earth than are dreamed of in our scientism-riddled philosophies?
Especially, when we ponder the roots of a reality where our cosmos evidently had a beginning and is credibly contingent. Multiplied by the fact of moral government, leading to the challenge of finding a world-root necessary being IS that can also ground OUGHT.
Where, after centuries of back-forth, there is but one serious candidate: the inherently good Creator God, a necessary and maximally great being worthy of ultimate loyalty and the reasonable service of doing the good in light of our evident nature.
Box: Are you saying that the mind is having this “experience of consciousness”? If so, that would be fundamentally incoherent…
Consciousness is the awareness of the world. That awareness may include the awareness of the mind itself. The “I” is the model the mind uses to understand itself. All models are wrong, but some are useful.
Box: The experience of consciousness presupposes consciousness.
It’s due to self-reflection, the mind looking at itself.
gpuccio: Consciousness is any subjective experience.
A slug has subjective experiences, is conscious of those experiences.
gpuccio: Conscious events require a subject which experiences them. That’s what I call the I.
A slug has subjective experiences, but doesn’t have a sense of ego, the “I”. It goes through life without the benefit of an internal dialogue, or abstracting about itself.
gpuccio: The I is not what you seem to mean by “ego”.
ego, the “I” or self of any person; a person as thinking, feeling, and willing, and distinguishing itself from the selves of others and from objects of its thought.
gpuccio: Subconscious processes are conscious representations which are represented at a different level.
You’re using terminology in a very odd fashion. The subconscious is the part of the mind which is largely outside of conscious awareness.
kairosfocus: The computer — whether digital, analogue or neural net — is strictly non-rational
Which is contrary to how most people use the term. Computers are thought to be rational, but unfeeling.
Z, computers — in the current sense; in the old days "computer" was a job title, and those people did reason — are strictly mechanical machines that do not reason. (NB, AmHD: rational — having or exercising the ability to reason.) These machines process input signals based on how they are organised and programmed, yielding outputs in a strictly blind, mechanical, force-based cause-effect way, not a ground-consequent way, empty of insight into meaning, inference, evidential support etc. The organisation is based on rational purposes, knowledge and skill, as will be the programming and algorithms. But the blind mechanisms will just as mindlessly process buggy software or corrupted hardware to yield nonsense — and may well crash in some cases. Cf. the OP. KF
gpuccio: “We are using a completely different language here.”
You confirm my statement.
gpuccio: “I will try to clarify.”
And I did. Apparently, you are only trying to confound.
The point is, we don't need a link to a dictionary. We simply need good will and honesty in trying to understand what we are saying.
gpuccio: “Consciousness is any subjective experience. Conscious events require a subject which experiences them. That’s what I call the I. The I is not what you seem to mean by “ego”. Of course, it’s a question of how one uses words. You seem to refer to self-awareness, or to some more complex structure.”
I will add emphasis throughout this post, just to help you in your reading and understanding.
You apparently confirm my point:
Zachriel: “ego, the “I” or self of any person; a person as thinking, feeling, and willing, and distinguishing itself from the selves of others and from objects of its thought.”
As I said:
gpuccio: “You seem to refer to self-awareness, or to some more complex structure.”
gpuccio: “Subconscious processes are conscious representations which are represented at a different level. Think of it as the difference between macular vision and peripheral vision. Peripheral vision is represented in consciousness, but in a different way compared to macular vision. The wakeful state and the “conscious” ego are only part of our consciousness. They are the tip of the iceberg, not the whole thing.”
That seems like a detailed clarification of my point of view.
Zachriel: “You’re using terminology in a very odd fashion. The subconscious is the part of the mind which is largely outside of conscious awareness.”
OK, as I said:
gpuccio: “We are using a completely different language here.”
That is simply true. We have defined two completely different concepts of “subconsciousness”.
You use “awareness” in a very ambiguous way. “Mind” too.
Let’s take your statement:
Zachriel: “The subconscious is the part of the mind which is largely outside of conscious awareness”
That would be a definition of subconsciousness?
So, it is a part of the “mind”. Which is what?
But it is “largely (!) outside”
“of conscious awareness”.
So, please, could you:
a) Define “conscious”
b) Define "aware"
c) Define “mind”
I am not asking you anything more than I try to do. I quote myself:
a) gpuccio: “Consciousness is any subjective experience. Conscious events require a subject which experiences them. That’s what I call the I.”
So, unless you have problems with the concept of “subjective experience”, I have given a rather clear definition.
Zachriel: “A slug has subjective experiences, is conscious of those experiences.”
And so? Probably. But, as I said, we have scarce understanding or evidence of the subjective experiences of a slug. Why do you keep referring to something that we don’t really know?
b) I use “aware” only as a synonym of “conscious”.
c) I scarcely use “mind”, and if I do, it’s only as referring to groups of contents which are represented by a conscious I.
My point is clear:

Consciousness:
for me: any subjective experience. It includes the waking consciousness, subconscious processes, dreams, mystical experiences, and any other state of consciousness. That's why they are called "states of consciousness".
for you: only the state of waking consciousness (it seems; please clarify)

The I:
for me: the subject, in any subjective experience.
for you: the ego operating in waking consciousness (it seems; please clarify)
I thought that I had been clear enough in my previous post:
gpuccio: “The wakeful state and the “conscious” ego are only part of our consciousness. They are the tip of the iceberg, not the whole thing.”
PS: I ran across an argument that helps us see the typical errors at work:
The problems lie in gross conflations and confusions in 2 and 3.
Deep Blue is in no wise playing, much less playing Chess. It has no responsible freedom, no sense of grounds and consequences, no weighing up of judgements, no true decision. Kasparov is just the opposite.
In 2, computers use the results of rationality, mechanically, to process information by — let’s talk bit based machines — mechanically, blindly manipulating bit patterns in registers that were coded based on a language and design wholly external to the machine.
But by imposing evolutionary materialism on Kasparov, he too becomes a Deep Blue; and since he, like us, is conscious and rational, then (so the argument runs) a sufficiently complex computer will be the same.
But in fact the reduction of a human to a meat, wetware machine decisively undermines the foundation of rationality and ends in self-referential incoherence.
But, for those bewitched by the fashionable ideologies of our day, the argument seems plausible, especially when scientism locks out any other perspective.
PPS: Perhaps, this from Cothram may help:
First, let me stress that in my post #18, I was merely trying to answer Robert’s question. I don’t know of any specific ways in which my day-to-day experience of life would differ if I did have a soul. I don’t have much to say about the OP itself.
To address a few of your questions:
Sure, but I’ve never felt that the operation of my mind is (solely) due to blind mechanical computation. And I haven’t the slightest idea whether brain states are in one-to-one correspondence with beliefs.
I can imagine that happening. However, I don’t move in circles where one’s beliefs on this matter make any difference, so there’s not much at stake here for me. I’ll leave the culture-war aspect for others to discuss.
But just to clarify, I am about as anti-scientism as one can be. I’m very skeptical about attempts to apply scientific or mathematical findings to other areas. One example: Gödel’s incompleteness theorems, which surely you agree are widely abused.
gpuccio: And I did.
Yes, you clarified you are using terminology in a non-standard way. That’s okay, except it makes it hard to understand your point when it needs to be translated into standard English.
gpuccio: “Consciousness is any subjective experience. Conscious events require a subject which experiences them. That’s what I call the I.
Okay. That’s what you call it. What do you call the internal monologue in humans that most people consider representative of the “I”?
gpuccio: The I is not what you seem to mean by “ego”.
A standard definition was provided. You are using the term in a different fashion.
ego, the “I” or self of any person; a person as thinking, feeling, and willing, and distinguishing itself from the selves of others and from objects of its thought.
gpuccio: That is simply true. We have defined two completely different concepts of “subconsciousness”.
Again, a standard definition was provided, though we did neglect to provide a link.
“existing or operating in the mind beneath or beyond consciousness:”
The etymology is that the subconscious is beneath the conscious.
Let’s return to your original point, which was approval of this statement:
Box: In other words, the existence of consciousness is the most certain knowledge one has. All other knowledge must be in compliance with the extraordinary status of “I exist”.
Using your definition of consciousness to include those aspects of mind of which someone is not even aware doesn’t really work in this context. Many activities of the mind are largely outside of “certain knowledge”.
What most people would agree is that they (what word do we use to refer to what is normally called the conscious mind?) have an awareness of the world, an awareness of their own minds, that they have a representation of themselves, called the “I”, that they can abstract about the world, about others, and about themselves. They also have learned that there is a (what word do we use to refer to the activities of the mind that are not in direct awareness?) mental world that occurs below the level of awareness. This activity does slip into awareness at times, but it has been shown that much of it remains below the level of the part of the mind that is aware, or at least below the level of the part of the mind that can communicate with the outside world.
kairosfocus: Deep Blue is in no wise playing, much less playing Chess.
For some strange definition of “playing Chess”. Of course Deep Blue plays chess. It does so by making moves on the basis of plans.
DS, the fact that you are conscious, purposeful, responsibly free and rational, able to warrant claims and thus knowing, cannot be accounted for on materialistic grounds. Indeed, that is why there are so many cases of trying to set such first person experiences aside as delusional, whatever clever words will be used. The resulting self referential incoherence will be plain. KF
Z, I pointed out the equivocation involved precisely to see how you would respond. Deep Blue has no intention to engage in a sport for fun or profit; it is simply processing bits per a program and per its machine organisation, blindly and mechanically. It does not even understand what it is to be appeared to blue-ly, much less what it means to have a name and a distinct, enduring identity, awareness and will, much less to evaluate and accept grounds and logically draw out consequences on meanings. Don't even go to the hints of IBM in the name we attach to the machine. As a result, "I, Deep Blue, am playing Kasparov at Chess, an ancient stylised war game" is utterly, absurdly irrelevant to what is happening with the machine. The fallacies of anthropomorphising jump up and scream out. It is simply blindly, in the ultimate sense unquestioningly, processing bits in registers that are freighted with algorithmic utility in codes created and used for purposes wholly external to the machine. Load, shift left, invert, add, subtract, 2's complement, test flag register and branch on a given bit 0, etc. KF
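Those register operations can themselves be mimicked in a few lines, underlining that each is a blind bit manipulation (an 8-bit register modelled here as a Python integer masked to 8 bits; the names are illustrative, not tied to any particular instruction set):

```python
MASK = 0xFF  # model an 8-bit register

def invert(r):
    return ~r & MASK             # one's complement: flip every bit

def shift_left(r):
    return (r << 1) & MASK       # top bit falls off the end

def add(r, s):
    return (r + s) & MASK        # wraps around; knows nothing of overflow's meaning

def twos_complement(r):
    """Negate by invert-then-add-one: pure mechanism, no arithmetic 'insight'."""
    return add(invert(r), 1)
```

That `twos_complement(5)` yields the register pattern we read as -5 is a fact about our coding convention, not about anything the bit operations "know".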
OK, but I’m not a materialist, and am not trying to set anything aside as delusional.
kairosfocus: Deep Blue has no intention to engage in a sport for fun or profit
No, but that’s not a requirement to play chess. It’s like saying a steam drill doesn’t drill because it doesn’t have the intention of John Henry, who, by the way, didn’t do it for fun or money, but out of pride.
F/N: where this ends is, you or I or an unborn child in the womb have no more value or worth than that outdated computer now being scrapped to mine its valuable bits and pieces. This stuff is not without serious consequences. KF
Z, if you refuse to understand the implications of agency, self-identity, intentionality, responsible freedom and rationality in I play a game, I cannot stop you. But I can point out the absurdity and the sobering consequences of reducing man to machine. KF
DS, the problem is not whether you are personally explicitly committed to evolutionary materialist scientism, but the degree of influence it holds over you, sometimes even unconsciously. Even as an outspoken opponent to Marxism on my campus, its influences seeped in subtly in ways it took years to clean out again, including in visceral, intuitive responses that were simply soaked in. KF
PS: It may be helpful for you to articulate your core worldview commitments and assess them on factual adequacy, logical and dynamical coherence, and explanatory power. At the very least to yourself.A good first point is to look at the root of reality and linked matters ontological and moral, applying to moral governance, community life and civilisational consequences.
Z, steam drills drilling are a physical, mechanical process involving utterly no responsible rational freedom. John Henry, a man of an oppressed minority eking out a living for his family, faces an existential threat and, as the song goes, gave his life fighting for himself and presumably his family in the face of a cruel robber-baron calculus of efficiency. Do you not see where your lines of thought are headed? KF
kairosfocus: if you refuse to understand the implications of agency, self-identity, intentionality, responsible freedom and rationality in I play a game, I cannot stop you.
You can play a game, or play a radio. Which transitive verb would you prefer concerning a computer and chess?
kairosfocus: Steam drills drilling are a physical, mechanical process involving utterly no responsible rational freedom.
That’s rather the point. The steam drill and John Henry both drilled.
Well, I think we all do that on a continual basis.
Z, you are now equivocating, play. That’s like equating a Jack that swims with one that lifts a car with a flat. KF
DS, the point is to move to clear thinking. KF
Yes, and we do our best.
kairosfocus: That’s like equating a Jack that swims with one that lifts a car with a flat.
You didn’t answer. You can play a game. You can play a radio. Which transitive verb would you prefer concerning a computer and chess?
Z, the computer is not an agent at all. It is not playing. The program loaded is being executed and carries out an interactive chess move algorithm so that the live player is matching wits with the designers of the program. KF
kairosfocus: the computer is not an agent at all. It is not playing.
A steam drill drills. Are you really claiming a machine can’t be the subject of a verb?
A few months ago Silver Asiatic wrote:
I have come to the same conclusion.
Z, a drilling is a mechanical brute force causal process. So is computation. Rational, responsible, contemplative freedom (such as is required to play a game) is precisely not mechanical brute force in action and the attempt to reduce it to such ends in self-referential incoherence and undermining of mind, logic and reason. That so many are blind to that in our day speaks volumes and none of such to the good. KF
PS: Just as a reminder, Haldane’s caution:
It is largely facile to argue about definitions.
Creationism / intelligent design theorists use the definition of choosing where the possibilities are in the future, and one of them is made the present, which is called a decision. It is the definition of choosing that is based on spontaneity, spirituality, agency,etc. That choosing is the fundamental mechanism of creation.
And that has a lot of potential for explaining phenomena in nature: not just explaining human free will, obviously, but also the design of organisms. Because it means that in the model you can have all organisms as potential organisms in the future, and choose intelligently among them, and the results of the model correspond with what is found in the universe.
In any case, when you criticize intelligent design theory, you cannot mangle it, cannot change its definitions. You can only evaluate intelligent design theory on its own terms, using its own definitions, and then see if it corresponds with how things work in the universe.
F/N: I have added to the OP a further diagram that will help clarify the mechanism of computation. Instructions are fetched, decoded and executed at machine code, bits in registers level, executing a pattern of successive input, processing, output in the context of a purposeful plan, the algorithm. That is, the step by step sequence of actions that effects an overall outcome that is intended by its designer. At every stage, the register transfer and transformation operations are purely mechanical, taking significance from their designed purpose and the underlying code that gives them functional significance. For example, ASCII text, RGB etc colour codes, sound coding and floating point numbers are all externally imposed and processed by physical, mechanical instantiation of algebraic operations — and yes there is an algebra of operations, transforming input into output functions that vary with time or space as key independent variables. Hence the crucial role of registers and an arithmetic and logic unit in a processor, controlled by a control unit and interfacing through address, data and control signal line buses. Thus also the importance of digital signal processing and the underlying mathematics of difference equations, the digital — discrete state and/or discrete time — analogue of differential equations. Where, such operations can be mechanically instantiated once relevant values, variables and functions can be suitably coded. Hence the emerging digital information, communication and control era. KF
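[The fetch, decode, execute cycle described in the comment above can be illustrated with a toy accumulator machine. This is only a sketch: the instruction set (LOAD/ADD/STORE/HALT) and the program are invented for illustration, but each step is the same kind of purely mechanical register transfer the comment describes.]

```python
# Toy fetch-decode-execute loop: a minimal accumulator machine.
# The instruction set here is invented for the sketch; real processors
# do the same kind of mechanical stepping in hardware.

def run(program, memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, arg = program[pc]           # fetch the next instruction
        pc += 1
        if op == "LOAD":                # decode and execute: pure mechanism,
            acc = memory[arg]           # no awareness of what the bits "mean"
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Add the numbers in cells 0 and 1, store the result in cell 2.
mem = run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)], [2, 3, 0])
print(mem[2])  # 5
```

[The point either side could take from the sketch: the machine "adds" only because the designer assigned meaning to the cells and wired the steps accordingly.]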
MNY, good point. KF
kairosfocus: a drilling is a mechanical brute force causal process.
Actually, a steam driller is more complex than a fishing reel. Nonetheless, steam drillers drill, just like fishing reels reel. People play chess, and people play the radio. If computers don’t “play” chess, what transitive verb applies?
mohammadnursyamsu: Creationism / intelligent design theorists use the definition of choosing where the possibilities are in the future, and one of them is made the present, which is called a decision.
Yes, that’s how computers make decisions, too.
What you write is empty say-so. The mathematics which describes choosing in the creationist sense is demonstrably completely different from the if-else logic of computers.
For physics this creationist choosing means that objects consist of the laws of nature, rather than that they follow the laws of nature. As laws unto themselves, objects compute their next state. So to say, one can model objects in nature mathematically, and such a description requires a mathematical future part to each object (as well as a past), and this future part consists of definite possibilities. Just as we can accurately reflect the present state of an object, we can also accurately reflect the future of an object, albeit that its future consists of alternative values.
It is all demonstrably totally different from if-else logic, yet you insist it is the same…
The theory is then that the DNA system is like a little universe in its own right, much like human imagination is its own world. Still, things in the physical world can be copied to a representation in imagination, as things can be copied from the physical world to the DNA world.
And natural selection, as it is explained in terms of being forced, is then in principle knowable in advance, in this DNA world. Intelligent design can use natural selection to look into the future to see what organisms are fit, explaining how organisms are designed with a design principle of survival.
Do you consider that all still the same as if else logic?
…we’ve been through it all. The way computers simulate choosing is with the random function. With the random function it looks like the computer can turn out one of several different ways in the moment, autonomously. The if-else function does not even simulate choosing, let alone that it is choosing.
mohammadnursyamsu: For physics this creationist choosing means that objects consist of the laws of nature, rather than that they follow the laws of nature. As laws unto themselves objects compute their next state.
How does the law of gravity choose?
mohammadnursyamsu: The way computers simulate choosing is with the random function.
No. While a random element can be added to a system, that isn’t essential to decision-making. Rather, you have a complex interface to the world which the computer analyzes. Then the computer projects into the future, compares to its criteria for success, then reaches a decision. That’s not so different from what people do, such as when they play chess.
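[The analyze, project into the future, compare to criteria, decide loop described in the comment above is, for games like chess, classically a minimax search. A minimal sketch follows; the `moves`, `apply`, and `evaluate` parameters are hypothetical placeholders for a real game's rules and heuristic, not any actual engine's API.]

```python
# Minimal minimax sketch: project moves into the future, score the
# resulting positions with an evaluation function, and pick the move
# with the best guaranteed outcome against a perfect opponent.

def minimax(state, depth, maximizing, moves, apply, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None        # leaf: just score the position
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for m in options:
        score, _ = minimax(apply(state, m), depth - 1, not maximizing,
                           moves, apply, evaluate)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# Toy game: a position is a number, a move adds 1 or 2, higher favours
# the maximizer, and positions of 4 or more are terminal.
score, move = minimax(0, 2, True,
                      moves=lambda s: [1, 2] if s < 4 else [],
                      apply=lambda s, m: s + m,
                      evaluate=lambda s: s)
print(move, score)  # 2 3
```

[Whether this mechanical lookahead counts as "deciding" is exactly what the two sides of the thread dispute; the sketch only shows what the computation itself consists of.]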
Z, you are equivocating and confusing whole categories. The FSCO/I in reels and drills shows their design. But drilling is a physical brute-force process; making a move in a chess game is a matter of agent choice. A radio circuit that is on, so that it detects, demodulates and amplifies a signal and then converts it to sound, is again utterly different. Was it GP talking about word games? KF
You still didn’t answer. Steam drivers drive. So did John Henry. People play chess, and people play the radio. If computers don’t “play” chess, what transitive verb applies?
Kasparov defeats chess-playing computer – Feb 17, 1996
Computers do as humans tell them to do.
Computers don’t play chess because they have no clue as to what playing means or what a board game is or that tic-tac-toe, checkers and Go are also board games. Chess playing computers are part of what is known as GOFAI, the bankrupt symbolic AI paradigm of the last century. As such, they suffer from the well-known symbol-grounding problem. IOW, it’s just rule-based or brute-force search crap that has nothing to do with intelligence or consciousness.
You’re out of your league, Zachriel.
Absolutely. Time is precious. It’s not worth arguing. They will never acknowledge their intellectual bankruptcy.
Z, as we all know, computers mechanically fetch, decode and execute machine code instructions from algorithms as designed and loaded by their designers and programmers. They do not perceive the positions in a game, creatively and freely work out possible and likely chains of moves, then determine, “Oh, I am tutoring a kid, so I’ll give him a chance; let me play this move.” As for drilling a rock, that is, as you full well know but choose to keep on typing as though repetition changes patent facts, a purely mechanical process. But then evolutionary materialist scientism forces adherents and fellow travellers to try to reduce mind to blind chance and mechanical necessity regardless of the resulting self-referential incoherence that shows such is necessarily false. For, it is a first truth that we are responsibly free enough to be rational and morally governed. Denied only on pain of absurdity. KF
kairosfocus: computers mechanically fetch, decode and execute
So computers fetch, decode and execute. Doesn’t seem to work in a sentence, though. Computers fetch chess. Computers decode chess. Computers execute chess.
Everyone else in the world uses the phrase computers play chess. Seems to work fine. Not sure why you are making it an issue.
kairosfocus: They do not perceive the positions in a game, creatively and freely work out possible and likely chains of moves
Actually, artificial neural nets work in much that fashion; trying moves then evaluating positions based on what they have learned from previous play.
Z, you snipped out of context. Note: “computers mechanically fetch, decode and execute machine code instructions from algorithms as designed and loaded by their designers and programmers.” You snipped and substituted: “Computers fetch chess,” then went on to anthropomorphise. Computers do not try out moves, etc.; they are down at the machine code level and register transfer level, churning away. Programmers arrange such so that on given inputs, processing and outputs, alternative chess moves in a situation will be weighed up on some scale and a best alternative, per an externally given weighting function, will be exploited. Again and again, you are trying to reduce mind to meat and signal processing. But the evidence is that the whole evolutionary materialist reductionism is self-referentially absurd. KF
PS: I forgot: they are not learning either; there is an algorithm for improving weightings, and thus it is in the algorithm, and where it comes from, that we seek intelligence. Nor am I impressed by objections along the lines of “oh, how dare you suggest machines cannot learn”. We know what is going on, and it is not an active process; the word is being used with equivocation in a context liable to lead to huge worldview errors, multiplied by refusal to acknowledge the self-referential absurdity long since on the table.
Dr Selensky, you have a point. KF
Machines do not play because they do not know what “play” means. Wake up, troll. On your toes.
Mapou, they do not know either. They load and store data structured as functional information, but that is not the same thing as knowledge: well-warranted, credibly true and/or functionally reliable belief. Which last is an active state of agents. KF
Zachriel, how does the computer “choose” which opening to use?
When one uses an abacus to do math calculations, is the abacus doing the calculating? Of course not. It is only a tool that makes it easier for a rational being to do calculations. The most sophisticated computer on the planet can do just as much thoughtful calculating, or chess playing, or anything else that requires rationality as an abacus can do math calculations, which is to say computers never do anything thoughtful at all.
A computer is just a very sophisticated tool very cleverly configured to manage electron flow in a predictable, programmatic way, but has no more awareness of what it is doing or how it is being used than an abacus does, or a can-opener, or a hammer. It is just a tool, albeit a very intricate one.
It is only this extreme intricacy of the computer’s Central Processing Unit and the esoteric nature of how that CPU and its instruction set take advantage of the properties of electron flow and resistance to electron flow that allow what it does to be confused with “thinking” by those who have never dealt with computers at the level of the CPU, its instruction set, logic gates and so on. It is just a tool no smarter than your nail clippers.
Thank you for bringing the abacus into the discussion!
Indeed, that has always been one of my favorite concepts. It is true that the results of an algorithmic computation are independent from the hardware which implements the computation itself. That makes the folly of strong AI theory absolutely self-evident.
Let’s imagine that strong AI theory (understood as the idea that consciousness arises as a by-product of software complexity) may be true. Then, let’s say that we have a sophisticated computer which performs complex computations (parallel, loop-rich, or whatever) so that, at last, consciousness arises.
Now, it must be true that if we perform the same computations, although more slowly, by a very big abacus system, that system should become conscious too!
The whole idea is folly.
The truth is that any algorithmic computation is only the sum of very simple computations. The whole system can be made mechanical, but it remains the sum of simple events.
So, if we really don’t think that computing 2 + 2 on an abacus generates consciousness (either if it is done by a person or automatically), why in the world should a long series of the same kind of events, in whatever order, become conscious?
All the single events which take place in a computer are essentially of the kind of a 2 + 2 sum, or of simple logical gates. A computer, however big and complex, is just a big automatic abacus, nothing more.
It is not conscious.
It does not understand any meaning.
It does not learn.
It does not feel anything.
It does not want anything.
It does not choose anything.
Whenever we use those words, as Zachriel usually does here, to describe what a computer does, we are only using analogies, and IMO very bad analogies. One can use words as one likes, but the underlying truth does not change.
Although I disagree with Zachriel regarding the capabilities of current machines, I have a problem with the above. I do research in AI and I believe computers can do these things but unconsciously. Computers can learn and can choose in the same way that animals learn and choose: according to preprogrammed instincts/motivations.
Programs do exist that can rearrange themselves to reflect their environments. They are called learning machines. They do it according to precise rules but then again, so does the human brain. Your spirit did not learn to see and recognize patterns; your brain did. The difference is that we can choose to override the recommendations of our own brain. Intelligence is always at the service of motivation.
Our future machines will act as if they do understand their environments and the words we speak to them. They will behave very intelligently. The reason is that those things are causal/physical phenomena that can be computed in a machine. Most of you will witness the arrival of these machines in your lifetimes. It will usher in the age of full unemployment. Wait for it.
gpuccio @ 66,
Very well put!
As for Zachriel, he appears to be just as gullible about the possibility of dumb, lifeless matter mindlessly and accidentally assembling itself into the metabolizing, self-replicating, digital information-based nanotechnology of life as he is gullible about the abilities of computers. I would say he doesn’t know what he doesn’t know, but I think he doesn’t want to know what he doesn’t know, because that might disturb his devout atheism.
Of course gravity cannot choose in your sense of sorting present variables, but it can make an alternative future the present. That is some complicated, self-referential maths, where the law of gravity is entered as data into the law of gravity. Supposedly this mathematics shows that Newton’s gravity, treated in this way, will have an anticipatory aspect which equates to Einstein’s gravity. The same ‘aberration’ in the perihelion of Mercury is predicted with anticipatory Newtonian gravity as it is with Einstein’s gravity. So it means Newton’s theory is reinstated, and Einstein’s theory is reconfigured as an anticipatory aspect of Newtonian gravity. With the added benefit that while Einstein’s theory can only be applied with a steady frame of reference, the anticipatory Newtonian theory can be applied with an accelerating frame of reference.
This above here is just sketchy, just to show broadly that theory can be made in terms of anticipation, which anticipation you still bizarrely equate to if else logic, which it is nothing of the kind.
As far as computer randomness simulating choosing goes: in a computer game, obviously, if what the monster in the game does depends on the random function, then this can give a credible experience of the monster choosing what to do. And no matter how sophisticated you make any if-else logic, once you know that everything the monster does will be exactly the same in the same situation, the illusion of the monster choosing anything is lost. While with randomness, the player might have the illusion that the monster has emotions, that it is “courageous” or “vicious” in deciding what it does. So there is the link to subjectivity again in regards to the agency of a decision, while there is no link to subjectivity at all in your if-else logic.
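[The contrast drawn in the comment above, between fixed if-else behaviour and randomized behaviour, can be shown in a few lines. The “monster” and its actions are invented purely for illustration.]

```python
import random

# Contrast a fixed if-else response with a randomized one. Whether the
# randomized version "chooses" is the point under dispute; the code only
# shows the behavioural difference.

def monster_if_else(player_distance):
    # Same situation in, same action out: fully determined.
    if player_distance < 5:
        return "attack"
    return "wait"

def monster_random(player_distance, rng=random):
    # The same situation can yield different actions on different runs.
    if player_distance < 5:
        return rng.choice(["attack", "flee", "roar"])
    return "wait"

print(monster_if_else(3))   # always "attack"
print(monster_random(3))    # "attack", "flee", or "roar"
```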
In passing back above, I pick up in 32:
In this difference lies the whole problem.
To play a game or a sport inherently involves intent, motivation (fun or profit), engagement, goal-orientation, purpose, genuine decision, and so forth; thus, agency. But we live in a time where, thanks to the dominance of evolutionary materialist scientism, these, per the force of that ideological imposition, must be squeezed out, discredited as dubious or illusory, or even as the delusional and demonic superstitions of Sagan and Lewontin.
And so, the indoctrinated are locked into a cramped, implicitly self referential ideology that refuses to see its own absurdity as responsible freedom is the premise of reason on acknowledging ground and accepting the following consequent as aligning with guidestar principles of logic and evidence.
In trying to reduce an agent to a blind wetware neural network computational device with parallelism, looping and feedback, the very point at stake is squeezed out. So is the first fact of all: our self-perception of responsible rational freedom to understand, decide and act in a world that presents itself to us through our senses, awareness and understanding. Where, there is a difference between good sense and nonsense, sanity informed by wisdom and delusional, disintegrative insanity that is ever so wise in its own eyes and clever in its own self-deceiving conceits headed for a march of folly and shipwreck.
This is not a clash of science vs superstition.
Your side cannot even see the foundational Scientists’ credo that they were thinking God’s creative and sustaining providential thoughts after him, so living in a world of order and law that allowed them to examine cases, observe pattern, test consistency and with some degree of confidence summarise underlying law; albeit with some degree of provisionality and open-mindedness to correction. As in the rule of a ruler and architect of the world-system, an ordered reality, a cosmos not a chaos.
The operative word, Z, is PLAY.
Something that agents do by free and intelligent choice, something that is not mechanical or passive or blindly deterministic and/or a matter of dice tossing chance, maybe with some loading.
What you and ilk are forced to do is to impose a materialistic procrustean bed, stretching or cutting everything to fit a cramped worldview that is at the outset self-referentially incoherent. And necessarily self-falsifying as a direct consequent.
Never mind the August Magisterium all duly dressed in lab coats and putting on an impressive show in an oh so confident manner.
The key symptom is the constant bending, distortion and equivocation in use of words that must ever be stretched, squeezed, hammered, twisted, bent to fit what Rational Wiki so tellingly summed up, after the Coup: “Methodological naturalism is the label for the required assumption of philosophical naturalism when working with the scientific method.”
Sez the Materialist Magisterium dressed up in their August Lab coats even as they sweep the self referential incoherence and question begging under the carpet.
Instead, mind, self aware mind exhibiting responsible rational freedom is our first fact. The one through which we perceive the material world.
And free choice is a characteristic act of such agency.
Where the power of mind over matter is readily seen in how mind creates functionally specific complex organisation and associated information beyond 500 – 1,000 bits, readily overwhelming the needle-in-haystack search challenge that confronts blind chance and mechanical necessity in a cosmos of 10^80 atoms, fast 1 – 10 eV valence shell interaction rates of about 10^12 to 10^14 acts/s, and a duration of 10^17 s.
In short — and this is exactly the point that has been a sticking point and last ditch bastion of materialism — FSCO/I is a signature of intelligently directed configuration or design, a sign pointing to mind at work through decision, purpose, insight, creativity, knowledge and skill. (Hence the revolutionary nature of the design inference on FSCO/I, in whatever form. Horror of horrors once triumphant materialists, designing mind is BAAAAAACK!)
Signal, not noise.
Signature, not random ink spot splashed when the bottle fell.
Signature that speaks to us in the digitally coded algorithms and linked clever organisation of cell based life and the fine tuned deeply isolated operating point of a cosmos set up so that it supports C-Chemistry, aqueous medium, cell based terrestrial planet life, right from the core laws, constants and parameters of the cosmos.
Unwelcome sign, signature and signal.
Bring out the thumbscrews! Expel the heretic! Out and stalk him and his family down to the third and fourth degree of relationship! Slander, cruelly mock and scorn! (After all, it is only ignorant, stupid, insane or wicked fundy fanatics who want to subject us to Right Wing Theocratic Christofascist Tyranny who could dare object to Science facts, Facts FACTS! We’ll give them what they deserve! [Only, those caught up in this do not see how they are becoming what they so smugly project unto the despised other while refusing to objectively assess the foundational self-referential absurdity in evolutionary materialist scientism.])
But, e pur si muove.
It still moves, undeniable, plain for those willing to see.
Game over, materialists.
Check . . . 3 moves to mate.
PS: It seems necessary to again call attention to the fatal self referential incoherence at the heart of evolutionary materialist scientism. Here, via Nancy Pearcey:
Onlookers, carefully observe the studious avoidance of facing this issue on the parts of advocates of evolutionary materialist scientism and their fellow travellers.
PPS: Yes, I have not tried to boil the above down to a short little sound bite of dismissive rhetoric. Sometimes, we need to actually read, ponder and think; if, we are to go anywhere worth going intellectually.
I have deep respect for all those who seriously study AI. I am convinced that AI can shed a lot of light about what Chalmers call the “easy” problem of consciousness. There is no doubt that the brain processes information for consciousness, and it does it algorithmically. We can certainly understand much about that from AI studies.
What AI cannot solve is the “hard” problem of consciousness: why subjective experiences exist, and what they are.
Now, unfortunately the ambiguous use of words has “warranted”, in the imagination of people, a series of “analogies” which have really no objective support from facts. They make people assume that statements about the easy problem are really statements about the hard problem. But that is simply not true.
I will try to clarify my previous statements about the computer in the light of this word ambiguity:
1) It is not conscious.
My meaning: it has no subjective experiences and representations. IOWs, in no way is it an example of a solution to the hard problem.
Alternative meanings: probably none, unless someone really thinks that the computer has subjective experiences, or that being conscious can be defined alternatively. But I understand that not even Zachriel has suggested something like that.
2) It does not understand any meaning.
That’s very important. Maybe the most important point.
My meaning: “meaning” can only be defined as a cognitive subjective experience. Therefore, the computer, having no conscious experiences, cannot understand any meaning at all.
Alternative meanings: it is perfectly possible to “freeze” into a complex software information which derives from a conscious understanding of some meaning, so that the software can operationally compute things as though it understood that meaning. But, in reality, there is no understanding at all.
3) It does not learn.
This is connected to 2).
My meaning: if we consider “learning” as a cognitive recognition of new meaning, then it is obvious that a computer cannot learn anything.
Alternative meanings: of course, a computer which has been programmed to accept new data from outer events, or from its interaction with them, can incorporate those new data into its computations, always according to the programming that it has received. Those new data can certainly bring new computational results, which can certainly be used in new computations, always according to pre-programmed instructions. However, there is no cognition in the process, therefore no “learning” in the cognitive sense I previously defined.
4) It does not feel anything.
This is easy. Feeling is a subjective experience.
My meaning: the usual meaning of feeling.
Alternative meanings: I leave that to Zachriel!
5) It does not want anything.
Easy again. Desire and purpose are rooted in feeling: we want what is felt as good or pleasurable.
My meaning: to feel that some event or course is desirable for us, in any sense (cognitive, moral, or else). Which usually motivates action to make that event or course real.
Alternative meanings: any course of action can be programmed in a software as a response to some condition. In that sense, we can say that the software “wants” to act in that way. But there is no feeling in the process.
6) It does not choose anything.
This is more subtle, and connected to 5). I will not deal here with the problem of “free” choice. I will simply distinguish between conscious choices and non conscious algorithmic nodes.
My meaning: a conscious choice is an output which proceeds from a conscious desire, in the form of what we consciously perceive as an act of “will”. Whether free or not free, we consciously feel that our choices are our choices, that they proceed from us. Otherwise, we do not call them “choices”.
Alternative meanings: any algorithm, even a very simple one, can respond to a condition with some predefined process, according to some logical gate evaluation. That’s what Zachriel calls “choice”, if I understand his thought well. That’s what “choice” means in AI. It’s OK for me, but in no way is it the same thing as a conscious choice as previously defined.
Moreover, while we can debate if a conscious choice can be free or not (IOWs, free will, if it exists, applies to conscious choices, or at least to some of them), the same cannot be said of algorithmic “choices”: they are certainly not “free” (even if we admit that free will exists), and at most they can incorporate some random element.
OK. there is always compatibilism, but I suppose that everybody here probably knows what I think of it! 🙂
kairosfocus: Z, you snipped out of context.
We asked for a transitive verb (if one can be provided). We snipped out the transitive verbs in the hopes you were attempting to answer.
kairosfocus: Computers do not try out moves etc, they are down at the machine code level and register transfer level churning away.
Computers churn chess. Is that your answer? Or are you saying we can’t use transitive verbs with machines, as in steam drillers drill holes?
Mung: how does the computer “choose” which opening to use?
Some computers always use the same opening. Some choose willy-nilly. Some choose based on past results. Much like people do!
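[The three opening policies mentioned above (always the same, willy-nilly, based on past results) can each be written in a few lines. The opening names and win counts below are invented for the sketch.]

```python
import random

# Three ways a chess program might pick an opening. The names and the
# past-results table are made up for illustration.
OPENINGS = ["King's Pawn", "Queen's Pawn", "English"]
WINS = {"King's Pawn": 12, "Queen's Pawn": 7, "English": 3}

def fixed_opening():
    return OPENINGS[0]                      # always the same opening

def random_opening(rng=random):
    return rng.choice(OPENINGS)             # "willy-nilly"

def weighted_opening(rng=random):
    # Favour openings that have scored well in past games.
    return rng.choices(OPENINGS, weights=[WINS[o] for o in OPENINGS])[0]

print(fixed_opening())  # King's Pawn
```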
harry: When one uses an abacus to do math calculations, is the abacus doing the calculating? Of course not.
On the other hand, a calculator will calculate the square root of a number.
gpuccio: It does not learn.
Artificial neural nets learn.
gpuccio: It does not choose anything.
Computers choose, using the ordinary meaning of the term.
kairosfocus: The operative word, Z, is PLAY.
Well, most everyone uses the word “play” to refer to computers that (insert transitive verb) chess. Perhaps it is your use of the word that is in error.
play: the conduct, course, or action of a game; a particular act or maneuver in a game; the moving of a piece in a board game (as chess).
gpuccio: That’s what “choice” means in AI. It’s OK for me, but in no way it is the same thing as a conscious choice
Which is why we have the term “conscious choice”, a subset of all choices. This is just semantics, but it is very hard to understand someone saying “computers can’t play chess” or “computers don’t make decisions”, when they clearly do. They’re pretty good at playing chess, actually. They can recognize your mother in a crowd, too!
Please, see my post #73.
A computer without a program for chess could never participate in a chess match. And everything that a computer does can be traced back to humans.
gpuccio: Please, see my post #73.
Thought we responded.
gpuccio: 1) It is not conscious.
gpuccio: 2) It does not understand any meaning.
That depends on what it means to understand. It may not be conscious of its understanding.
gpuccio: 3) It does not learn.
Of course computers learn, especially artificial neural nets. They aren’t conscious of learning.
gpuccio: My meaning: if we consider “learning” as a cognitive recognition of new meaning, then it is obvious that a computer cannot learn anything-
Being able to extrapolate from experience to new situations is learning. Being conscious of learning is not a requirement of learning.
gpuccio: 4) It does not feel anything.
Computers can have sensory inputs, but are not conscious of them.
gpuccio: 6) It does not choose anything.
Being conscious of choosing is not a requirement of choosing.
gpuccio: 4) It does not feel anything. 5) It does not want anything.
One day you may very well ask a computer what it wants and it will answer.
Mung: how does the computer “choose” which opening to use?
Zachriel: Some computers always use the same opening.
LoL. ok, so no choice involved at all. Got it.
Mung: how does the computer “choose” which opening to use?
Zachriel: Some choose willy-nilly.
How does a computer choose willy-nilly? Does it toss a coin, for example?
Zachriel, how does the computer choose to not play at all, or to stop playing once begun? Can it choose to not make an opening move at all?
Zachriel: One day you may very well ask a computer what it wants and it will answer.
I just asked my computer what it wanted and it chose to not answer. Or perhaps it chose to answer but used a language I just did not understand.
What do you think Zachriel? Which one is more likely?
Mung: ok, so no choice involved at all.
Peter: You always open King’s pawn when playing white.
Peter: So, no choice involved at all.
Paul: I choose to open King’s pawn when playing white.
Sally: You always have chocolate ice cream.
Sally: So, no choice involved at all.
Sue: I choose chocolate ice cream.
Mung: How does a computer choose willy-nilly? Does it toss a coin, for example?
That would be one way.
Mung: how does the computer choose to not play at all, or to stop playing once begun?
People may have a limited range of choices also.
Mung: I just asked my computer what it wanted and it chose to not answer.
You need an upgrade, obviously.
OK, your position is clear enough.
So I hope is mine.
I fully disagree with this. Meaning and understanding come from having an accurate model of one’s environment from which one can make useful predictions. This is certainly computable. Whether or not one is conscious of the model is irrelevant to its utility, IMO. Like I said, our future intelligent machines will have full understanding of their environments and of natural language and will act accordingly to accomplish the goals we give them. You will be amazed, and many people will be deceived and will conflate their intelligence with consciousness. This is not unlike the way many of us already conflate the emotional behavior of animals with consciousness.
I disagree for the reasons I gave above.
I see a difference between conscious feeling and unconscious sensing. The former is impossible without the latter, IMO. Intelligence only needs the latter. Consciousness needs both.
Only if you mean ‘consciously wanting’. A machine can certainly have appetitive and aversive behaviors just like humans and animals. You are not hungry because your spirit is hungry. You are hungry because your body is hungry. This is related to the field of reinforcement learning. It’s all physical, cause-effect stuff. There is no reason that it cannot be emulated in a machine.
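[The conditioning loop mentioned above is the basic shape of reinforcement learning: try actions, receive rewards, and shift preferences toward what paid off. A minimal bandit-style sketch follows; the actions and reward values are invented for illustration.]

```python
import random

# Minimal reinforcement-learning sketch: an agent tries actions, receives
# rewards, and incrementally raises its estimate of actions that paid off.

def train(actions, reward, episodes=1000, alpha=0.1, epsilon=0.1, rng=random):
    value = {a: 0.0 for a in actions}              # estimated value per action
    for _ in range(episodes):
        if rng.random() < epsilon:                 # explore occasionally
            a = rng.choice(actions)
        else:                                      # otherwise exploit best estimate
            a = max(actions, key=value.get)
        value[a] += alpha * (reward(a) - value[a]) # incremental update
    return value

# The "appetitive" action pays; the "aversive" one costs.
values = train(["eat", "touch_stove"], {"eat": 1.0, "touch_stove": -1.0}.get)
print(max(values, key=values.get))  # eat
```

[Whether such conditioned preference amounts to wanting, or only behaves as if it did, is the question the thread is arguing; the sketch shows the mechanism, not the verdict.]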
OK, I agree that true choice is impossible for a machine.
In friendship, I agree to disagree. But I do disagree.
If gpuccio’s statement “5) It does not want anything.” refers mainly to ‘conscious‘ events, then most of the above quoted explanation (except the first sentence) seems off topic, doesn’t it?
Mapou @82 [addendum to comment @84]
If gpuccio’s statement “5) It does not want anything.” refers mainly to ‘conscious‘ events, then most of the above quoted explanation (except the first sentence) seems off topic, doesn’t it?
Perhaps most physical causes (thirst, food craving?) could be simulated, assuming that you know all the details required to build the complete set of conditions with their associated actions in the decision table. Otherwise, you could only simulate it partially and inaccurately.
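A minimal sketch of such a condition-to-action decision table, with purely illustrative conditions, thresholds, and action names (none of them from the original discussion):

```python
# Hypothetical rule table simulating a drive such as thirst. Rules are checked
# in order; the final always-true row is the default action.
def simulate_drive(hydration, water_visible):
    rules = [
        (lambda: hydration < 0.3 and water_visible, "drink"),
        (lambda: hydration < 0.3 and not water_visible, "search_for_water"),
        (lambda: True, "continue_current_task"),  # default row
    ]
    for condition, action in rules:
        if condition():
            return action

print(simulate_drive(0.2, True))   # drink
print(simulate_drive(0.2, False))  # search_for_water
print(simulate_drive(0.9, True))   # continue_current_task
```

This also illustrates the caveat above: the table is only as complete as the enumerated conditions, and any condition the designer failed to anticipate simply falls through to the default.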
However, conscious wanting is a different kind of issue.
Can a “strong” AI robot love an unlovable person? Why? How?
Can a “strong” AI robot love someone who hates the robot? Why? How?
We know that intelligence is always at the service of motivation. So where will intelligent machines get their motivation? From us, that’s where. It’s all about classical and operant conditioning, stuff that we learned in psychology 101. If we condition them to behave like angels, that is what they will do. If we condition them to behave like assholes, then we will only have ourselves to blame when they kick our stupid arses to oblivion.
Will intelligent machines love or hate in the sense that humans love and hate? Of course not. But they will surely behave as if they did. It’s all in the conditioning. And, as I said earlier, many people will swear that robots are conscious.
Only the most careful questioning will reveal otherwise. For example, they will have no way of determining whether a pattern they have never seen before is beautiful or ugly. They will know something is beautiful to us only because we will tell them what is beautiful and what is not. They will have no sense of beauty of their own. Why? It is because beauty is not a property of physical matter. It is a spiritual concept.
One man’s opinion, of course.
Briefly, about understanding and meaning:
I read the phrase:
“the cat is on the table”.
A very simple statement.
Being a conscious intelligent being, and as my brain has the computing power to decode the language, I understand what the phrase means: I build a conscious representation of a cat on a table.
A complex computer receives the same phrase as input. Being complex, it may be so well programmed to react to the input that an observer can believe the computer understands the meaning, in the sense that it understands that a cat is a cat, and that it is on the table. But that is simply not true, because the computer has no idea of what a cat is, no idea that the cat is on the table, no idea what a table is, no idea what “on” means, and so on.
This is not only philosophy. Because computers are not conscious and have no understanding, there are important consequences. The most important for ID is this: computers cannot generate new original complex functional information, including complex original language.
In my view, you continue to conflate consciousness with intelligence. It’s frustrating because the words become ambiguous or meaningless. You write:
No. You just build a representation. Whether or not you are conscious of it does not take away from its power as a representation. When you are no longer consciously thinking about something, it does not mean that the physical representation of the thing in your brain disappears. Attention, conscious or not, moves from one cortical representation to another. I can emulate this in a computer program.
Maybe current intelligent programs do not have these abilities but I see no reason to suppose that future programs cannot know these things. They are all physical cause-effect phenomena, information that machines are exquisitely designed to process.
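The claim above, that "attention moves from one cortical representation to another" and can be emulated, might look something like the following sketch. Everything here is an assumption for illustration: attention is modeled as simply selecting the most activated representation, with decay standing in for habituation so the focus moves on.

```python
# Hypothetical sketch: "attention" as selection of the most activated
# representation; attended items decay so the focus shifts over time.
def attention_trace(activations, steps=3, decay=0.5):
    acts = dict(activations)
    trace = []
    for _ in range(steps):
        focus = max(acts, key=acts.get)  # attend to the strongest representation
        trace.append(focus)
        acts[focus] *= decay             # habituation: the attended item fades
    return trace

print(attention_trace({"cat": 0.9, "table": 0.6, "window": 0.2}))
# ['cat', 'table', 'cat']
```

Whether this mechanical shifting of a pointer between data structures deserves the name "attention" is precisely the terminological dispute that follows in the thread.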
We can probably stop it here. However, I will try to clarify my thought once again.
“Whether or not you are conscious of it does not take away from its power as a representation. When you are no longer consciously thinking about something, it does not mean that the physical representation of the thing in your brain disappears.”
No. One thing is the representation in my brain, which can be equivalent to the representation in the computer, or in any other physical medium (a photograph, etc.).
Another thing is the conscious event of my becoming aware of that representation subjectively.
That’s what the hard problem is about.
Now, not only do I become subjectively aware, I also react subjectively to that awareness, and not only to the objective physical representation in my brain. That simply cannot happen in the computer, because there is no subjective awareness there. Only objective processes.
“Attention, conscious or not, moves from one cortical representation to another. I can emulate this in a computer program.”
No, you can’t. For the simple reason that attention means what our I is aware of. The computer is not aware of anything, so it has no attention. Obviously, it processes different things at different times. If you call that “attention”, then your statement becomes true.
But then, it’s you that are conflating two completely different meanings for “attention”:
1) What the subject is aware of at some moment
2) What an object (the CPU) is processing at some moment.
OK, I have no intention of trying to convince you. If you understand my position, and still don’t agree, we can really stop it here. But if you want to clarify further points, for the sake of constructive discussion, I am happy to do so.
Mung: how does the computer choose to not play at all, or to stop playing once begun?
Zachriel: People may have a limited range of choices also.
We’re not talking about people, we’re talking about a computer and whether the computer makes a choice to play or not play a game of chess and if so, how that choice is made.
If you have nothing, we’ll all understand, really. The computer cannot choose whether or not to play a game of chess. So trying to answer how it does something that it cannot do is futile. But go ahead and try.
Not at all. I agree that a computer cannot be conscious. What I am saying is that conscious awareness does not happen in a vacuum. There is a physical part in the phenomenon that you seem to ignore. To be aware of a fruit on the table requires many physical things. The fruit has to exist and its representation in the visual cortex has to exist. Your spirit is simply aware of the representation. In order for that to happen, the physical neuronal circuits in the cortex that comprise the representation must be activated. This is the physical part of the attention phenomenon. I am saying that all of these physical circuits and activations that are related to the recognition and representation of an object in the brain are computable. Intelligence is a physical thing.
Another way to put it is this. Awareness is a yin-yang phenomenon, i.e., it requires a subject and an object. The subject is the spirit that is in the brain and the object is an activated physical representation in the brain.
All one needs to do in order to create machine intelligence is to emulate the neuronal circuits in the brain. The machine will not have conscious awareness of anything but it will act as if it did. Why? Because all the knowledge and the circuits are just physical stuff that can be emulated. The machines will have goals and will try to accomplish their goals intelligently. They will behave according to well-known psychological principles of classical and operant conditioning.
In conclusion, I again predict the arrival, in the not too distant future, of machines that are uncannily and even frighteningly intelligent. And I mean it in the same sense that humans are intelligent with the exception that the machines will not be conscious.
It’s coming. Wait for it.
Mung: We’re not talking about people, we’re talking about a computer and whether the computer makes a choice to play or not play a game of chess and if so, how that choice is made.
No. We’re talking about whether computers make choices. You pointed to a case where a computer may have no choice. The answer is that people are often limited in their choices also.
Nonetheless, there’s no reason a computer may not be able to choose to play or not play. To be relevant, the choice has to be based on some sort of external constraint, as is the case with humans, perhaps as a gambling decision, or due to limitations of resources.
I have no problems at all with the physical and algorithmic part of conscious processes. I am well aware of its importance. That is the easy problem, as described by Chalmers. Easy, but not too easy. And important.
But I do have problems with this:
“Intelligence is a physical thing.”
“All one needs to do in order to create machine intelligence is to emulate the neuronal circuits in the brain. The machine will not have conscious awareness of anything but it will act as if it did. Why? Because all the knowledge and the circuits are just physical stuff that can be emulated.”
Because the consciousness which perceives the forms “prepared” by the brain is not a passive component. It perceives, it understands, and it reacts. If you take the consciousness away, the process is no longer the same, and the results are no longer the same.
Algorithms are “intelligent” in the sense that a conscious agent has designed them intelligently. They are, in a sense, “frozen” intelligence.
Now, frozen intelligence can do many things, but it cannot do everything that “conscious” intelligence can do.
For example, it cannot generate new original complex functional information.
Why? Because the conscious recognition of meaning, and reaction to that recognition in the form of original (free) output to the brain, are IMO fundamental components of the process.
“It’s coming. Wait for it.”
I will wait. But I am not holding my breath.
gpuccio: Algorithms are “intelligent” in the sense that a conscious agent has designed them intelligently. They are, in a sense, “frozen” intelligence.
Neural networks, on the other hand, are not frozen, but learn from their interaction with the environment.
gpuccio: For example, it cannot generate new original complex functional information.
Computers can find novel solutions to problems, for instance, in chess. It’s original, complex, and within the world of chess, functional.
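The "not frozen" claim about neural networks can be illustrated with the smallest possible example: a perceptron whose weights change with experience as it learns the AND function from examples. The network, data, and learning rate are illustrative; note that the learning rule itself is still fixed in advance by the designer, which is the counterpoint raised immediately below.

```python
# Minimal perceptron learning AND from examples: the weights are shaped by
# interaction with the data, but the update rule is designed and fixed.
def train_perceptron(data, epochs=20, lr=1):
    w0, w1, b = 0, 0, 0
    for _ in range(epochs):
        for (x0, x1), target in data:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out          # classic perceptron error signal
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

# After training, the learned weights reproduce AND on all four inputs.
assert [predict(x0, x1) for (x0, x1), _ in AND] == [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron convergence theorem guarantees the weights settle within these 20 epochs; what the network "learned" was still entirely bounded by the designed architecture and rule.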
Neural networks are intelligently designed.
Computers find only what they are programmed to find.