Uncommon Descent Serving The Intelligent Design Community

Face it, your brain isn’t a computer


Though Gary Marcus tells us it is, in “Face It, Your Brain Is a Computer” at the New York Times:

… Finally, there is a popular argument that human brains are capable of generating emotions, whereas computers are not. But while computers as we know them clearly lack emotions, that fact itself doesn’t mean that emotions aren’t the product of computation. On the contrary, neural systems like the amygdala that modulate emotions appear to work in roughly the same way as the rest of the brain does, which is to say that they transmit signals and integrate information, and transform inputs into outputs. As any computer scientist will tell you, that’s pretty much what computers do.

Of course, whether the brain is a computer is partly a matter of definition. The brain is obviously not a Macintosh or a PC. And we humans may not have operating systems, either. But there are many different ways of building a computer.

The real payoff in subscribing to the idea of a brain as a computer would come from using that idea to profitably guide research. In an article last fall in the journal Science, two of my colleagues (Adam Marblestone of M.I.T. and Thomas Dean of Google) and I endeavored to do just that, suggesting that a particular kind of computer, known as the field programmable gate array, might offer a preliminary starting point for thinking about how the brain works.

That computers do not generate emotions is not a “popular argument”; it is a fact.

If neurons are akin to computer hardware, and behaviors are akin to the actions that a computer performs, computation is likely to be the glue that binds the two.

There is much that we don’t know about brains. But we do know that they aren’t magical. They are just exceptionally complex arrangements of matter. Airplanes may not fly like birds, but they are subject to the same forces of lift and drag. Likewise, there is no reason to think that brains are exempt from the laws of computation. If the heart is a biological pump, and the nose is a biological filter, the brain is a biological computer, a machine for processing information in lawful, systematic ways. More.

And Frankenstein is alive and well at the North Pole too.

Marcus needs to talk to David Gelernter:

Following on a Slate computer columnist’s assessment that artificial intelligence has sputtered, Yale computer science prof David Gelernter offers some thoughts on the closing of the scientific mind. Readers will appreciate his comments on the “punks, bullies, and hangers-on” who have been attacking philosopher Thomas Nagel for doubting Darwin:

The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy. More.

But he won’t.

See also: Why the human mind is hard to grasp (so to speak)

Follow UD News at Twitter!


Comments
Thanks for proofreading, Lincoln Phipps! Must have snatched the wrong name from the screen. Corrected.

News, July 8, 2015, 03:37 AM PST
F/N: My onward response to Marcus: https://uncommondescent.com/ethics/science-worldview-issues-and-society/answering-popperians-challenge-why-doesnt-someone-start-out-by-explaining-how-human-beings-generate-emotions-then-point-out-how-the-universality-of-computation-does-not-fit-that-explana/#comment-571284 KF

kairosfocus, July 8, 2015, 03:25 AM PST
Gérard DuBois is a French illustrator and produces images for the NYT, including an image for the cited article. Gary Marcus, a professor of psychology and neural science at New York University, actually wrote the article that O'Leary (NEWS) cites as being written by DuBois. Please correct the OP.

Lincoln Phipps, July 8, 2015, 12:25 AM PST
Virgil Cain: the premise was about limitations, not enumeration. We asked about a count, and you responded that it was via "science". But the question remains, how do you measure the human "problem space"?

Zachriel, July 6, 2015, 07:14 AM PST
Zachriel: That vague statement doesn’t entail an enumeration.

It is only vague to the unknowledgeable. And the premise was about limitations, not enumeration. To recap: Zachriel totally messed up what mjoels said, got caught and is now in full flail mode.

Virgil Cain, July 5, 2015, 10:19 AM PST
Virgil Cain: Via observation and experimentation, ie science. That vague statement doesn't entail an enumeration.

Zachriel, July 5, 2015, 09:49 AM PST
Zachriel: How do you count the problem space for humans, as opposed to lower organisms?

Via observation and experimentation, ie science. That is what has you confused.

Zachriel: By Darwinism, are you referring to the modern theory of evolution, or to Darwin’s original theory, or something else?

There isn't any "modern theory of evolution". Darwin tried but he didn't produce a scientific theory either.

Virgil Cain, July 5, 2015, 07:34 AM PST
Silver Asiatic: In the literal sense.

That's not even close to an answer. Take the definition of Darwinism, and then show in what manner it is "infinitely expandable, but limited in degree." By Darwinism, are you referring to the modern theory of evolution, or to Darwin's original theory, or something else? By infinitely expandable, do you mean the theory is infinitely expandable, or are you referring to the capabilities of evolution?

Zachriel, July 5, 2015, 07:07 AM PST
Z: In what sense is Darwinism “infinitely expandable, but limited in degree”?

In the literal sense.

Silver Asiatic, July 5, 2015, 06:58 AM PST
Virgil Cain: One (the human brain’s problem space) was infinitely expandable and the other (lower organisms) was limited in degree.

How do you count the problem space for humans, as opposed to lower organisms?

Zachriel, July 5, 2015, 05:49 AM PST
Mung: Darwinism in a nutshell.

In what sense is Darwinism "infinitely expandable, but limited in degree"?

Zachriel, July 5, 2015, 05:48 AM PST
Mung #39, Perfect!

Box, July 5, 2015, 01:07 AM PST
Zachriel: How can something be infinitely expandable, but limited in degree?

It was two different things, Zachriel. One (the human brain's problem space) was infinitely expandable and the other (lower organisms) was limited in degree. So yes, one thing can be infinite while another can be limited, and the two can have similarities. Things that make you go hmmmmm....

Virgil Cain, July 4, 2015, 08:10 PM PST
Mung 39: Q: How can something be infinitely expandable, but limited in degree? A: Darwinism in a nutshell. Post of the day, Mung!

anthropic, July 4, 2015, 07:39 PM PST
It’s also a fallacy to assume that computers can do things they are currently not capable of doing.

How do you determine what a computer is or is not capable of doing? Again, I would suggest that there are a number of things computers can do, but are currently incapable of doing because we haven't figured out how to program that capability yet. For example, part of what makes the internet so powerful is that systems can dynamically determine the closest route through a number of nodes in a network. The algorithm that makes this possible is called Dijkstra's algorithm. The earliest universal computers did not exhibit this capability. This is because Dijkstra's algorithm had yet to be developed. Yet, we knew this was possible, not because we actually achieved it, but because of the explanatory theory about how computers do what they do. In the same sense, the laws of physics are such that a digital computer can simulate any other physical system, not just another computer, with arbitrary precision.

Popperian, July 4, 2015, 03:01 PM PST
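[Editor's note: the Dijkstra's algorithm that Popperian mentions can be sketched in a few lines. This is the standard priority-queue formulation; the four-node network below is invented purely for illustration and is not from the discussion.]

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a weighted graph.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]  # (distance-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical network: routes A-B-D (cost 3) and A-C-D (cost 5)
net = {
    "A": [("B", 1), ("C", 4)],
    "B": [("D", 2)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(net, "A"))  # {'A': 0, 'B': 1, 'C': 4, 'D': 3}
```

This bears out the point in the comment: the hardware of the earliest universal computers could in principle run this, but the capability only appeared once the algorithm had been devised.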
kairosfocus:
Mapou, reasoning and common sense etc are not blindly mechanical causal chains (perhaps perturbed by some noise) such as are effected in an arithmetic-logic unit, ALU or a floating point unit, FPU. Instead, such are inherently based on insight into the ground-consequent relationship and broader heuristics that guide inference, hunches, sense of likelihood or significance of a sign etc. While we can mimic some aspects of such through sufficiently complex blends of algorithms — I have in mind so-called expert systems, these again are critically dependent on programming design and the structure and contents of data evaluated as knowledge and rules of inference, heuristics of “explanation” in response to query, etc. Such things of course are intelligently designed.
Of course, they are intelligently designed and so is the brain. But this is irrelevant to whether or not a machine can be just as intelligent as you and I. Your use of the word 'blind' to refer to mechanisms is erroneous. There is nothing blind about concurrent and sequential pattern detectors. The opposite is true. They are not blind to the sensory patterns that they detect.
From what you are saying, you have been developing a system capable of detecting characteristic patterns and locking to a target once acquired, resisting a fair degree of background noise or interference. Such is an achievement, one that is again functionally specific, complex, organised, information-rich — i.e. FSCO/I — and it is obviously intelligently designed. (BTW, note the military implications.)
Believe me, the implications, military and otherwise, have not escaped me.
I bring forward the FSCO/I point to underscore that AI systems as implemented fundamentally reveal their source in design. That is not crucial, what is is the difference between inherently blind mechanism and insight based rationality. Reduction to tokens used as symbols and stored in data structures then processed on mechanical step by step algorithms to yield programmed results through essentially mechanical cause-effect chains is not rational insight and inference. Nor is it responsible, rational freedom.
I disagree. Insight is a way of saying that some bits of knowledge are connected to some others. This is a normal characteristic of hierarchical knowledge systems. By the way, my learning algorithm is unsupervised, meaning that, unlike current deep learning programs, it does not require that a label or symbol be attached to the audio data. Rebel Speech is not an expert system. It is non-symbolic: no tokens, no symbols, no labels. Just sensory data.

Mapou, July 4, 2015, 11:49 AM PST
How can something be infinitely expandable, but limited in degree? Darwinism in a nutshell.

Mung, July 4, 2015, 09:05 AM PST
mjoels: Our brain (if we are really just meat computers) can expand its problem space infinitely... Even the smallest life can do it, albeit to a limited degree.

How can something be infinitely expandable, but limited in degree?

Querius: When a non-conscious machine of some kind such as a recording device is involved in a quantum experiment, unless the recording is observed/observable by a human, the wave function does not collapse, and the machine becomes entangled with the quantum experiment.

Nowadays, wave function collapse is normally analyzed as a case of quantum decoherence, which occurs with any macroscopic interaction or system with many degrees of freedom.

Zachriel, July 4, 2015, 06:54 AM PST
35# KF, not to put too fine a point on it: It is the Holy Spirit that coordinates the strands of our intelligence, producing intuition (infused knowledge), wisdom (infused understanding), etc. Christians know that it is the Holy Spirit who is the source of our prayers, when prayed aright, speaking according to the mind of God. However, there is sometimes a strange similitude between its action in a person's prayer and the movements of a sportsman when he is 'in the zone'. A further similitude springs to mind between a person having obtrusive thoughts and a person who is told on no account to think of the word that you are going to say to him. He will immediately apprehend the full meaning of the word, but will then be unable to prevent himself from reflecting on it, however briefly; the only difference being that bad, obtrusive thoughts are demonically inspired. It sounds 'hair-raising' (and usually is!), but we are subject to their promptings pretty much all the time, in a host of different ways, however appropriately or inappropriately we may respond. However, that initial, immediate apprehension, indeed intuiting, of the full meaning of the word in both cases is paralleled by the play of the sportsman when 'in the zone', and on occasions by a person while in his prayers. The coordination of heart and mind when praying is not always automatic and easy, particularly when tired, when the mind can wander off following the heart's discursive meanderings, instead of its leading and controlling the path of the thoughts of the heart. It seems that the way to remedy this distractedness is to pray faster.

I used to find it a little shocking that a priest could lead praying of the Rosary at what seemed to me an unseemly haste, but later discovered that one can not only 'get into the flow' at a mundane level, actually focusing the mind by reciting the words, but on occasions get into the sportsman's 'zone' by doing that, so that not only does one immediately, effortlessly and seamlessly apprehend the meanings of the words but can simultaneously reflect on the Mysteries relating to Christ's life, death, Resurrection and Ascension concerned. Almost as though, at the same time, one were a spectator. It doesn't always happen that way, but it's nice when it does.

Axel, July 4, 2015, 04:13 AM PST
Popperian, re:
why doesn’t someone start out by explaining how human beings generate emotions, then point out how the universality of computation does not fit that explanation. Effectively stating “It’s magic and computers are not magic” doesn’t cut it. Pushing the problem into an inexplicable mind that exists in an inexplicable realm doesn’t improve the problem.
Thanks for sharing your reflections (as opposed to the too common deadlocks on talking point games and linked typical fallacies that have become all too familiar . . . and informal fallacies are instructive on this matter . . . ), this always helps discussion move forward. Second, pardon an observation: your response inadvertently shows how you have become overly caught up in the Newtonian, clockwork vision of the world. Again, that reasoning by analogy or paradigmatic example -- even though misleading -- is instructive. My fundamental point is that reasoning as opposed to blindly mechanical computation inherently relies on insight into meaning and a sense of structured patterns that suggest connexions. For instance, many informal fallacies pivot on how emotions are deeply cognitive judgements that shift expectations and trigger protective responses. So, if someone diverts attention from the focal topic and sets up then soaks a strawman in ad hominems and ignites, the resulting fears and anger will shift context and will contribute to inviting dismissal of the original matter without serious evaluation. Thus the protective heuristics have been manipulated. Similarly, by shifting focus from the significance of insights and meaningful connexions to the scientific paradigm of Newtonian clockwork, then blending in the success of computer systems there is a shift away from a crucial difference that then leads to a reductionist, mechanistic tendency. The case of expert systems as was just discussed with Mapou is instructive:
reasoning and common sense etc are not blindly mechanical causal chains (perhaps perturbed by some noise) such as are effected in an arithmetic-logic unit, ALU or a floating point unit, FPU. Instead, such are inherently based on insight into the ground-consequent relationship and broader heuristics that guide inference, hunches, sense of likelihood or significance of a sign etc. While we can mimic some aspects of such through sufficiently complex blends of algorithms — I have in mind so-called expert systems, these again are critically dependent on programming design and the structure and contents of data evaluated as knowledge and rules of inference, heuristics of “explanation” in response to query, etc.
Notice, the motif of evaluation by comparison while noting key differences? Thus, the implication that analogies -- pivotal to inductive reasoning BTW -- are prone to being over-extended. We know per widespread experience that there are patterns in the world, and that such often can be extended from one case to another so if we think there is a significant similarity, we will extend. But this raises the question of implications of significant difference and adjusting, adapting or overturning the extension. Such thought is imaginative, active, inferential, defeasible but verifiable to the point of in some cases strong empirical reliability, and more, much more. It is inherently non-algorithmic, pivoting on meaning, judgement and insight. As I am aware of your problem with inductive reasoning (broad sense), I share Avi Sion's point:
We might . . . ask – can there be a world without any ‘uniformities’? A world of universal difference, with no two things the same in any respect whatever is unthinkable. Why? Because to so characterize the world would itself be an appeal to uniformity. A uniformly non-uniform world is a contradiction in terms. Therefore, we must admit some uniformity to exist in the world. The world need not be uniform throughout, for the principle of uniformity to apply. It suffices that some uniformity occurs. Given this degree of uniformity, however small, we logically can and must talk about generalization and particularization. There happens to be some ‘uniformities’; therefore, we have to take them into consideration in our construction of knowledge. The principle of uniformity is thus not a wacky notion, as Hume seems to imply . . . . The uniformity principle is not a generalization of generalization; it is not a statement guilty of circularity, as some critics contend. So what is it? Simply this: when we come upon some uniformity in our experience or thought, we may readily assume that uniformity to continue onward until and unless we find some evidence or reason that sets a limit to it. Why? Because in such case the assumption of uniformity already has a basis, whereas the contrary assumption of difference has not or not yet been found to have any. The generalization has some justification; whereas the particularization has none at all, it is an arbitrary assertion. It cannot be argued that we may equally assume the contrary assumption (i.e. the proposed particularization) on the basis that in past events of induction other contrary assumptions have turned out to be true (i.e. for which experiences or reasons have indeed been adduced) – for the simple reason that such a generalization from diverse past inductions is formally excluded by the fact that we know of many cases [[of inferred generalisations; try: "we can make mistakes in inductive generalisation . . . 
"] that have not been found worthy of particularization to date . . . . If we follow such sober inductive logic, devoid of irrational acts, we can be confident to have the best available conclusions in the present context of knowledge. We generalize when the facts allow it, and particularize when the facts necessitate it. We do not particularize out of context, or generalize against the evidence or when this would give rise to contradictions . . .[[Logical and Spiritual Reflections, BK I Hume's Problems with Induction, Ch 2 The principle of induction.]
We have a deep intuitive sense that there is order and organisation in our cosmos, which comes out in recognisable, stable and at least partly intelligible patterns that extend from one case to another. Mechanism, of course is one such, and explanation on mechanism is highly successful in certain limited spheres. But by the turn of C19, there were already signs of randomness at work and by C20 we had to reckon with the dynamics of randomness in physics. In quantum mechanics, this is now deeply embedded, many phenomena being inextricably stochastic. But reducing an irreducibly complex world to the pattern of mechanism with some room for chance, is not enough. The first fact of our existence is our self-aware, self-moved intelligent consciousness and interface with an external world using our bodies. This too is a reasonable pattern, one that we see in action with others who are as we are. From this we abstract themes such as intelligence, responsible freedom, agency, purpose and more, which we routinely use in understanding how we behave and the consequences when we act. What has happened in our time is that due to the prestige of science, mechanism based explanations have too often been allowed to displace the proper place for agent based explanations, the place for art and artifice. This has even been embedded in a dominant philosophy that too often unduly controls science: evolutionary materialism. There is even a panic, that if agency is allowed in the door, "demons" will be let loose and order and rationality go poof. This then often triggers fear, turf protection and linked locked in closed minded ideological irrationality. The simple fact that modern science arose from in the main Judaeo-Christian thought that perceived a world as designed in ways meant to point to its Author, through involving at some level simple and intelligible organising principles or laws, should give pause.
The phrase thinking God's [creative, organising and sustaining] thoughts after him should ring some bells. (This is too often suppressed in the way we are taught about the rise of modern science.) And of course, by way of opening the door to self-referential incoherence through demanding domination of mindedness by mechanism, evolutionary materialism falsifies itself. Haldane puts it in a nutshell:
"It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.” [["When I am dead," in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209.
So, the very terms you use: "how human beings generate emotions," is a giveaway. We do not so much generate emotions and other consciously aware states of being, we experience them. And, to recognise and respect that fact without reference to demands for mechanistic reduction is a legitimate start-point for reflection. All explanation is going to be finite and limited, so there will always be start-points. Starting from the realities of our interior-life experience is a good first point, and reflection on such shows that rationality itself (a requisite of doing science etc) crucially depends on insightful, purposeful responsible and rational freedom. That which undermines such will then be self-defeating, and should be put aside. Thus, the significance of Reppert's development of Haldane's point via Lewis:
. . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [[But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [[so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions
Trying to reduce this to blindly mechanistic physical cause-effect chains with perhaps some noise, is self-defeating. In short, start-points and contexts for reasoning count for a lot. KF

PS: Headlined: https://uncommondescent.com/ethics/science-worldview-issues-and-society/answering-popperians-challenge-why-doesnt-someone-start-out-by-explaining-how-human-beings-generate-emotions-then-point-out-how-the-universality-of-computation-does-not-fit-that-explana/

kairosfocus, July 4, 2015, 01:28 AM PST
Mapou, reasoning and common sense etc are not blindly mechanical causal chains (perhaps perturbed by some noise) such as are effected in an arithmetic-logic unit, ALU or a floating point unit, FPU. Instead, such are inherently based on insight into the ground-consequent relationship and broader heuristics that guide inference, hunches, sense of likelihood or significance of a sign etc. While we can mimic some aspects of such through sufficiently complex blends of algorithms -- I have in mind so-called expert systems, these again are critically dependent on programming design and the structure and contents of data evaluated as knowledge and rules of inference, heuristics of "explanation" in response to query, etc. Such things of course are intelligently designed. From what you are saying, you have been developing a system capable of detecting characteristic patterns and locking to a target once acquired, resisting a fair degree of background noise or interference. Such is an achievement, one that is again functionally specific, complex, organised, information-rich -- i.e. FSCO/I -- and it is obviously intelligently designed. (BTW, note the military implications.) I bring forward the FSCO/I point to underscore that AI systems as implemented fundamentally reveal their source in design. That is not crucial, what is is the difference between inherently blind mechanism and insight based rationality. Reduction to tokens used as symbols and stored in data structures then processed on mechanical step by step algorithms to yield programmed results through essentially mechanical cause-effect chains is not rational insight and inference. Nor is it responsible, rational freedom. I again draw attention to Reppert (And others beyond him):
. . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [[But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [[so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions
KF

kairosfocus, July 4, 2015, 12:30 AM PST
kairosfocus:
Computers are mechanisms that do not work through meaning and common sense.
I fully disagree with this. If you had said "Computers are unconscious mechanisms", I would have agreed. But meaning, reasoning and common sense are all cause-effect phenomena, which means that they are actually impossible without a mechanism. So there is no reason that these things cannot be emulated in a machine. IDists should stop resisting artificial intelligence. It's making ID look bad. Intelligence does not require consciousness or vice versa. Soon, we will have machines that are just as intelligent as we are or even more so.

On a slightly different tangent: In the not too distant future, I plan to release Rebel Speech, the first unsupervised machine learning program that can learn to recognize speech (or any other sound) as accurately as a human being, just by listening. In addition, it is able to focus on a given voice in a conversation while ignoring all others, thereby solving the cocktail party problem. Wait for it.

Mapou, July 3, 2015, 05:59 PM PST
Mapou, When a non-conscious machine of some kind such as a recording device is involved in a quantum experiment, unless the recording is observed/observable by a human, the wave function does not collapse, and the machine becomes entangled with the quantum experiment. Whether Schrödinger's cat, by staring intently at the radioactive particle that would set off the geiger counter, etc., can remain alive by employing the Quantum Zeno effect is unresolved and controversial. -Q

Querius, July 3, 2015, 05:44 PM PST
The answer is simple and infinitely complex at the same time. Our brain (if we are really just meat computers) can expand its problem space infinitely. We can identify and solve novel problems without having to be pre-programmed to do so. Even the smallest life can do it, albeit to a limited degree (antibiotic resistance, anyone?). No machine created can ever do that.

Machines are deterministic. You can't change that. No matter what, they have a finite set of outcomes based on their initial coding, that is spread across a specific spectrum of possible solution space. A computer can never defy its initial program. Neither can a TM. No model we currently have in mathematics can while bounded. That is impossible. The problem is that consciousness grows. It is not some static quantity you can fit inside a pre-defined box.

AI is junk science mostly. We can define loops and clever tricks to make it seem like a computer makes decisions or performs some action, but there is never any intention behind it; it is always deterministic and will always be bounded by its physical limitations. People that believe in AI believe in it because it fits in with their materialistic beliefs, not because there is any strong proof that it is possible. All the proof right now says it is not. Even the article only speculates that physics HINTS AT IT. While I believe that we might one day have bio-based computing algorithms, there is no possibility that a hunk of metal will come to be known to be alive or have any equivalent sort of existence with a human.

mjoels, July 3, 2015, 05:30 PM PST
Again, why doesn't someone start out by explaining how human beings generate emotions, then point out how the universality of computation does not fit that explanation. Effectivly stating "It's magic and computers are not magic doesn't cut it." Pushing the problem into an inexplicable mind hat exists in an inexplicable realm, doesn't improve the problem. Of course, no one wants to explain how human beings generate emotions. How could anyone since it's been divinely revealed that God did it and he is inexplicable, right? Computers, in the context of the article, are Universal Turing machines, not calculators. No one designed the first UTM with the goal of creating universally. Rather, we wanted a way to perform more accurate calculatons, quicker and more conveniently. Universality emerges from a specific repertoire of computations. It's one of those concrete examples where explicability resolves at a higher level that is quasi-independent. As for why we've stalled, see this article.Popperian
July 3, 2015, 04:50 PM PST
Popperian, computers are blindly mechanical, non-rational signal-processing devices. Responsible rational freedom and the associated conscious intelligence put us in an entirely different category, and we should note that. KF

PS: Perhaps this can help us open up thinking on the mind-brain-body issue, courtesy of Derek Smith: http://iose-gen.blogspot.com/2010/06/origin-of-mind-man-morals-etc.html#smth_mod

kairosfocus
July 3, 2015, 04:01 PM PST
Mapou, I think the underlying logical case is deeper than whether we have a Turing machine. Computers are mechanisms that do not work through meaning and common sense. They execute mechanical operations blindly on data, and so fall under GIGO, including getting into flailing loops or semantic blunders that go nowhere and just keep running until the power is externally switched off. As I have noted, computation is not contemplation. KF

kairosfocus
July 3, 2015, 03:50 PM PST
It is a fallacy that modern computers are Turing machines and are thus subject to the halting problem. This is the age of massively parallel computing and networks. Turing's ideas on this are irrelevant.

Mapou
July 3, 2015, 03:29 PM PST
Some thoughts on the difference between computers and humans, from Ashish Dalela's Gödel's Mistake: The Role of Meanings in Mathematics:
"Turing’s proof of the Halting Problem means that there are no formal procedures to distinguish programs that halt from those that don’t. This illustrates the contrast between computer programs and humans. Even an average intelligence human is unlikely to loop through the above instructions more than once. Humans would quickly detect a loop and stop even though there is no instruction to that effect. Humans are goal oriented and can see that looping is not taking them closer to the goal of solving a problem. A computer is not goal-oriented and has no way of knowing if it is getting closer to its goal. It knows how to execute instructions but has no clue about the computational ‘distance’ between a problem and its solution. When faced with an intractable problem, a computer would continue indefinitely on a line of approach that has been fed into it through programming. Human beings will likely alter their approach, try to solve the problem from multiple angles, and take the ideas and intuitions developed in one approach into another. They might bring unrelated ideas to bear upon the solution of a problem, which a computer will not. In case the problem isn’t solved, humans would stop attempting after a while, but the computer will not. In short, computers can never stop even when the problem is unsolvable and Turing formalized this in the Halting Problem. A problem might take a hundred years to solve, so it is worthwhile to know that the problem indeed has a solution before we spend a hundred years trying to solve it. It would be futile to spend a hundred years and then abort the attempt because the solution wasn’t found so far. Humans have the ability to abort intractable problems and Turing proved that this was impossible for a computer. The Halting Problem is an example of the kinds of unsolvable problems that Gödel’s theorem alludes to, but did not explicitly identify. 
The machine that attempts to answer such a question for a program that never halts will also run forever since coming to a stop means determining that the program being analyzed also comes to a halt."
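[Turing's argument referenced in the quote above can be sketched in a few lines. This is a hedged illustration: `halts` is the hypothetical decider whose existence the proof refutes; it is deliberately unimplementable, and all names here are invented for the example.]

```python
def halts(program, argument):
    """Hypothetical decider: would return True iff program(argument) halts.
    Turing's diagonal argument shows no such general decider can exist."""
    raise NotImplementedError("no general halting decider exists")

def paradox(program):
    # Do the opposite of what the decider predicts about program(program):
    # loop forever if it is predicted to halt, halt if predicted to loop.
    if halts(program, program):
        while True:
            pass
    return

# Asking whether paradox(paradox) halts contradicts either answer the
# decider could give, so halts() cannot be implemented for all programs.
```

This is the formal core of the quoted point that "computers can never stop even when the problem is unsolvable": no program can, in general, detect in advance that another computation is futile.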
http://www.ashishdalela.com/books/godels-mistake/

tarmaras
July 3, 2015, 02:09 PM PST
"So, it's a fallacy to assume just because computers don't do something that human beings currently do, they cannot."

It's also a fallacy to assume that computers can do things they are currently not capable of doing. Computers are limited to what humans design them to be as working electronics. That they will transcend their design and/or the limitations of electronics is kinda in the realm of fantasyland. Andrew

asauber
July 3, 2015, 08:57 AM PST
