Uncommon Descent Serving The Intelligent Design Community

Eric Holloway: Strong Artificial Intelligence Must Be Possible! Really…?


He argues that many arguments for strong artificial intelligence depend on an ideological commitment to explicit, unproven theories about the universe:

If we define artificial intelligence as a very trivial form of algorithmic intelligence, which we have called regurgence, then it is necessarily true as a theoretical construct, although it may be practically impossible. On the other hand, if we rely on a compression interpretation of intelligence, then it is no longer necessarily true. It may still not be practically possible, although it may seem the best hypothesis. Then we examined whether the idea is falsifiable, and it turns out algorithmic intelligence can be falsified via the limitations of algorithms such as the halting problem. In conclusion, if the human mind passes the limitations of algorithms, then the mind cannot be an algorithm, and artificial intelligence is impossible. A couple of pieces of evidence offered in this regard are the issues in software development and the history of human innovation. Not only is it valid to ask whether artificial intelligence is impossible but the argument can be pursued on a scientific basis with quantifiable, empirical evidence. More.

Eric Holloway has a Ph.D. in Electrical & Computer Engineering from Baylor University. He is a Captain in the United States Air Force, where he has served in the US and Afghanistan. He is the co-editor of the book Naturalism and Its Alternatives in Scientific Methodologies. Dr. Holloway is an Associate Fellow of the Walter Bradley Center for Natural and Artificial Intelligence.

Also by Eric Holloway: Will artificial intelligence design artificial superintelligence?

Artificial intelligence is impossible

and

Human intelligence as a Halting Oracle

Also: Why I Doubt That AI Can Match the Human Mind Jonathan Bartlett: Computers are exclusively theorem generators, while humans appear to be axiom generators

and

How can consciousness be a material thing? Materialist philosophers espouse this improbable idea because they face starkly limited choices in how to view consciousness (Denyse O’Leary)

Note: Many consider the theory of artificial intelligence a foregone conclusion due to materialism, and believe it is just up to the computer scientists to figure out the details. But what if materialism is not the only game in town? Discover the exciting new scientific frontier of methodological holism in the new journal Communications of the Blyth Institute.

Follow UD News at Twitter!

Comments
"In the 1980s, fifth-generation languages were considered to be the way of the future, and some predicted that they would replace all other languages for system development, with the exception of low-level languages. Most notably, from 1982 to 1993, Japan put much research and money into their fifth-generation computer systems project, hoping to design a massive computer network of machines using these tools. However, as larger programs were built, the flaws of the approach became more apparent. It turns out that, given a set of constraints defining a particular problem, deriving an efficient algorithm to solve it is a very difficult problem in itself. This crucial step cannot yet be automated and still requires the insight of a human programmer." https://en.wikipedia.org/wiki/Fifth-generation_programming_language
EricMH
March 1, 2019 at 10:43 AM PST
Yes, MG is correct. Non-deterministic Turing machines are the most powerful form of computation possible, and they can be emulated by deterministic Turing machines. Every other form of computation falls between these two extremes. Since both extremes are limited by the halting problem, so is every form of computation.
EricMH
March 1, 2019 at 09:41 AM PST
MG, significant point: that potential for emulation brings out functional equivalence, and thus exposure to the halting problem. This then points to how minded, rationally and morally governed agency is radically different from the action of a computational substrate. This in turn points to issues of distinct identity, thus also differing orders of being. Which also brings up our world-embedded abstract objects, starting with those of structure and quantity. KF
kairosfocus
February 28, 2019 at 11:25 PM PST
Any parallel computational device can be simulated by a single deterministic Turing machine (albeit with exponential or worse time slow-down). Same with quantum computers and non-deterministic Turing machines. Thus parallel computation, quantum computation, and non-deterministic Turing machines are all subject to the Halting Problem.
math guy
February 28, 2019 at 09:36 PM PST
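math guy's simulation argument can be sketched concretely. The toy Python below deterministically explores every branch of a nondeterministic transition relation by breadth-first search, making the exponential blow-up visible as a growing frontier; the transition-table encoding and the depth cutoff are illustrative choices of mine, not anything from the comment:

```python
from collections import deque

def nd_accepts(transitions, start, accept, max_depth=50):
    """Deterministically simulate a nondeterministic machine:
    explore every computation branch breadth-first. Correct, but
    the frontier can grow exponentially in the branching factor."""
    frontier = deque([(start, 0)])
    seen = set()
    while frontier:
        state, depth = frontier.popleft()
        if state == accept:
            return True          # some branch reaches the accept state
        if depth >= max_depth or state in seen:
            continue             # prune revisits and overly deep branches
        seen.add(state)
        for nxt in transitions.get(state, ()):
            frontier.append((nxt, depth + 1))
    return False                 # no branch accepts
```

Because the deterministic search visits exactly the branches the nondeterministic machine could take, any limit on the deterministic simulator (such as undecidability of unbounded halting) carries over.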
FF, in interactive, long-lived programs, the called modules that do the actual processing do need to go through from a start to an end, i.e. they are algorithmic. Reaching a reasonable finitely remote end (or else detecting and reporting a fault condition, thus imposing a halt on error) is linked to reliably achieving the desired outcome on a given input and start point. Hence the definition of algorithms as FINITE step-by-step, goal-directed procedures. Just a thought. KF PS: I note from Wolfram MathWorld:
Halting Problem: The determination of whether a Turing machine will come to a halt given a particular input program. The halting problem is solvable for machines with less than four states. However, the four-state case is open, and the five-state case is almost certainly unsolvable due to the fact that it includes machines iterating Collatz-like congruential functions, and such specific problems are currently open. The problem of whether a general Turing machine halts is undecidable, as first proved by Turing (Wolfram 2002, pp. 1136-1138). [Weisstein, Eric W. "Halting Problem." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/HaltingProblem.html]
kairosfocus
February 27, 2019 at 05:38 PM PST
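The MathWorld excerpt's mention of Collatz-like functions illustrates the gap neatly: halting within a fixed step budget is decidable by brute force, while the unbounded question for the general Collatz map remains open. A minimal sketch (the step bound is an arbitrary illustrative choice of mine):

```python
def collatz_halts_within(n, max_steps=1000):
    """Decide whether the Collatz iteration from n reaches 1 within
    max_steps steps. This bounded question is trivially computable;
    whether the iteration reaches 1 for every n, with no bound, is
    a famous open problem."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return n == 1
```

For example, `collatz_halts_within(27)` is True under the default budget (27 takes 111 steps to reach 1), but the same call with `max_steps=10` returns False, showing how the answer for a bounded check can differ from the unbounded truth.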
@FF, look up non-deterministic Turing machines.
EricMH
February 27, 2019 at 03:57 PM PST
@EricMH, The halting problem simply says that it's impossible to logically determine whether or not some algorithmic programs (i.e., sequential programs running on a special machine called the Turing machine) will end. A Turing program only has one entry point and one output point. Not all machines are Turing machines. The brain, for example, is massively parallel (not sequential) and thus its functioning has nothing to do with the halting problem. In fact, some programs (e.g., airline reservations, stock markets, etc.) are designed not to end at all. The halting problem is certainly not a limitation to intelligence. Whoever says that it is does not understand the halting problem or computers. But I'll leave it at that. https://en.wikipedia.org/wiki/Halting_problem
FourFaces
February 27, 2019 at 03:32 PM PST
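For readers who want the theorem itself rather than the back-and-forth: the halting problem's undecidability rests on a short diagonal construction, which fits in a few lines of Python. Here `halts` stands for a purely hypothetical decider (no such total function can actually exist); the names are mine, for illustration only:

```python
def make_diagonal(halts):
    """Given any claimed halting decider halts(f) -> bool, build a
    function the decider must misjudge on itself (the classic
    diagonalization behind Turing's proof)."""
    def diagonal():
        if halts(diagonal):
            while True:       # decider said "halts": loop forever
                pass
        return "halted"       # decider said "loops": halt at once
    return diagonal

# Any decider that answers "does not halt" is immediately refuted:
claims_nothing_halts = lambda f: False
d = make_diagonal(claims_nothing_halts)
# d() returns, even though the decider said it would loop.
```

Whatever a purported decider answers about `diagonal`, the answer is wrong: if it says "halts", `diagonal` loops; if it says "loops", `diagonal` halts. This is why no general-purpose `halts` can be written.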
@FF, not sure I follow. The halting problem applies to all forms of computation, deterministic and non-deterministic. And it turns out many problems of interest are not mechanically solvable due to the halting problem. Programming in particular. We cannot completely automate the activity of programming due to the halting problem. This was the original goal of 5th generation languages: all the programmer needed to do was specify the problem constraints, and the compiler would generate the code. However, this is not generally possible, and only works within highly limited domains.
EricMH
February 27, 2019 at 03:09 PM PST
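The difficulty EricMH describes can be made concrete. A "specify the constraints, let the machine find the answer" solver is easy to write; what cannot be automated in general is turning the specification into an *efficient* algorithm. A toy sketch (the function name and example constraints are illustrative assumptions of mine):

```python
from itertools import product

def solve(constraints, domain, n_vars):
    """Naive declarative solver: the caller states only what a
    solution must satisfy; the machine checks every candidate.
    Correct, but exponential in n_vars -- which is why deriving an
    efficient algorithm from a specification is the hard part."""
    for candidate in product(domain, repeat=n_vars):
        if all(c(candidate) for c in constraints):
            return candidate
    return None

# Specify a problem, not an algorithm: two numbers with sum 5 and product 6.
constraints = [lambda v: v[0] + v[1] == 5,
               lambda v: v[0] * v[1] == 6]
```

Here `solve(constraints, range(10), 2)` finds `(2, 3)` by brute force; a human programmer would instead notice the pair solves a quadratic and compute it directly, which is the insight the Wikipedia excerpt says cannot yet be automated.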
@DaveS, it depends what you consider to be the processing power of the human mind. People assume the entire brain is involved, which entails an enormous processing power. However, our conscious ability to process information is only about 60 bits per second, which is vastly lower than even a pocket calculator. So, on one measurement, current computers already are vastly more powerful than the human brain, yet struggle to match human performance even in highly constrained domains. This suggests we already have clear evidence there is a fundamental difference between mind and computer.
EricMH
February 27, 2019 at 01:50 PM PST
The halting problem is only a theoretical problem for sequential or algorithmic computers and is not a problem for parallel systems. Heck, it's not even a limiting problem for ordinary computers since computers are everywhere and doing fine. Machines will never be conscious (you need a soul for that) but the idea that we will not have highly intelligent machines that can perform almost every task humans can perform is nonsense. Intelligence is a cause/effect physical phenomenon. It does not come from the soul. The soul just uses intelligent brains to achieve its goals. The brain learns to sense the world on its own, by physical processes. We will give future intelligent machines goals and they will work to achieve them as well as humans can.
FourFaces
February 27, 2019 at 01:34 PM PST
EricMH, If the gap persists when we have hardware that is comparable to the human brain, then I would say the anti-strong-AI position would be on more solid ground.
daveS
February 27, 2019 at 12:55 PM PST
@DaveS, the question becomes: at what point does an enormous, persistent gap indicate there is no path from here to there? I would say that without a testable alternative theory, it will always be unsatisfactory to say the gap is unbridgeable. This is the same sort of problem ID is currently addressing. Lots of success showing there is a huge gap that Darwinian evolution cannot explain, and ID 3.0 is looking at alternate empirical theories, such as Winston's dependency graph.
EricMH
February 27, 2019 at 12:28 PM PST
EricMH,
So, even with weak AI it will never be indistinguishable from humans.
I guess we shall see ...
daveS
February 27, 2019 at 11:06 AM PST
@BA77, a lot of great points, and I'd like to address the 'creating information' point. We can prove that no stochastic process (i.e. computer + randomness) is expected to create algorithmic mutual information between two bitstrings X and Y. For example, we can say X is a stochastic process and Y is the abstract body of mathematical facts. Under its own powers, X will never increase in information about Y. So, if there is increase, it must come from outside of X, i.e. a non-stochastic process. The history of human progress is a history of the increased embedding of mathematical facts in the fabric of physical reality. This means that the human mind cannot itself be entirely reducible to a stochastic process. Somehow it has access to a power beyond computation in order for innovation to happen. The same can be said about cosmic and biological history.
EricMH
February 27, 2019 at 10:46 AM PST
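True algorithmic mutual information is uncomputable, but in the spirit of the article's "compression interpretation" it can be crudely approximated with a real compressor. The sketch below is an illustration of mine along the lines of the normalized-compression-distance idea, using zlib's compressed length as a computable stand-in for Kolmogorov complexity; it is not anything from the comment above:

```python
import zlib

def csize(data: bytes) -> int:
    # Compressed length as a computable stand-in for K(data).
    return len(zlib.compress(data, 9))

def mutual_info_proxy(x: bytes, y: bytes) -> int:
    """Approximate algorithmic mutual information I(x:y) by
    C(x) + C(y) - C(xy): large when knowing one string helps
    compress the other, near zero when they are unrelated."""
    return csize(x) + csize(y) - csize(x + y)
```

For a highly compressible string paired with itself the proxy is clearly positive, since the second copy costs the compressor almost nothing once the first has been seen; the proxy is only a rough heuristic, as zlib is far weaker than an ideal compressor.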
"Now, leaving aside the question of whether the mind and consciousness are material or immaterial, it is clear that whatever the case may be, all the observable aspects of the mind intersect with the physical world and create a physical effect. Thus, an artificial intelligence computer program that reproduces the physical phenomena of the human mind is possible. We will call this kind of artificial intelligence “algorithmic intelligence.” In fact, based on the preceding premises, algorithmic intelligence as just defined is necessarily true. ... In conclusion, if the human mind passes the limitations of algorithms, then the mind cannot be an algorithm, and artificial intelligence is impossible. A couple of pieces of evidence offered in this regard are the issues in software development and the history of human innovation. Not only is it valid to ask whether artificial intelligence is impossible but the argument can be pursued on a scientific basis with quantifiable, empirical evidence." The mind is not the brain, and is not merely what the brain as a biological computer can do (which is execute algorithms), for other reasons as well, for instance the now-famous "Hard Problem" of consciousness formulated by Chalmers. This has no solution at present or in the foreseeable future. The problem is the fact that the properties of consciousness and subjective experience, like thinking, feeling, perceiving (qualia), willing and intentionality (agency), etc., are in an entirely different (higher) existential realm than the properties of matter and energy and space. They have no length, width, depth, mass, charge, velocity, etc., and therefore are not physical, and are neither the physical brain nor its neurological processes, including the execution of algorithms. This other, higher realm obstinately eludes any scientific analysis. There is also the matter of human creativity, especially the extreme human creativity exhibited by some geniuses.
This brings vast amounts of new complex organized information into the world in a physical form that was not intrinsic in the former states of matter. The ability to do that seems to be exclusively allocated to sentient intelligence. From a letter written by Mozart: "When I feel well and in a good humor, or when I am taking a drive or walking after a good meal, or in the night when I cannot sleep, thoughts crowd into my mind as easily as you could wish. Whence and how do they come? I do not know and I have nothing to do with it. Those which please me, I keep in my head and hum them; at least others have told me that I do so. Once I have my theme, another melody comes, linking itself to the first one, in accordance with the needs of the composition as a whole: the counterpoint, the part of each instrument, and all these melodic fragments at last produce the entire work." From another letter: "Then my soul is on fire with inspiration, if however nothing occurs to distract my attention. The work grows; I keep expanding it, conceiving it more and more clearly until I have the entire composition finished in my head though it may be long… It does not come to me successively, with its various parts worked out in detail, as they will be later on, but it is in its entirety that my imagination lets me hear it." I wonder how poor Mozart would have reacted to a materialist skeptic telling him that these experiences were merely a mechanistic, deterministic, causal-chain sort of process going on in his neurons (imagine a vast machine of clockwork gears and levers), leavened by occasional random discharges: the execution of complex algorithms. He probably would have laughed. Since the mind is demonstrably not merely the execution of algorithms (which is what AI can do), it is extremely unlikely that AI will ever exhibit mind, or consciousness.
doubter
February 27, 2019 at 10:44 AM PST
Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson
Excerpt: Chaitin’s Algorithmic Information Theory shows that information is conserved under formal mathematical operations and, equivalently, under computer operations. This conservation law puts a new perspective on many familiar problems related to artificial intelligence. For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information.
http://cires.colorado.edu/~doug/philosophy/info8.pdf

The danger of artificial stupidity - Saturday, 28 February 2015
“Computers lack mathematical insight: in his book The Emperor’s New Mind, the Oxford mathematical physicist Sir Roger Penrose deployed Gödel’s first incompleteness theorem to argue that, in general, the way mathematicians provide their “unassailable demonstrations” of the truth of certain mathematical assertions is fundamentally non-algorithmic and non-computational”
http://machineslikeus.com/news/danger-artificial-stupidity

The mathematical world - James Franklin - 7 April 2014
Excerpt: the intellect (is) immaterial and immortal. If today’s naturalists do not wish to agree with that, there is a challenge for them. ‘Don’t tell me, show me’: build an artificial intelligence system that imitates genuine mathematical insight. There seem to be no promising plans on the drawing board.,,, - James Franklin is professor of mathematics at the University of New South Wales in Sydney.
http://aeon.co/magazine/world-views/what-is-left-for-mathematics-to-be-about/

Robert Marks: Some Things Computers Will Never Do: Nonalgorithmic Creativity and Unknowability - video
https://www.youtube.com/watch?v=Cm0s7ag3SEc

The Turing Test Is Dead. Long Live the Lovelace Test. Robert J. Marks II – July 3, 2014
Excerpt: Here are a few other statements expressing doubt about the computer’s ability to create Strong AI.
“…no operation performed by a computer can create new information.” - Douglas G. Robertson
“The [computing] machine does not create any new information, but it performs a very valuable transformation of known information.” - Leon Brillouin
“Either mathematics is too big for the human mind or the human mind is more than a machine.” - Kurt Godel
and, of course, my favorite:
“Computers are no more able to create information than iPods are capable of creating music.” - Robert J. Marks II
The limitations invoked by the law of conservation of information in computer programming have been a fundamental topic of investigation by Winston Ewert, William Dembski and me at the Evolutionary Informatics Lab. We have successfully and repeatedly debunked claims that computer programs simulating evolution are capable of generating information any greater than that intended by the programmer.
https://evolutionnews.org/2014/07/the_turing_test_1/
bornagain77
February 27, 2019 at 10:37 AM PST
@DaveS back to your original point, if Johnnyb and I are correct that humans are able to create consistent axioms, then there will forever be a performance gap between humans and computers. And, this performance gap will always be enormous. For example: https://mindmatters.ai/2019/02/the-brain-exceeds-the-most-powerful-computers-in-efficiency/ So, even with weak AI it will never be indistinguishable from humans.
EricMH
February 27, 2019 at 10:30 AM PST
EricMH, Yes, I would certainly expect them to do that.
daveS
February 27, 2019 at 09:32 AM PST
@DaveS, even Google uses internal crowdsourcing to train its models.
EricMH
February 27, 2019 at 09:25 AM PST
EricMH, That's a possibility. That sounds like a short step from the call centers Western corporations set up in India, etc. On the other hand, I'm wondering if such an arrangement would be profitable or even necessary, given that companies such as Google are able to freely collect data from their customers, who are using their products voluntarily.
daveS
February 27, 2019 at 08:29 AM PST
@DaveS, rather, I predict what we will call "indistinguishable weak AI" will actually be a computerized storefront on a bunch of human overseas workers. And it still won't be indistinguishable.
EricMH
February 27, 2019 at 08:17 AM PST
That is what kids are for: capitols.
ET
February 27, 2019 at 06:12 AM PST
ET, A fair point. On the other hand, if someone asked me what the capitol of North Dakota is, I might also be at a loss if my internet connection is down.
daveS
February 27, 2019 at 06:04 AM PST
Siri will never pass for a human when the internet is down.
ET
February 27, 2019 at 05:55 AM PST
I'll go out on a limb and make a prediction: while many computer science workers believe strong AI is possible, they will never achieve a breakthrough such that from that point forward, it will be clear that computers are capable of strong AI. However, in the meantime, weak AI will be gradually refined to the point where it's indistinguishable from human intelligence. I don't know that we'll ever have human-like androids walking around among us, as that might be super-expensive and perhaps no company/government would be willing to make such a large investment. However, I think personal assistants such as Siri will develop to the point where they can pass for human (which will create its own set of problems).
daveS
February 27, 2019 at 05:46 AM PST

Leave a Reply