Uncommon Descent Serving The Intelligent Design Community

When will a computer nag you even more irritatingly than … and more!


Help this guy. He wants to know when artificial intelligence will surpass human intelligence. Here are his numbers so far:

A very simply crafted poll I'm asking a few friends, to gain a better perspective on the time-frame for when we may see greater-than-human level AI. Results posted below… if you wish to participate, email me (bruce-at-novamente.net) an answer for the following:

[ ] 2010-20
[ ] 2020-30
[ ] 2030-50
[ ] 2050-70
[ ] 2070-2100
[ ] Beyond 2100
[ ] Never
[ ] Prefer not to make predictions
[ ] Other: __

He recounts, "Many people have replied Never, so I've separated this answer from the replies and have added it to the survey results (above). – Bruce K." Write to him here: bruce-at-novamente.net, doing the usual substitution of @ for -at-.

Meanwhile, lots of fun at Overwhelming Evidence:

Salvo! A pop science mag that rocks – or maybe shocks! Have you ever imagined a pop sci mag without materialism? Well, don’t try. Go have a look ….

Is your mind just a buzz of neurons in your brain? Some neuroscientists do not think so. Here’s where you can find out why.

Comments
Mapou, your example of the DARPA Grand Challenge is very good in that it demonstrates the achievement of actual artificial computer intelligence (at least in limited applications). "Once you figure out the neural principles of perceptual and conceptual learning, reasoning and adaptation and write the code to implement them into a machine, then there is no limit to how much knowledge and experience the machine can get by interacting with its environment." This seems to be true. But some very important aspects of human conscious intelligence appear to be beyond such means of approximation or simulation; for instance, the use of language outside well-defined and well-researched contexts. But the main issue is the old "hard problem" of Chalmers, which separates any AI system, no matter how advanced, from human conscious intelligence. This is only confirmed by the mountain of evidence for an interactive dualist model of human consciousness.
magnan
February 10, 2008, 03:56 PM PDT
Denyse, The "circuitry" physicists and electrical engineers see in the brain is indeed floating in an ocean of chemicals they prefer not to see. Ray Kurzweil used to talk about scanning the structure of a brain and synthesizing a copy. It would have been interesting to see him try to make a brain work without its bath. As for nagging, emotional response is not restricted to the brain, and I don't think you can really annoy a person unless you have a body that knows what feels annoying.
Cloud of Unknowing
February 9, 2008, 06:19 PM PDT
I guess I must be one of the few Christians who believe that computers will be as intelligent as humans and even more so. I think it is incorrect to compare intelligence to current computer applications such as a word processor. I agree that most AI researchers are out to lunch, but the fact that a software application can be intelligent was recently demonstrated in the desert in California when a bunch of self-driving vehicles competed for the DARPA Grand Challenge. The cars had to drive around in a small town and obey all the normal California traffic laws without colliding. Sure, we're not going to see self-driving cabs in New York City anytime soon, but you can bet your life savings that it will happen at some time in the future. True general intelligence is not a matter of degree. You either have it or you don't. Once you figure out the neural principles of perceptual and conceptual learning, reasoning and adaptation and write the code to implement them into a machine, then there is no limit to how much knowledge and experience the machine can get by interacting with its environment. The same principles can be used to build an artificial brain with a million neurons or one with 100 billion neurons. So yes, Denyse, I believe that future AIs will know how to nag, and they will be very good at it if they are so conditioned. But why train them to nag? :-) Wouldn't it be much better if our future synthetic servants were motivated to learn to know us better than we know ourselves? That way they'll serve us better. Imagine your own personal housekeeper, chef and butler catering to your every desire! I can't wait.
Mapou
February 9, 2008, 05:52 PM PDT
magnan, Analytical IQ might be somewhat positively correlated with hubris. It amazes me that there are people who want to leapfrog the unsolved problems of artificial intelligence and go directly to artificial consciousness. On the other hand, Juergen Schmidhuber's "Goedel machines are self-referential universal problem solvers making provably optimal self-improvements." Schmidhuber asks and answers:
Is consciousness the ability to execute unlimited self-inspection and provably useful self-change (modulo limits of computability and provability)? Then the Goedel machine's Global Optimality Theorem provides the first technical justification of consciousness in the context of general problem solving.
I think this is just rumination. I doubt he set out to create artificial consciousness, and I doubt that many people focused on artificial consciousness find the Goedel machine satisfying.
Cloud of Unknowing
February 9, 2008, 05:47 PM PDT
Thanks to all, for all kind posts, and especially Cloud of Unknowing, who seems to discern cats. I have spent more of my life observing cats than I could give a good account of. I used to write a cat column for a local paper. A key difficulty with AI seems to me to be that the human brain is nothing like a computer. It is more like an ocean than a machine. Thus, a person who knows how to nag you can get on your nerves in a way that a computer can't. The computer can frustrate you (easily!), but the nag can keep you awake half the night merely by a change in tone of voice or an insinuation about something that happened ten years ago ... That stuff is dredged up from the ocean.
O'Leary
February 9, 2008, 05:06 PM PDT
Cloud of Unknowing, good posts. I also have been struck by the self-deluded arrogance of the AI movement. "A particularly stupid dream of some artificial intelligence fans is that great day when machines will outperform human geniuses in all areas of intellectual endeavor." I agree. But I think their dream of creating artificial conscious self-awareness in the human sense (or of transferring human consciousness into computers) is even more stupid, if that were possible. It is an interesting psychological phenomenon, since these people are anything but stupid in terms of measurable analytical IQ.
magnan
February 9, 2008, 04:15 PM PDT
A particularly stupid dream of some artificial intelligence fans is that great day when machines will outperform human geniuses in all areas of intellectual endeavor. People caught up in this dream lose sight of the fact that development of artificial genius may teach us nothing about general principles of artificial intelligence. That is, there are many domains in which we wish computational systems had the competence of average (intelligent) human beings, and the genius systems in other domains give us no insight whatsoever into how to build such systems. The most egregiously stupid work on artificial genius has been in the area of two-player games of perfect information (e.g., chess and checkers). People have labored for decades to eke every last bit of performance out of programs playing these games, and their methods of creating artificial genius have absolutely no application to anything but the games. In fact, the workhorse of chess and checkers programs, minimax game tree evaluation, does not work for most other games in the same class, due to something known as game tree pathology. A sad fact of computer chess playing is that you can plot the ratings of world champion programs over processing speed and do a pretty good job of fitting a line to the data points. In other words, the programs are succeeding by brute force. A sad fact of computer checkers players is that state-of-the-art programs benefit enormously from huge endgame databases. That is, when the number of pieces on the board is reduced to a certain number, the program has the game "solved." This has everything to do with game theory, and absolutely nothing to do with artificial intelligence. Things become polymorphously stupid when minimax tweakers and endgame database developers presume to comment on artificial intelligence, about which they, having been engrossed in development of custom artificial genius embodying no principles of intelligence, know nothing. 
To put it a bit differently, the intelligences you want to assist you with tasks in the workplace are on par with Dick and Jane, but the genius-obsessed will tell you authoritatively, on the basis of their experiences with the Rain Man, that they know all about the limits of AI, and that you're not going to get Dick and Jane. Part of why you don't see more AI in commercial software is that customers expect software not to make mistakes. But if an AI system has the competence of an ordinary human assistant, it makes mistakes. Chew on that thought.
Cloud of Unknowing
February 9, 2008, 03:22 PM PDT
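For readers unfamiliar with the technique, the minimax game-tree evaluation the comment above names can be sketched in a few lines. This is a generic illustration only; the `minimax` function and the tiny hard-coded game tree are hypothetical, not code from any actual chess or checkers program:

```python
# Minimax game-tree evaluation, the "workhorse" of chess and checkers
# programs described above. A generic sketch: the hard-coded toy tree
# exists only to give the function something to search.

def minimax(tree, maximizing=True):
    """Score a position: a leaf is a number (its static evaluation);
    an internal node is a list of child positions. The maximizing
    player picks the highest-scoring child, the minimizer the lowest."""
    if isinstance(tree, (int, float)):      # leaf: static evaluation
        return tree
    scores = [minimax(child, not maximizing) for child in tree]
    return max(scores) if maximizing else min(scores)

# Two plies: Max chooses a branch, then Min replies inside it.
toy_tree = [[3, 12], [2, 4], [14, 5]]
best = minimax(toy_tree)   # Max picks the branch whose worst case is best: 5
```

Real engines wrap this recursion in alpha-beta pruning, deeper search, and fast static evaluators, which is exactly the brute-force scaling with processor speed that the comment describes.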
In ways, a dog is more intelligent than you are. And so is a cat. The reason dogs can sniff out drugs and tumors is not that they have good noses, but that they have good olfactory bulbs in their brains. The average dog processes some information in ways that are beyond the capacity of all humans. When a cat leaps from the floor to the top of a door, it is a feat of stereo image processing and motor control beyond any human. We may describe some people as "cat-like" in their athleticism, but that is wishful thinking. So which are smarter, dogs or cats? What a ridiculous question! We lump together many qualitatively distinct attributes under the term "intelligence." The fact that we recognize intelligence in dogs and cats does not mean that there is some vital essence both species possess, perhaps in different degree. Dog intelligence and cat intelligence are different things. Anthropocentrism is a huge problem in discussions of intelligence. People tend to make dogs and cats into dumb (or perpetually childlike) people. But the species have fundamentally different equipment for cognitive processing. Similarly, computers have fundamentally different equipment for cognitive processing than do living organisms, and to expect a computer to be an intelligent dog or an intelligent cat or an intelligent human is absurd. With a different substrate of cognition, a computer would have to SIMULATE the cognitive processes of an animal. And we know that simulation requires huge computational resources. In fact, it is generally infeasible to have one computer do a large-scale simulation of another computer's operations at the hardware level. And a brain is much more complex than computer hardware. The upshot is that humans are not intelligent computers, and computers are not intelligent humans. Holding computers up to human standards is to make the mistake of believing that intelligence is one thing, and to engage in anthropocentrism. 
The objective of research in computational intelligence should be to make computers reach their potential as mechanical cognitive systems in interaction with people, not to make them into people simulators.
Cloud of Unknowing
February 9, 2008, 02:16 PM PDT
I already answered your questions. You are replying in bad faith, in my opinion. I no longer wish to discuss this subject with you. I feel no desire to persuade you or anybody in particular of my views. I was speaking only to Christians and others who believe in the existence of a human spirit. You are obviously not part of my audience. We’re wasting each other’s time. See you around.
Well, I am somewhat offended that you think that I was replying in bad faith. I think I asked fair and obvious questions. However, I understand perfectly if you do not wish to discuss your views with me.
hrun0815
February 9, 2008, 01:26 PM PDT
When I was in university pursuing a bachelor's in computer science, my professor of robotics said that we are so far from making anything close to human intelligence that it's ridiculous to speak of creating AI that surpasses human intelligence within the next century. Thirteen years later I still have to agree with him. It is a joke. Anyone predicting machine intelligence that surpasses the human brain is selling something you'd better not buy. I've worked as an informatics consultant for at least as many years since, including for the military (intelligence systems), and I've neither seen nor heard of anything anywhere near matching human intelligence, let alone surpassing it. Not even remotely close! Ever heard of science fiction? Nothing wrong with dreaming, but talk of 20-30 years in the future, on that issue, is ludicrous at this point.
Borne
February 9, 2008, 01:12 PM PDT
hrun0815, I already answered your questions. You are replying in bad faith, in my opinion. I no longer wish to discuss this subject with you. I feel no desire to persuade you or anybody in particular of my views. I was speaking only to Christians and others who believe in the existence of a human spirit. You are obviously not part of my audience. We're wasting each other's time. See you around.
Mapou
February 9, 2008, 12:52 PM PDT
It will come from Christians tinkering with neural networks. Forget symbolic reasoning programs because symbolic AI is unmitigated crackpottery from the last century, in my opinion.
Have you found that Christians are better AI researchers than non-Christians? Why do you think the religion of the researchers would matter?
Certainly, but unlike materialists who believe that human intelligence is entirely the result of the neurons of the brain, a Christian researcher believes something else is also needed. So the materialist will attempt to design and build a mechanism that is strictly biologically plausible, whereas the Christian is not handicapped by this self-imposed constraint. He or she is free to add a computer-aided process that is not necessarily biologically plausible.
So the Christian researcher would add something material to the neurological networks? Something driven by computers? Or would they add something non-material? If so, how? I honestly don't understand your thinking here.
In my opinion, an intelligent designer is necessarily a dualistic intelligence because part of the rationale behind intelligent design is that intelligence itself is irreducibly complex.
It goes without saying that, if a Christian organization is the first to build a human-level AI using Christian principles, a lot of materialists will have egg on their faces. A mountain of crow is waiting for them. I hear crow is not so bad with Tabasco sauce. :-D You must forgive me; I can't resist making fun of materialists.
What Christian principles are you talking about? Certainly, if the organization used, for example, prayer to create a human-like intelligence from a neurological network, then I would agree; materialists would end up with a lot of egg on their faces. But I just don't see how you envision Christian researchers creating AI by first using materialist approaches and then somehow adding some 'immaterial' approach.
hrun0815
February 9, 2008, 12:37 PM PDT
hrun0815: If you don't think it will come from materialists tinkering with neural networks or symbolic reasoning programs, where will it come from? It will come from Christians tinkering with neural networks. Forget symbolic reasoning programs because symbolic AI is unmitigated crackpottery from the last century, in my opinion. Will the 'ID based' researchers at a 'Christian research organization' tinker with something material to create the AI? Certainly, but unlike materialists who believe that human intelligence is entirely the result of the neurons of the brain, a Christian researcher believes something else is also needed. So the materialist will attempt to design and build a mechanism that is strictly biologically plausible, whereas the Christian is not handicapped by this self-imposed constraint. He or she is free to add a computer-aided process that is not necessarily biologically plausible. And what does the fact that certain things in nature are best explained by an intelligent agent bring to the table in creating AI? In my opinion, an intelligent designer is necessarily a dualistic intelligence, because part of the rationale behind intelligent design is that intelligence itself is irreducibly complex. Therefore, an intelligent designer could not have evolved via naturalistic means. Something else is needed. Don't get me wrong. This does not mean that ID needs to venture into this territory (dualism) in order to make a case for design. IC is a good enough argument. As a Christian, however, I am free to speculate and to deduce consequences and corollaries to my heart's content. Also, since a human spirit could not have evolved, and since a special brain is needed for spirit/brain interactions, it follows that a non-biologically plausible intelligence goes a long way toward proving design, especially with respect to the human brain.
It goes without saying that, if a Christian organization is the first to build a human-level AI using Christian principles, a lot of materialists will have egg on their faces. A mountain of crow is waiting for them. I hear crow is not so bad with Tabasco sauce. :-D You must forgive me; I can't resist making fun of materialists.
Mapou
February 9, 2008, 12:22 PM PDT
Be that as it may, and in spite of (or because of) my being a Christian, I happen to share their optimism that a computer will become more intelligent than a human being within the next 20 years or so. However, this event will not come about as a result of a bunch of materialists feverishly tinkering with simulated neural networks or symbolic reasoning programs. Materialists are lost in the wilderness, in my opinion, and they don't even realize it. I predict that true AI (not to be confused with conscious intelligence) will indeed happen, but it will come from an ID-based and, more than likely, a Christian research organization.
If you don't think it will come from materialists tinkering with neural networks or symbolic reasoning programs, where will it come from? Will the 'ID based' researchers at a 'Christian research organization' tinker with something material to create the AI? And what does the fact that certain things in nature are best explained by an intelligent agent bring to the table in creating AI? Could you explain?
hrun0815
February 9, 2008, 11:30 AM PDT
Denyse, Great find. I've read some of Novamente's stuff before. Those guys and gals are diehard materialists, the kind who seriously believe that a person can gain immortality by uploading the contents of his/her brain into a computer. It would be laughable if it weren't so pathetic. Be that as it may, and in spite of (or because of) my being a Christian, I happen to share their optimism that a computer will become more intelligent than a human being within the next 20 years or so. However, this event will not come about as a result of a bunch of materialists feverishly tinkering with simulated neural networks or symbolic reasoning programs. Materialists are lost in the wilderness, in my opinion, and they don't even realize it. I predict that true AI (not to be confused with conscious intelligence) will indeed happen, but it will come from an ID-based and, more than likely, a Christian research organization. I have reasons to believe that a general artificial intelligence is biologically implausible. The problem with biological neural networks is that they lack something that is essential to general intelligence: instantaneous random memory access (IRMA). I am convinced that this is the reason that dogs and chimps cannot be taught to play chess or checkers even though they have more than enough neural capacity to do so. Humans make up for the lack of an IRMA capability by having a spirit. Fortunately for us Christian AI fanatics, computers (unlike biological neural networks) do have an instantaneous random memory access capability, and this capability can be used to simulate a pseudo-spirit, so to speak, enough to be able to build a fully general, albeit unconscious, intelligence. Too bad for the Novamentes, Kurzweils and Jeff Hawkins of the world that they don't believe in a human spirit.
Mapou
February 9, 2008, 11:18 AM PDT
