
Artificial intelligence is no smarter than a rat?

[Image: controls for AI. Credit: Pbroks13]

But why not? From Rod Kackley at PJMedia:

The “AI Index” released by Stanford University, the Massachusetts Institute of Technology, SRI International, and other research organizations shows artificial intelligence produced in the United States is no smarter than a five-year-old. And Yann LeCun, the head of AI for Facebook, said even the most advanced artificial intelligence systems are no sharper on the uptake than vermin.

The term “artificial intelligence” has been around since the mid-1950s when science fiction writers fantasized about automobiles that drove themselves, computers that could see, and even phones smart enough to respond to spoken commands.

However, the Stanford-led group that produced the AI index is the first to attempt to create a baseline to measure the technological progress of artificial intelligence. More.

One problem is that animals become smarter in part by having desires and needs. How does one make artificial intelligence want anything?

Now, in fairness, vermin (especially rodents) are very smart, as your cat would tell you. In recognition of all he does for you in keeping them away, go straight to the fridge and cut off a decorously small hunk of prime rib for him, then leave it on a high shelf he can get up to but dogs can't... Possibly, in two centuries, artificial intelligence will have caught up with the system within which the cat lives and works every day. And he is no genius.

See also: At LiveScience: Will AI become conscious?

Comments
FourFaces @ 13: Cold logic alone would lead me to question whether or not I am the only truly conscious being, and to succumb to solipsism. After all, "I think, therefore I am" is the only absolute certainty I have. All the other human beings I interact with could be "zombies" with nothing really conscious going on inside; they just simulate it perfectly. Or everything else could be illusion: I could be a brain in a vat being fed a purely synthesized, simulated experience. It seems to me that this way leads to madness. As a pragmatic necessity we have to accept certain "common sense" human intuitions about the world, and I think one of those is that animals are beings with certain forms of consciousness, lesser in some ways than ours.

Scientists have spent their lives observing and studying animals, and have become certain that they have minds of a sort, with many human-like emotions. Behaviorism has long been discredited. From http://animalstudiesrepository.org/cgi/viewcontent.cgi?article=1002&context=acwp_asie:

"Konrad Lorenz states that higher mammals and birds have emotional experiences completely similar to ours, representative of the basic structure of all experiences for man and animal alike. Self-awareness of the emotional state is shown by the chimpanzee Lucy, who possessed learned gesture-language; during a session, when her foster mother went away, she ran to a window and signed to herself: "cry me, me cry". She was also able to appreciate jokes, and imitate them for her own amusement. Emotions can also lead to empathetic (altruistic) action, such as cases where dogs save little children, and dolphins support a sick or injured companion. Emotions must also underlie the "psychic" tracking of dogs who travel hundreds of miles to find their owners who moved to a place unknown(!) before the dog's arrival."

Also, to view all of the manifold evidence of animal (and even other human) consciousness as really just evidence of "zombie" simulations of awareness runs up against Ockham's Razor, the principle of parsimony. Why would nature create such an elaborate simulation? The alternative is a deliberate deception by some being or beings successfully simulating awareness in multitudes of creatures. It just doesn't seem to work.

I agree that AI robotic systems may come to simulate human consciousness well enough to convince at least some people and to pass the Turing test. But if self-aware consciousness can never really be achieved in such systems (as I believe), then these systems will presumably never have the potential to become the existential threat that some people are worried about.

doubter, December 29, 2017 at 02:01 PM PDT
doubter @ 12: "Looking into the eyes of my pet certainly gives a strong sense of there being some form of conscious awareness, desires, fears, etc."

It's an illusion, and a very dangerous one, in my opinion. Future intelligent machines will look and act even more conscious than animals, because they will understand us and converse with us intelligently and even emotionally. They will be conditioned to "empathize" with us, and many humans will swear, wrongly, that the machines are conscious.

"Human intuition here should not be ignored."

I think it should be. If there ever was a time when cold logic should be rigorously applied, now is that time.

FourFaces, December 29, 2017 at 12:07 PM PDT
FourFaces @ 10: If by "conscious" you mean self-awareness, research suggests this is shared with humans by at least some higher mammals: elephants, apes, chimpanzees, and dolphins. I think other mammals like dogs and cats certainly have some lower form of consciousness that doesn't seem to include a definite sense of selfhood. Looking into the eyes of my pet certainly gives a strong sense of there being some form of conscious awareness, desires, fears, etc. Human intuition here should not be ignored.

doubter, December 29, 2017 at 09:41 AM PDT
Clearly no one here has watched the Netflix series "Travelers". Their Director is an AI, and it knows all.

ET, December 29, 2017 at 08:17 AM PDT
doubter @ 9: You are correct. There is conscious desire and there is unconscious desire. I hope our languages will evolve special mechanisms that allow us to distinguish between conscious and unconscious processes. By the way, I personally do not believe that animals are conscious, regardless of our feelings toward them. And I don't believe computers will ever be conscious. The danger is that they will appear to be conscious, and many will swear that they are. It is easy to confuse intelligence with consciousness.

FourFaces, December 29, 2017 at 07:56 AM PDT
FourFaces @ 5: You speak of goal-oriented behavior. The key word is that this is purely behavior, not conscious desire and intention. DeepMind's Go-learning program desires nothing. Look at the definition of "desire": to long or crave for something that brings satisfaction or enjoyment. This is inseparable from consciousness. The Go program does have a designed internal goal, but it doesn't have an iota of the essence of consciousness: nothing like the inner feeling of "I want to do (something)", no emotional desire for anything, no intention to do anything. It's a complicated machine; it does what it is designed to do by its programmers.

doubter, December 28, 2017 at 04:18 PM PDT
FourFaces @ 7: "A superior intelligence did it a long time ago, no doubt about it."

Agreed.

Truth Will Set You Free, December 28, 2017 at 12:51 PM PDT
Truth @ 6: "Who or what did the genetic programming in animals?"

A superior intelligence did it a long time ago, no doubt about it. Animals simply inherit it. My understanding is that the genetic code can adapt to environmental pressures to a certain extent, and this allows the animal's brain to change its goals over time. This happens not only during the animal's own lifespan but can have effects lasting multiple generations. For example, the desire to suckle is pretty much gone after a "gestation" period.

FourFaces, December 28, 2017 at 11:17 AM PDT
FourFaces @ 5: Who or what did the genetic programming in animals?

Truth Will Set You Free, December 28, 2017 at 11:06 AM PDT
News @ 4: "FourFaces at 2, I am not sure I understand. The fact that correct answers are reinforced is not the same thing as, say, an animal desiring a reward."

It is the same thing. It is called goal-oriented behavior. Both the animal and the AI choose the path that leads to the greatest reward. DeepMind's Go-learning program desires (it has an internal goal) to win the game. That is the reward. It comes up with the best strategies that lead to getting the reward. Animals have genetically programmed goals.

FourFaces, December 28, 2017 at 10:48 AM PDT
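[For readers wondering what "choose the path that leads to the greatest reward" looks like in practice, here is a minimal sketch in Python. The action names and reward estimates are invented for illustration; this is only the bare selection step of goal-oriented behavior, not anyone's actual system.]

```python
# Goal-oriented behavior in its barest form: choose the action with the
# highest estimated reward. Action names and values are made up.
expected_reward = {"press_lever": 0.8, "groom": 0.1, "explore": 0.3}

best_action = max(expected_reward, key=expected_reward.get)
print(best_action)  # -> "press_lever", the highest-reward path
```

Where the estimates come from, of course, is the harder question; the reinforcement-learning sketch after comment 2 below shows one standard way they are learned.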
FourFaces at 2, I am not sure I understand. The fact that correct answers are reinforced is not the same thing as, say, an animal desiring a reward. A cat, for example, really wants a reward (a piece of chicken liver, maybe) and will teach himself tricks in order to get it. Or else he may get bored with chicken liver and just watch the squirrels in the backyard instead, no matter what you do or say... The motivation is internal and relates to his being alive. Maybe an artificial cat would be different...

See also: What can we hope to learn about animal minds? and Animal minds: In search of the minimal self

News, December 27, 2017 at 07:12 AM PDT
The earlier analog cybernetics research got MUCH closer to life, because it was starting from life. http://www.americanradiohistory.com/Archive-Practical-Electronics/60s/Practical-Electronics-1969-03.pdf

The British magazine Practical Electronics ran a year-long series of articles on building your own learning animal. The animal was designed to have a purpose, to enjoy useful work, and to satisfy hungers. Modern digital AI started by attempting to build a computer that thinks like people who think like computers. Since people who think like computers are not quite alive, it's not surprising that the research is slow. (Aside from the BASIC fact I've mentioned before: digital can't do feedback, therefore digital can't hope to mimic life.)

polistra, December 27, 2017 at 01:00 AM PDT
"One problem is that animals become smarter in part by having desires and needs. How does one make artificial intelligence want anything?"

As someone who does research in artificial general intelligence (AGI), I can assure you that giving motivation and goals to intelligent machines is not a big problem. It's called reinforcement learning. This is how Google's DeepMind gets its game-playing programs to learn to win.

Still, I would say that current AI systems are super idiot savants. Their expertise is in very narrow domains. An AI system that can beat you at chess will have no idea how to play tic-tac-toe. Worse, these systems have no idea that they are playing a game called chess, or even what chess or game-playing is. It is orders of magnitude easier to program a computer to play the Chinese game of Go than to build a robot that can walk into any generic kitchen and fix a breakfast plate of scrambled eggs with bacon, toast, and coffee. The biggest problem in AI is making a machine understand the world around it and display common sense. In this respect, a rat is orders of magnitude more intelligent than any AI system in existence. Nobody knows how to do it. Not yet.

FourFaces, December 26, 2017 at 05:28 PM PDT
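[For readers curious what "reinforcement learning" means concretely, here is a minimal sketch in Python: tabular Q-learning, the textbook version of the technique FourFaces names. It is not DeepMind's system; the corridor environment, reward values, and hyperparameters are all invented for illustration. The point is that the agent's entire "motivation" is a reward signal the designer chose.]

```python
# Toy tabular Q-learning on a 5-state corridor. The agent "wants" to reach
# state 4 only in the sense that the designer attached reward 1.0 to it.
# Environment, rewards, and hyperparameters are illustrative assumptions.
import random

N_STATES = 5          # states 0..4; reaching state 4 ends an episode
ACTIONS = [-1, +1]    # step left, step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; reward 1.0 is paid only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state):
    """Best-known action for a state, with random tie-breaking."""
    return max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))

for _ in range(500):  # training episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy heads toward the rewarded state from everywhere.
print([greedy(s) for s in range(N_STATES - 1)])
```

Run it and the printed policy is all +1s: every state's best-known action points toward the reward. Whether that constitutes "wanting" anything, as the thread above debates, is exactly the open question.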
"How does one make artificial intelligence want anything?"

Artificial things have no will. They respond to external cues through sophisticated algorithms installed in them by intelligent agents. They don't want things, they don't feel; they just sense and detect.

Dionisio, December 26, 2017 at 02:31 PM PDT
