
When we try to replace scholars with computers …


We can think we’ve invented the wheel when actually …

From Adam Kirsch at The New Republic:

Certainly, if we ask the data unsophisticated or banal questions, we will get only unsophisticated and banal answers. That is the lesson of Uncharted, in which Erez Aiden and Jean-Baptiste Michel play tricks with the Google Ngram Viewer. In an odd but revealing moment, they quote a list of things that publications said about their invention when it launched, including this: “Mother Jones hailed it as ‘perhaps the greatest timewaster in the history of the Internet.’” “Hailed” does not seem like quite the right word here, but Aiden and Michel don’t care: what matters is not the quality of the attention but the fact that “the interwebs were atwitter, and the Twitter was abuzz.”

The Google Ngram Viewer allows the user to search all of Google Books for strings of characters. This sounds like a powerful tool, but as Aiden and Michel put it through its paces, it turns out once again that the digital analysis of literature tells us what we already know rather than leading us in new directions. It is not surprising to learn, for instance, that the incidence in print of the name of any given year is most common in that year itself, so that more books containing “1950” were published in 1950 than in any other year. One reason this is not surprising is that all books’ copyright pages include the year of publication; but Aiden and Michel ignore this fact, which tends to nullify their conclusions about the “forgetting curve.” Once again, meta-knowledge—knowledge about the conditions of the data you are manipulating—proves to be crucial for understanding anything a computer tells you. Ask a badly phrased question and you get a meaningless answer.

At another point Aiden and Michel use the Ngram Viewer to document the suppression of certain names in German-language books published between 1933 and 1945. They show that banned artists such as Chagall and Beckmann virtually disappear from German books under the Nazis, and then rebound spectacularly after the war, as interest in their work revives. This is another example of data illustrating a truism rather than discovering a truth. After all, we wouldn’t think to search for those names in that time period unless we knew what we were going to find, and why; and the same holds true for the other examples of censorship that Aiden and Michel cite—the word “Tiananmen” in Chinese after 1989, for instance. The faux naïveté of some of these digital tools, their proud innocence of prior historical knowledge and literary interpretation, is partly responsible for the thinness of their findings.

Indeed, Aiden and Michel write that when they posed the same question about artists’ names to “a scholar from Yad Vashem,” she was able to predict exactly “which names would appear at which end of the curve. We didn’t give her access to our data or to our results, and we didn’t even tell her why we were asking. All she got from us was the list of names. Nevertheless, her answers agreed with ours the vast majority of the time.” Of course they did: she was a scholar! Aiden and Michel do not seem to recognize that this example, far from making the case for the usefulness of Ngrams, completely destroys it, by turning them into fancy reiterations of conventional wisdom. More.

There’s nothing wrong with Ngrams, but original ideas are exactly the sort of thing that, by their nature, cannot be automated. The iThinkbot isn’t going to work out any better than the iCarebot.
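For readers who want to poke at the tool Kirsch describes, the chart behind the Ngram Viewer can also be queried programmatically. The sketch below is illustrative only: it assumes the viewer's unofficial JSON endpoint (https://books.google.com/ngrams/json) and the query parameters visible in the public viewer's URL, none of which Google documents as a supported API.

```python
# Minimal sketch: reproduce the "1950" observation from the quoted review.
# Assumption: the Ngram Viewer's unofficial JSON endpoint and its parameter
# names (content, year_start, year_end, corpus, smoothing) behave as they do
# in the public viewer's URL. This is not a supported Google API.
import requests

def peak_year(phrase, start=1900, end=2000, corpus="en-2019"):
    """Return the year in which `phrase` is most frequent in Google Books."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrase,
            "year_start": start,
            "year_end": end,
            "corpus": corpus,
            "smoothing": 0,  # raw yearly frequencies, no moving average
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    if not data:
        return None
    series = data[0]["timeseries"]             # one relative frequency per year
    years = range(start, start + len(series))
    return max(zip(years, series), key=lambda pair: pair[1])[0]

if __name__ == "__main__":
    print(peak_year("1950"))  # expect 1950, for the copyright-page reason Kirsch gives
```

Swapping in a banned artist's name against the German corpus (for example corpus="de-2019" over 1920–1960) reproduces the censorship dip just as predictably, which is the review's point: the curve only tells you what you already had to know in order to ask.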

Follow UD News at Twitter!

Comments
Dionisio @ 24 [off topic]
The computer operating system (i.e. electronic signals and circuitry embedded in the microprocessor), executes...
That's incorrect (very inaccurate). Here's a [tentative] correction:
The computer operating system (i.e. the main software that operates directly on the electronic signals and circuitry embedded in the microprocessor, so that other application programs could function properly), executes...
Dionisio
May 8, 2014 at 11:09 AM PDT
Robert, The computer operating system (i.e. electronic signals and circuitry embedded in the microprocessor), executes the software that is stored in the memory. But the operating system and the executed software are representations of algorithms that represent ideas that pursue some purposes or goals. Ideas are nonmaterial. They are not matter or energy, but are represented through material and energetic entities or processes. Those ideas come from thinking beings like the people who write interesting OPs and comments in this blog. The computers perform tasks and resolve problems using algorithms represented in electronic signaling through software and input data. The intelligence is in the folks who had the ideas on how to resolve the problems. As a computer programmer I don't create new ideas; I just represent someone else's ideas in a format that can be executed by a bunch of electronic circuits and signals, which themselves were previously designed by intelligent beings who came up with ideas on how to create them, based on the knowledge they had previously acquired. Ok, gotta go now... my wife is calling me ;-)
Dionisio
May 8, 2014 at 5:04 AM PDT
Dionisio Wooooo. Let's think about this. I say the computer is entirely about memory. These math things are not understood by the computer; they just memorized them and all related. The program is in the memory of the computer. It's not thinking at all. ZERO. Not a clue has it. I can't see how you say it's not all memory despite the math. Math is all memory except for new discoveries.
Robert Byers
May 7, 2014 at 11:47 PM PDT
Barb, Sorry to see you got so angry at my comments. It was not my intention to make you so angry. Please accept my apologies for anything I wrote that you took as an offense. Perhaps my style is not very clear. Keep in mind that English is not my first language. I pray that whatever it is that makes you feel so angry will go away, so you can continue to enjoy reading and writing comments in this blog. I know that God loves you, because I know He loves me and I'm worse than you. :)
Dionisio
May 7, 2014 at 4:37 PM PDT
Dionisio continues,
Barb @ 14 If you would have read more carefully the information associated with this link you provided (http://www.openclinical.org/aiinmedicine.html) you would have realized that the term ‘think’ should not be used so liberally when describing existing AI systems that can learn ‘new tricks’ as long as the supporting software allows it. Probably we could say those ‘weak AI’ systems are better suited than old dogs to learn new tricks, but they can’t learn to think
Not yet, but you’re implying that they never will. You seem to want to separate weak/strong AI but there doesn’t appear to be that differentiation when computer scientists describe it.
There are many examples of AI apps, including the so-called ‘expert’ or ‘knowledge-based’ systems (like the ones used in your healthcare field), which basically can process large amounts of data much more efficiently than humans could do. Some of those systems can even ‘learn new tricks’, because the elaborate algorithms embedded in their software were designed with that capability. But they have not achieved the level where one could say they can learn to think. The term ‘think’ may imply human thinking, hence it should not be used so liberally when describing currently existing systems.
The term ‘think’ can be defined many different ways; critical thinkers read and comprehend what they’re reading. You’re arguing semantics here. And please do not ever presume to tell me what words to use when describing anything, ever. You don’t have that authority and you never will. Your arrogance and condescension are noted. And ignored.
The so-called ‘strong AI’ or AGI systems that pass Turing’s test could be compared to human thinking. However, as far as I know, they aren’t there yet. It’s a ‘work in progress’ situation.
I never indicated that they could pass a Turing test. If that is what you were looking for, then (a) Google it yourself, and (b) be more specific when asking for information. I provided examples of AI. You don’t like them. Too bad. They are considered AI whether you like it or not, because computer scientists don’t use your definition of weak/strong AI.
The following quote is from the link you provided: “There have been attempts to build systems that can pass Turing’s test in recent years. [However,] …none have yet passed the mark set by Turing.” Unfortunately these days thinking and paying attention to the meaning of words is not popular. That’s why systems like SMS, twitter and Facebook are so popular.
And this is why Google could in and of itself be considered AI.
The instant sharing of frivolous information is what many people enjoy doing these days. Now, that’s really sad. [btw, you wrote that it's sad that I don't know how to do Google searches, but now you have seen that much worse than that is the fact that we misuse and abuse some terms like 'think']
Condescend much? This entire thread you've added nothing but arrogance and condescension. You are a special snowflake, aren't you? So much smarter than everyone else! Or not. You may think what I posted was frivolous because it doesn’t fit your narrow definition of AI. If scientists believe it does (which they do), then I think I’ll value their opinion over yours. Get over yourself.
Barb
May 7, 2014 at 7:40 AM PDT
Robert Byers @ 18 As far as I know, intelligence is required in order to write the software containing the algorithms that can process huge amounts of data so efficiently. The intelligence in AI systems is really in the minds of the creators of those systems. Basically the AI systems are the creation of intelligent beings, as we are the creations of God. So I would not reduce the computer system that beat the human chess champion to just a memory game. The algorithms embedded in that software are refined and elaborate beyond simple data retrieval methods. However, their sophistication does not reach the level of human thinking.
Dionisio
May 7, 2014 at 5:31 AM PDT
Barb @ 14 If you would have read more carefully the information associated with this link you provided (http://www.openclinical.org/aiinmedicine.html) you would have realized that the term 'think' should not be used so liberally when describing existing AI systems that can learn 'new tricks' as long as the supporting software allows it. Probably we could say those 'weak AI' systems are better suited than old dogs to learn new tricks, but they can't learn to think ;-) There are many examples of AI apps, including the so-called 'expert' or 'knowledge-based' systems (like the ones used in your healthcare field), which basically can process large amounts of data much more efficiently than humans could do. Some of those systems can even 'learn new tricks', because the elaborate algorithms embedded in their software were designed with that capability. But they have not achieved the level where one could say they can learn to think. The term 'think' may imply human thinking, hence it should not be used so liberally when describing currently existing systems. The so-called 'strong AI' or AGI systems that pass Turing's test could be compared to human thinking. However, as far as I know, they aren't there yet. It's a 'work in progress' situation. The following quote is from the link you provided: "There have been attempts to build systems that can pass Turing's test in recent years. [However,] ...none have yet passed the mark set by Turing." Unfortunately these days thinking and paying attention to the meaning of words is not popular. That's why systems like SMS, twitter and Facebook are so popular. The instant sharing of frivolous information is what many people enjoy doing these days. Now, that's really sad. [btw, you wrote that it's sad that I don't know how to do Google searches, but now you have seen that much worse than that is the fact that we misuse and abuse some terms like 'think'] ;-)
Dionisio
May 7, 2014 at 5:00 AM PDT
Dionisio The computer beating someone at chess makes the case the computer is not intelligent. Chess is just a simple memory game. Those who prevail just memorized things more than the others. In fact it's little more than a video game. It's a wrong concept to see board games as things requiring thinking. It's no more thinking than Scrabble. So easily a computer can beat people and the top chess people. Yet the computer never gave it a thought. It just remembers moves. It's a flawed thing to start with about dumb games.
Robert Byers
May 6, 2014 at 10:05 PM PDT
BA77, Thank you for the information, which seems to confirm what I wrote in a previous comment: it ain't as easy as they want to make it look. Someone mentioned 'computers that can learn to think' so I requested a reference to the source of such affirmation. The response was very kind: I'm lazy. Oh, well. C'est la vie.
Dionisio
May 6, 2014 at 8:43 PM PDT
Dionisio, you may be interested in this recent article on ENV: What Is a Mind? More Hype from Big Data - Erik J. Larson - May 6, 2014 Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, "Understanding Natural Language," about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland's article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required "holistic interpretation." That is, the ambiguities weren't resolvable except by taking a broader context into account. The words by themselves weren't enough. Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal "test" to see if his claims were still valid today... Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland's point about machine translation made afresh, in 2014. Erik J. Larson - Founder and CEO of a software company in Austin, Texas http://www.evolutionnews.org/2014/05/what_is_a_mind085251.html
bornagain77
May 6, 2014 at 6:58 PM PDT
Barb, As far as I know, AI is far away from making a computer think the way we humans can do. The 'brilliant' supercomputer that beat the famous chess player definitely can play chess much more efficiently than any human opponent, but it could not respond to many simple questions that involve feelings and emotions. Those supercomputers can 'learn' to perform specific complex tasks much more efficiently than we humans could do, but they cannot 'think' out of the box, i.e. improvise. That's why I asked you about the computers that learn to think. Please, believe me, it was not laziness, but plain curiosity.
Dionisio
May 6, 2014 at 6:49 PM PDT
Dionisio,
Don’t you like to share the results of your research and the sources of the information you already know? Why not?
I don’t like laziness. If you want to know anything about any subject, Google it. Or do research at the library. But don’t expect other people to do the work for you. That’s laziness.
Different (kind of) unrelated issues. Is there a rule on this? This time I’m responding in one post, as per your implicit suggestion.
Browser issues? That’s happened to me a couple of times. Also, they really need an “edit” button here.
Note that my questions were not about ‘AI’ but about ‘strong AI’ (AGI). As far as I’m aware of, I don’t define those terms, someone else did it already. At least that’s what I read in the links you provided. You may want to read them too.
AI appears to have different definitions based on what I’ve read. AGI also has separate definitions (http://intelligence.org/2013/08/11/what-is-agi/). We don’t have self-driving cars (yet) or robots that would be the equivalent of C3PO (which would be awesome), but AI is used in other applications, which my previous links mentioned. Because I work in healthcare, I generally look at healthcare informatics and how it utilizes AI (http://www.openclinical.org/aiinmedicine.html). There are alerts in electronic health records (EHRs) that notify nurses if a patient’s condition worsens as well as alerts about potential medication interactions. PACS can show x-ray images to physicians immediately.
Barb
May 6, 2014 at 9:13 AM PDT
Barb,
Do your own research next time.
Don't you like to share the results of your research and the sources of the information you already know? Why not?
And why did you respond to my one post with four of your own?
Different (kind of) unrelated issues. Is there a rule on this? This time I'm responding in one post, as per your implicit suggestion. :)
It depends on how you want to define AI.
Note that my questions were not about 'AI' but about ‘strong AI’ (AGI). As far as I'm aware of, I don't define those terms, someone else did it already. At least that's what I read in the links you provided. You may want to read them too.
Dionisio
May 6, 2014 at 6:58 AM PDT
Dionisio,
You don’t know how to use Google? Sad. Why did you write that?
Because you wrote, “I’m interested in that subject too. Please, can you provide some links to examples of those computers and the software behind their operation? Thank you.” Do your own research next time. And why did you respond to my one post with four of your own?
Is this a good example of ‘strong AI’ (AGI), or a computer that can learn to think? Is that the correct terminology for this example? Are you sure of this?
It depends on how you want to define AI.
Barb
May 6, 2014 at 5:03 AM PDT
Barb @ 7
There’s Deep Blue, which defeated chess grandmaster Gary Kasparov. Here’s an article which provides some examples: http://www.ucs.louisiana.edu/~.....tisAI.html
Is this a good example of 'strong AI' (AGI), or a computer that can learn to think? Is that the correct terminology for this example? Are you sure of this?
Dionisio
May 5, 2014 at 9:55 PM PDT
Barb @ 4
computers than that can learn to think
Dionisio
May 5, 2014 at 9:38 PM PDT
Barb:
You don’t know how to use Google? Sad.
Why did you write that?
Dionisio
May 5, 2014 at 9:18 PM PDT
Barb, Thanks for the information.
Dionisio
May 5, 2014 at 9:05 PM PDT
Dionisio @ 5: You don't know how to use Google? Sad. Technically, Google itself could be considered an example of AI depending on how you want to define AI. There's Deep Blue, which defeated chess grandmaster Gary Kasparov. Here's an article which provides some examples: http://www.ucs.louisiana.edu/~isb9112/dept/phil341/wisai/WhatisAI.html Here's a Wiki article which also provides some examples: http://en.wikipedia.org/wiki/Applications_of_artificial_intelligence
Barb
May 5, 2014 at 9:48 AM PDT
When we try to replace scholars with computers …
Some scholars out there could be replaced by a bunch of monkeys with typewriters ;-)
Dionisio
May 5, 2014 at 8:17 AM PDT
Barb @ 4
computers than can learn to think
I'm interested in that subject too. Please, can you provide some links to examples of those computers and the software behind their operation? Thank you.
Dionisio
May 5, 2014 at 8:13 AM PDT
The computer is just a memory machine. So it just remembers and never thinks. Congratulations on missing the entire point of artificial intelligence (AI), which deals with computers than can learn to think.
Barb
May 5, 2014 at 5:54 AM PDT
Robert Byers @ 2
...unless it was told too.
to?
It can remember better then us but...
than?
Thats what the mind i think.
that's? is? ?
Its not our thinking soul and heart.
It's?
The great flaw is them thinking memory counts as thinking or intelligence.
say what? My friend, I assume English is not your first language, is it? It's not my first language either. But both you and I can learn from some other participants in this blog, who seem to know how to express their ideas very well. I encourage you to read carefully what others have written here in this blog, and to pay special attention to their writing style and grammar. I'm learning from them, so I'm sure you could learn from them too. And it's free of charge! ;-) For example, gpuccio is a frequent writer in this blog. As far as I know, English is not his first language, but his writing style is so precise and clear that it's a pleasure to read his OPs and comments, even if they describe things that fly high over my ignorant mind. The same can be said about other folks in this blog too. It's a blessing that they let me hang around here and even write my own comments sometimes too. :) I'm very thankful for that.
Dionisio
May 5, 2014 at 5:26 AM PDT
The computer is just a memory machine. So it just remembers and never thinks. It never chooses between options unless it was told too. It can remember better then us but this only shows we are largely a memory machine. Thats what the mind i think. Just a priority memory organization. Its not our thinking soul and heart. The great flaw is them thinking memory counts as thinking or intelligence. memory is just a machine part or organ. Thats why it can be fooled around with.
Robert Byers
May 4, 2014 at 8:39 PM PDT
Didn't Spencer Tracy and Katharine Hepburn explore this in some depth in the movie "Desk Set"? Where the human librarians predict exactly what the computer's response will be as soon as they hear the question? And predict which answers the computer will get wrong? That which resolves Doubt is Information. Everything else is Noise. It is VERY, very dangerous to release any raw data from a retrieval of any kind. You really want a skilled human being to clean the mess up before anyone in management sees it.
mahuna
May 4, 2014 at 12:51 PM PDT
