The so-called “strong Artificial Intelligence” (AI) bears some relation to evolutionism, because both imply a “more” coming from a “less”, and both are products of a materialist, reductionist worldview. Evolutionism holds that life arises from non-life; similarly, AI holds that the intelligent comes from the non-intelligent, that “machines can think”. To try to prove this last claim experimentally, a test was even developed, the so-called “Turing test”.
“The Turing test [TT] is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of an actual human. In the original illustrative example, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test.” (from Wikipedia)
First of all, I have to state the total metaphysical absurdity of AI per se. Intelligence (“thinking”, like consciousness, free will, reason, etc., is simply one of the consequences of intellect) is essentially a connection between the knowledge of the Being and the comprehension of a living being (see here). A machine cannot be the basis for such a connection, in principle. Therefore, according to metaphysics, “artificial intelligence” is pure nonsense, a monstrous oxymoron. Whoever has the least idea of the former cannot avoid considering AI a deceit.
Secondly, a test intended to verify whether “machines can think” suffers from other, minor procedural defects.
1) Identical effects may have entirely different causes. For example, a singer and a CD-player can output exactly the same song, although they are entirely different things. Identical actions/behaviors may stem from different motivations/intentions. So effects alone cannot identify causes. One cannot judge reality from appearance or simulation. Resemblance is not identity. This is also the opinion of John Searle, who rightly says that simulation of thinking is not proof of real thinking.
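The singer/CD-player point can be put in code. In this hypothetical Python sketch (the song-generating rule is invented for illustration), one source computes its output while the other merely replays a stored copy, yet the two outputs are indistinguishable:

```python
# Hypothetical sketch: identical effects from entirely different causes.

def singer(song_id: str) -> str:
    """Produces the song 'live', computing every note on the spot."""
    # A toy deterministic melody derived from the song's id.
    return "".join(chr(ord("A") + (i * len(song_id)) % 7) for i in range(8))

# The CD-player merely replays a stored copy of the singer's performance.
recording = {"song-1": singer("song-1")}

def cd_player(song_id: str) -> str:
    """Reproduces the song from a recording: no performance takes place."""
    return recording[song_id]

# Identical outputs, entirely different processes behind them:
print(singer("song-1") == cd_player("song-1"))  # prints True
```

An observer who sees only the output strings has no way to tell which process produced them; this is exactly why effects alone cannot identify causes.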
2) The problem of “the range of questions”. In the TT the “humans” (both the examined and the examiner) are not sufficiently specified. Humans have different knowledge, so how does the examiner/judge choose the fields of inquiry? For example, if the judge is a good mathematician, he could ask a question about ─ say ─ the symbolic solutions of a differential equation. If the human under inquiry is not a mathematician and the computer was not programmed in advanced infinitesimal calculus, both are unable to answer, and the TT declares parity. There is no standard “human” knowledge, so the TT cannot be considered standard either. If you change the human under test and/or the judge and/or the programmer, you could get an entirely different result. In other words, the TT is subjective, not objective. As such it cannot claim to be scientific.
3) The problem that the questions are a function of the answers. For example, if I ask Jack and Tom “What do you like?” and they respectively answer “I like music” and “I like sport”, then my next question will be, for the former, “Why do you like music?” and, for the latter, “Why do you like sport?”. At this point the conversation diverges and can no longer be shared, as the TT would require. See this fact in the examples of TT below.
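This dependence of questions on answers can be sketched in a few lines of Python (a hypothetical illustration; the rule for deriving the follow-up question is invented):

```python
# Hypothetical sketch: the judge's next question is a function of the
# previous answer, so two conversations diverge as soon as answers differ.

def next_question(answer: str) -> str:
    """Derive the follow-up question from the answer just received."""
    prefix = "I like "
    if answer.startswith(prefix):
        return f"Why do you like {answer[len(prefix):]}?"
    return "What do you like?"

print(next_question("I like music"))  # prints: Why do you like music?
print(next_question("I like sport"))  # prints: Why do you like sport?
```

After the very first exchange the two dialogues are already on different branches, so the judge cannot pose one identical script to both participants.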
In the end, if truth be told, a TT run on a computer is a test of the programmer who programmed it by means of “intelligent artifices”. It is the programmer who thinks, not the machine. In the following, let’s consider tests a computer would likely fail, because they involve things that are hard to program. For example, when the TT is something like this:
TT about the ontological hierarchy.
Judge: “What’s your name?”.
Judge: “What writes this?”.
T1: “My fingers”.
T2: “My output device”.
Judge: “What drives your fingers | output devices?” [the judge is forced to ask T1 and T2 different things; see point #3 above].
T1: “My brain”.
T2: “My CPU”.
Judge: “What drives your brain | CPU?”.
T1: “My mind”.
T2: “My software”.
Judge: “What drives your mind | software?”.
T1: “The Self”. [I assume T1 knows metaphysics]
T2: “The programmer”.
Judge: “What drives the Self | programmer?”.
T1: “The Self is driven by nothing”.
The judge can reliably tell that the human is T1 and the machine is T2, because T2 does not recognize the top of the ontological hierarchy and eventually falls into a loop.
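The two chains of “what drives X?” answers can be sketched in Python. This is a hypothetical illustration: T1’s chain terminates at an unconditioned top (the Self, driven by nothing), while for T2 we *assume* the continuation “the programmer is in turn driven by software” (his tools and training), which is what closes the machine’s chain into a loop:

```python
# Hypothetical sketch of the two "what drives X?" chains from the dialogue.
T1_CHAIN = {"fingers": "brain", "brain": "mind", "mind": "Self"}
# Assumed continuation: the programmer driven by software again (a loop).
T2_CHAIN = {"output device": "CPU", "CPU": "software",
            "software": "programmer", "programmer": "software"}

def top_of_hierarchy(chain: dict, start: str):
    """Follow the chain; return the terminus, or None if it cycles."""
    seen, node = set(), start
    while node in chain:
        if node in seen:
            return None        # loop detected: the hierarchy has no top
        seen.add(node)
        node = chain[node]
    return node                # a node with no driver: the top

print(top_of_hierarchy(T1_CHAIN, "fingers"))        # prints Self
print(top_of_hierarchy(T2_CHAIN, "output device"))  # prints None
```

The human’s chain halts at a first term; the machine’s chain, having no unconditioned term, can only revisit elements it has already named.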
TT about language and meta-language.
A computer is likely to fail the TT when the question involves distinctions between language and meta-language, which are hard to recognize in natural speech.
Judge: “#1 Apple. #2 What I just said is not an apple. #3 Do you think it is correct that ‘apple’ is not ‘apple’?”.
T1: “I think there could be a trick in #3 because in #2 you could mean that #1 is simply a word, not a real apple”.
T2: “I think your #3 is incorrect because ‘apple’ is always ‘apple’”.
The judge can reliably tell that the human is T1 and the machine is T2, because T2 lacks the ability to distinguish between language and meta-language.
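The underlying distinction is what logicians call use versus mention, and it can be made concrete in a small hypothetical Python sketch (the `Apple` class is an invented stand-in for the real fruit):

```python
# Hypothetical sketch of the use/mention distinction: the word "apple"
# (a mention: a five-letter token) is not an apple (a use: the fruit itself).

class Apple:
    """A stand-in for a real apple, the thing the word refers to."""

word = "apple"     # the mention: a linguistic token
fruit = Apple()    # the use: the object the token refers to

print(len(word))                 # prints 5: words have lengths
print(isinstance(fruit, str))    # prints False: apples are not strings
print(word == fruit)             # prints False: 'apple' is not an apple
```

T1 spontaneously sees that the judge’s first utterance may be a token rather than a thing; T2, treating every occurrence of ‘apple’ at the same level, cannot.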
TT about jokes.
Judge: “[Invents and tells a new joke] J1. What do you think of my J1?”.
T1: “It is funny (or similar)”.
T2: “X [not ‘funny’ or similar inside; it doesn’t understand the humor]”.
Judge: “Tell something similar to my J1.”.
T1: “J2 [another joke]”.
T2: “&^._=jf@h [not a joke]”.
As is well known, jokes are difficult to understand and even more difficult to invent, given their subtle semantic traps and their complex linguistic twists. The judge can reliably tell that the human is T1 and the machine is T2, because T2 lacks a sense of humor, both in understanding it and in inventing it: in both exchanges it appears unable to appreciate it.
TT about invention and intelligent design.
Judge: “Invent and describe, at the functional-block level, a machine to pick up the apples dropped from a tree”.
T1: “We have to design a robot-machine composed of: a microprocessor, a vehicle, an arm to pick the apple, a container, a battery, a lot of actuators, a lot of sensors…; they must be connected this way…”.
The judge can reliably tell that the human is T1 and the machine is T2, because T2 lacks the capacity for invention and design.
After all, systems which one claims to be thinking have to be tested on the eminent functions of a thinking mind. If we have to test a car, whose main function is to run, we prove it by driving it on the road. If we have to test a mind, we should prove it on understanding, invention and design, which are among its highest functions.