
AI, Memristors and the future (could “conscious” machines lie ahead?)


AI — artificial intelligence — is emerging as a future-driver. For example, we have been hearing of driver-less cars, and now we have helmsman-less barges:

An electrically powered, potentially crewless barge [Credit: The Guardian]
As The Guardian reports:

>>The world’s first fully electric, emission-free and potentially crewless container barges are to operate from the ports of Antwerp, Amsterdam, and Rotterdam from this summer.

The vessels, designed to fit beneath bridges as they transport their goods around the inland waterways of Belgium and the Netherlands, are expected to vastly reduce the use of diesel-powered trucks for moving freight.

Dubbed the “Tesla of the canals”, their electric motors will be driven by 20-foot batteries, charged on shore by the carbon-free energy provider Eneco.

The barges are designed to operate without any crew, although the vessels will be manned in their first period of operation as new infrastructure is erected around some of the busiest inland waterways in Europe.

In August, five barges – 52 metres long and 6.7m wide, and able to carry 24 20ft containers weighing up to 425 tonnes – will be in operation. They will be fitted with a power box giving them 15 hours of power. As there is no need for a traditional engine room, the boats have up to 8% extra space, according to their Dutch manufacturer, Port Liner.

About 23,000 trucks, mainly running on diesel, are expected to be removed from the roads as a result . . . >>

Of course, such articles tend to leave off “minor” details, such as just how dirty the semiconductor manufacturing business is. As for “carbon [emissions] free,” I will believe it when I see it demonstrated; let us simply say instead, renewables.

The significant point for us is that we are seeing AI emerging into the marketplace as a potential future-driver: a good slice of the technologies likely to power the next generation of economy-dominating industries and so to shape our future. That brings the long-wave thinking of Kondratiev, Schumpeter and their successors to the fore, as the world moves beyond the Great Recession of 2008 and its lingering impacts. And BTW, I have been told of huge former office-cubicle spaces that now host racks of computers running AI-driven trading programs for big-ticket investment houses.

Let’s pause and ponder a bit on the long-wave view of the global economy, using a “live” working chart:

Someone will doubtless ask: that’s all well and good as part of an ongoing UD Sci-Tech watch, but how is this relevant to the core ID focus of this blog?

Glad you asked.

Here is Science Daily, citing a somewhat older version of Wikipedia in a sci-tech backgrounder piece:

>>The modern definition of artificial intelligence (or AI) is “the study and design of intelligent agents” where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.

John McCarthy, who coined the term in 1956, defines it as “the science and engineering of making intelligent machines.”

Other names for the field have been proposed, such as computational intelligence, synthetic intelligence or computational rationality.

The term artificial intelligence is also used to describe a property of machines or programs: the intelligence that the system demonstrates.

AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimization and logic.

AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.

Computational intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems).

Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing.

Subjects in computational intelligence as defined by IEEE Computational Intelligence Society mainly include:

Neural networks: trainable systems with very strong pattern recognition capabilities.

Fuzzy systems: techniques for reasoning under uncertainty, have been widely used in modern industrial and consumer product control systems; capable of working with concepts such as ‘hot’, ‘cold’, ‘warm’ and ‘boiling’.

Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to the problem.

These methods most notably divide into evolutionary algorithms (e.g., genetic algorithms) and swarm intelligence (e.g., ant algorithms).

With hybrid intelligent systems, attempts are made to combine these two groups.

Expert inference rules can be generated through neural network or production rules from statistical learning such as in ACT-R or CLARION.

It is thought that the human brain uses multiple techniques to both formulate and cross-check results.>>
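As an aside, the “populations, mutation and survival of the fittest” loop described in the excerpt is easy to sketch. Here is a minimal, illustrative evolutionary algorithm in Python; the toy objective and all parameters are my own invention, not from the quoted backgrounder:

```python
import random

random.seed(1)
TARGET = 42.0  # toy objective: evolve a number x toward TARGET

def fitness(x):
    """Higher is better: negative distance from the target."""
    return -abs(x - TARGET)

population = [random.uniform(0, 100) for _ in range(20)]
for generation in range(50):
    # 'Survival of the fittest': keep the better half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # 'Mutation': offspring are perturbed copies of the survivors.
    children = [x + random.gauss(0, 1.0) for x in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(f"best candidate after 50 generations: {best:.3f}")
```

Note what the sketch makes plain: selection and mutation home in on whatever the fitness function rewards, and on nothing else.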

Wiki’s context for discussing AI

The current version of Wikipedia says outright: “In computer science AI research is defined as the study of ‘intelligent agents’: any device that perceives its environment and takes actions that maximize its chance of success at some goal.” [1] It adds: “Colloquially, the term ‘artificial intelligence’ is applied when a machine mimics ‘cognitive’ functions that humans associate with other human minds, such as ‘learning’ and ‘problem solving’.” [2]

In short, surprise (not!): we are back at a triple-point, fuzzy, interdisciplinary border between [a] sci-tech, [b] mathematics (including computing) and [c] philosophy.

That should be duly noted for when the usual objections come up that we must focus on “Science,” or that we aren’t focussing enough on “Science.” Science isn’t just Science, and it never was. There is a reason why Newton’s key work was The Mathematical Principles of Natural Philosophy, and why “Science” [meaning knowledge] originally meant, more or less, the body of established results and findings of a methodical, objective field of study.

Backing up a tad, here is Merriam-Webster on AI:

Definition of artificial intelligence

1 : a branch of computer science dealing with the simulation of intelligent behavior in computers
2 : the capability of a machine to imitate intelligent human behavior

Of course, one of the sharks lurking here is that evolutionary materialistic scientism assumes our brains and central nervous systems evolved as natural computers that somehow threw up consciousness and rationality. Francis Crick, in The Astonishing Hypothesis (1994), is perhaps the most frank in saying this:

“You”, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased it: “You’re nothing but a pack of neurons.” This hypothesis is so alien to the ideas of most people today that it can truly be called astonishing.

The scare-quoted “You,” the “in fact” assertion and the “nothing but a pack of neurons” tell the story. Indeed, we can see the view as reducing mindedness to:

Fig. G.18(b): Integration of Neurons in layered networks and the brain, the body’s controller, n.b. motor area. (Credits: Jedismed, Riken, HSS, India)


A web sci-fi comic strip with three AIs in imagined conversation. Left to right: a former ship AI, now of godlike powers, resident in the galactic core but capable of popping up anywhere via a hologram-like avatar; a ship AI; and a “recorded” creature turned into an AI. The strip has also featured resurrection by recording neural state and reconstructing a body. [HT: Schlock Mercenary]
No wonder that seminal ID thinker, Philip Johnson, replied that Sir Francis should have therefore been willing to preface his works thusly: “I, Francis Crick, my opinions and my science, and even the thoughts expressed in this book, consist of nothing more than the behavior of a vast assembly of nerve cells and their associated molecules.”  Johnson then acidly commented:  “[t]he plausibility of materialistic determinism requires that an implicit exception be made for the theorist.” [Reason in the Balance, 1995.]

Engineer Derek Smith has given us a less philosophically loaded framework, and we can present it in simplified form:

The Derek Smith two-tier controller cybernetic model

Here, we can see ourselves as biological cybernetic systems that interact with the external world under the influence of a two-tier controller. The lower-level, in-the-loop input-output controller is itself directed by a supervisory controller that allows room for intent, strategy and decision beyond ultimately deterministic and/or stochastic branch programming, and more. Is such a supervisor “just” software? Is it an extra-dimensional entity that interfaces through, say, quantum influences? Something else again? (We can leave that open to further discussion.)
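To make the two-tier structure concrete, here is a minimal sketch in Python. It illustrates only the layered architecture; all names and numbers are invented, and it takes no position on what the supervisory level ultimately is:

```python
# Minimal sketch of a two-tier controller (illustration only; the
# names and numbers are invented, not part of Smith's model itself).

def lower_tier(state, setpoint, gain=0.5):
    """In-the-loop input-output controller: nudge state toward setpoint."""
    error = setpoint - state
    return state + gain * error  # simple proportional correction

def supervisor(t):
    """Supervisory tier: decides WHICH goal the loop should pursue.
    Here the 'strategy' is just a schedule; in the Smith model this
    is the level where intent and decision would live."""
    return 10.0 if t < 10 else 3.0

state = 0.0
for t in range(20):
    setpoint = supervisor(t)             # upper tier sets the goal
    state = lower_tier(state, setpoint)  # lower tier pursues it
    print(f"t={t:2d}  setpoint={setpoint:4.1f}  state={state:6.3f}")
```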

Oh, let me add this, as a tiny first push-back on the gap between computational substrates and responsible, rational, significantly free, morally governed agency:


The skinny: what intelligence is, what agency is, what responsible rational freedom is, and what computational substrates can do are all up for grabs, and this will only get more involved as AI systems make their way into the economy.

All of this leads to an interesting technology, memristors.

Memory + resistors.

That is, resistors with programmable memory: they can be used in a more or less digital mode as storage units, or as multi-level / continuous-state elements amenable to neural-network programming.

As storage units, for some years there has been talk of 100-terabyte devices. Yes, 1 × 10^14 bytes, or 8 × 10^14 bits. We’ll see about that soon enough. More to the point, such low-cost, high-density technology would also serve the much more interesting neural-network application.

Where, as a refresher, here is a neural network (modelled on the way neurons interconnect):

A neural network is essentially a weighted-sum interconnected gate array; it is not an exception to the GIGO (garbage in, garbage out) principle
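For concreteness, a single “gate” in such a weighted-sum array can be sketched in a few lines of Python; the weights and inputs here are invented purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """One 'gate' of the array: a weighted sum of the inputs passed
    through a squashing nonlinearity (the logistic sigmoid)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Illustrative values only. The node has no insight into what its
# inputs mean: garbage weights or inputs yield garbage outputs (GIGO).
print(neuron(inputs=[0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```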


On the interesting bit, first, Physics dot org:

>>Transistors based on silicon, which is the main component of computer chips, work using a flow of electrons. If the flow of electrons is interrupted in a transistor, all information is lost. However, memristors are electrical devices with memory; their resistance is dependent on the dynamic evolution of internal state variables. In other words, memristors can remember the amount of charge that was flowing through the material and retain the data even when the power is turned off.

“Memristors can be used to create super-fast memory chips with more data at less energy consumption” Hu says.

Additionally, a transistor is confined by binary codes—all the ones and zeros that run the internet, Candy Crush games, Fitbits and home computers. In contrast, memristors function in a similar way to a human brain using multiple levels, actually every number between zero and one. Memristors will lead to a revolution for computers and provide a chance to create human-like artificial intelligence.

“Different from an electrical resistor that has a fixed resistance, a memristor possesses a voltage-dependent resistance.” Hu explains, adding that a material’s electric properties are key. “A memristor material must have a resistance that can reversibly change with voltage.”>>

In short: rewritable, essentially analogue storage units, usable in neural networks and thus in AI systems.
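To illustrate the idea of a history-dependent resistance, here is a toy memristor model in Python. It is a generic sketch in arbitrary units, not a model of any particular device; the bounding resistances and rate constant are invented:

```python
# Toy memristor model in arbitrary units (a generic sketch, not any
# specific device): resistance depends on an internal state w in [0, 1]
# that integrates the charge which has flowed through the element.

R_ON, R_OFF = 0.1, 1.0  # bounding resistances (arbitrary units, invented)
RATE = 0.05             # state-change rate (invented for illustration)

def resistance(w):
    """Linear mix of the two limiting resistances."""
    return R_ON * w + R_OFF * (1.0 - w)

def step(w, voltage, dt=1.0):
    """Advance the internal state by one time step of length dt."""
    current = voltage / resistance(w)
    return min(1.0, max(0.0, w + RATE * current * dt))

w = 0.2
for _ in range(5):
    w = step(w, voltage=1.0)  # positive bias drives the state upward
print(f"w = {w:.3f}, R = {resistance(w):.3f}")
# Remove the voltage and w (hence R) simply stays where it is: that
# retained state is the 'memory' in 'memristor'.
```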

Nature amplifies, using the reservoir computing concept:

>>Reservoir computing (RC) is a neural network-based computing paradigm that allows effective processing of time varying inputs [1-3]. An RC system is conceptually illustrated in Fig. 1a, and can be divided into two parts: the first part, connected to the input, is called the ‘reservoir’. The connectivity structure of the reservoir will remain fixed at all times (thus requiring no training); however, the neurons (network nodes) in the reservoir will evolve dynamically with the temporal input signals. The collective states of all neurons in the reservoir at time t form the reservoir state x(t). Through the dynamic evolutions of the neurons, the reservoir essentially maps the input u(t) to a new space represented by x(t) and performs a nonlinear transformation of the input. The different reservoir states obtained are then analyzed by the second part of the system, termed the ‘readout function’, which can be trained and is used to generate the final desired output y(t). Since training an RC system only involves training the connection weights (red arrows in the figure) in the readout function between the reservoir and the output [4], training cost can be significantly reduced compared with conventional recurrent neural network (RNN) approaches [4].>>

Fig. 1: [schematic of the RC system described above; figure not reproduced]
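The division of labour the quote describes, a fixed random reservoir plus a trainable linear readout, can be sketched as a miniature echo state network in Python. This is an illustrative toy (the reservoir size, scaling and sine-prediction task are my own choices), not the memristor-based system of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, untrained 'reservoir': a small random recurrent network.
N = 50                                  # reservoir size (invented)
W_in = rng.uniform(-0.5, 0.5, N)        # input weights (fixed, untrained)
W = rng.uniform(-0.5, 0.5, (N, N))      # recurrent weights (fixed)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale for stable dynamics

def run_reservoir(u):
    """Map an input sequence u(t) to reservoir states x(t)."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)  # nonlinear dynamic evolution
        states.append(x.copy())
    return np.array(states)

# Toy task: predict u(t+1) from the reservoir state at time t.
u = np.sin(np.arange(300) * 0.2)
X = run_reservoir(u[:-1])
y = u[1:]

# Train ONLY the linear readout (ridge regression); the reservoir
# itself is never trained -- that is the whole point of RC.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
print("readout MSE:", np.mean((X @ W_out - y) ** 2))
```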

According to University of Michigan’s The Engineer News Center:

>>To train a neural network for a task, a neural network takes in a large set of questions and the answers to those questions. In this process of what’s called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.

Once trained, a neural network can then be tested without knowing the answer . . . .  [However,] “A lot of times, it takes days or months to train a network,” says [Wei Lu, U-M professor of electrical engineering and computer science who led the above research]. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” says Lu.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu says . . . .

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91% accuracy.>>
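The “weighted more heavily or lightly to minimize the amount of error” step that the article describes is, at bottom, gradient descent. Here is a minimal sketch with one linear node; the data and learning rate are invented for illustration:

```python
# One linear node trained by gradient descent on (question, answer)
# pairs; data and learning rate are invented for illustration.

pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hidden rule: y = 2x
w = 0.0    # connection weight, initially untrained
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in pairs:
        error = w * x - y    # how wrong is the current answer?
        w -= lr * error * x  # weight 'more heavily or lightly' to shrink it

print(f"learned weight: {w:.4f}")  # converges toward 2.0
```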

So, we see a lot of promise, and it is appropriate to have this background in hand for onward discussion. END

Comments
critical rationalist: "And I would again say that there are two kinds of knowledge: explanatory and non-explanatory. Only people can create explanatory knowledge, but both natural processes and people can create non-explanatory knowledge, which represents useful rules of thumb that have limited reach." And, as I have said many times, natural processes cannot create what you call non-explanatory information (prescriptive information) that is complex. That is the point of ID. Explanatory information (descriptive information) is essential to create complex non-explanatory information (complex prescriptive information). And, as you say yourself, only "people" (conscious intelligent beings) can create explanatory knowledge (descriptive information). And, therefore, only "people" (conscious intelligent beings) can create complex non-explanatory information (complex prescriptive information).
gpuccio
January 31, 2018 at 11:39 PM
gpuccio: "ID provides the basic tool for that: non-conscious machines can never generate new complex functional information. IOWs, complex functional information linked to a completely new functional specification."

And I would again say that there are two kinds of knowledge: explanatory and non-explanatory. Only people can create explanatory knowledge, but both natural processes and people can create non-explanatory knowledge, which represents useful rules of thumb that have limited reach. But having limited reach doesn't mean it's not useful, and it can therefore play a causal role in being retained when embedded in a storage medium.
critical rationalist
January 31, 2018 at 08:41 PM
A true AI can perfectly predict what a human will do in the next instant. The human merely has to do the opposite, thus falsifying true AI.
EricMH
January 28, 2018 at 01:56 PM
Dionisio: One of my cats can open doors. I suppose that the procedure requires some functional information.
gpuccio
January 27, 2018 at 12:49 PM
LocalMinimum: "Whether looking up or working up a method, I use a general profile of what I want: input, output, costs and requirements, etc." How do you decide what you want?

"When looking it up, it's me playing meta-thesaurus for Google, working out what may show up on a document that could offer a lead to a suitable method." Who is the "me" that plays? Who decides what is "a suitable method"? How do you even know what a "method" is?

"When working it out, I apply various combinations and permutations of methods I already know." How did you know them in the beginning? How do you evaluate the results? Who tells you what the parameters are, what is desirable, what is wrong, what works or does not work? Who tells you what the purpose is?

"Again, if there's a hole in my developing method that I cannot fill with my knowledge, I push my current problem on the stack and attack that submethod as a problem in its own right." How do you decide how to solve the problem?

"As far as art goes, I'd say it's expression generation with its value as art being the ratio of symmetry density to recognition cost. The symmetry density, for non-abstract art, would also rely on the experiences/views/knowledge of the observer, so its value is subjective to the audience." Art comes from the subjective experiences of the author. That's why it can objectively evoke similar experiences in those who receive it, with obvious differences due to subjective variation. But a symphony by Beethoven certainly evokes different experiences than some bad commercial song. That is an objective difference.

"I, myself, don't actually understand for myself how to see the workings of my working intellect as being beyond simulation." What can be simulated is some outer result or behaviour, not the workings of your intellect. When your intellect works, it does so using, among other things, objective algorithms and computations. Those algorithms and computations can certainly be simulated. That's what Chalmers calls "the easy problem". But the workings of your intellect are much more than that. Those workings start from conscious representations and intuitions, and use algorithms and computations to get results which are, again, transformed into conscious representations. That's what Chalmers calls "the hard problem".

Each conscious representation has at least two important aspects which are intrinsic to the representation itself:

a) A cognitive reaction: your consciousness has cognitive intuitions about the representation, in terms of meanings, of truth and falsity, and so on.

b) A feeling reaction: your consciousness always "judges" each representation as pleasant or unpleasant, joyous or painful, good or bad.

All choices in cognition and action are based on those intuitive reactions to the forms that we represent. Nothing of that can be simulated, because those reactions are not algorithmic, even if they can use algorithmic data or relate to them. So, you can simulate the concept of "true" and "false", or of "good" and "bad", in a computer program, but the program will deal with those variables as it does with any other binary variable, and will not be aware of any meaning associated with them. To the program, they are just numbers. Cause and effect, truth and falsity, pleasure and pain, purpose, are not numbers. They are experiences. They can be simulated by numbers, but it's a simulation which does not really simulate anything. It's only an external coding, related to the experiences, but one which does not retain anything of the experiences themselves. So, we can number Shakespeare's sonnets from 1 to 154, and refer to them in some program by their number. But the number cannot evoke the experiences expressed in the sonnet. So it is with the simulations of conscious experiences: they are empty conventional numbers, which cannot convey the experience itself, and in reality have nothing to do with it.
gpuccio
January 27, 2018 at 12:47 PM
gpuccio @19: "Higher animals can probably generate some functional information, but not extremely complex, I believe." Well, my children's cat, living in complete harmony with their dog, sometimes does very intelligent things. :)
Dionisio
January 27, 2018 at 10:45 AM
I, myself, don't actually understand for myself how to see the workings of my working intellect as being beyond simulation. When attacking a problem, I cannot reference methods I have never witnessed or worked up. Whether looking up or working up a method, I use a general profile of what I want: input, output, costs and requirements, etc. When looking it up, it's me playing meta-thesaurus for Google, working out what may show up on a document that could offer a lead to a suitable method. Methods discovered are then contextualized, then compared. When working it out, I apply various combinations and permutations of methods I already know. Again, if there's a hole in my developing method that I cannot fill with my knowledge, I push my current problem on the stack and attack that submethod as a problem in its own right. As far as art goes, I'd say it's expression generation with its value as art being the ratio of symmetry density to recognition cost. The symmetry density, for non-abstract art, would also rely on the experiences/views/knowledge of the observer, so its value is subjective to the audience.
LocalMinimum
January 27, 2018 at 08:49 AM
Dionisio: "As professor Chalmers said, consciousness is a very hard problem." You bet! :)
gpuccio
January 27, 2018 at 07:09 AM
@17 addendum: However, we humans have noticed that at least certain animals (for example, cats and dogs) have strong feelings which make them our close 'friends' in some ways; therefore they have a certain consciousness. A problem with all this is the exact meaning of the terms we want to discuss. Most of this is far above my pay grade. I prefer simple things like morphogen gradient formation and interpretation procedures, because at least I know that professor Moran knows exactly all that stuff and perhaps someday will explain it to GP, because he doesn't ask dishonest questions. As professor Chalmers said, consciousness is a very hard problem. My laziness makes me stick to the things that fall into the category of WYSIWYG, like asymmetric segregation of cell fate intrinsic determinants and stuff like that, where there is no room left for speculation.
Dionisio
January 27, 2018 at 05:45 AM
Dionisio: I certainly agree that the "intelligence" part (being able to consciously experience meaning) is especially prominent in humans. Higher animals can probably generate some functional information, but not extremely complex, I believe.
gpuccio
January 27, 2018 at 05:29 AM
KF: Thanks for the thoughtful comment. I think we essentially agree. :)
gpuccio
January 27, 2018 at 05:26 AM
KF @14, I believe that only humans can have the creative power to purposely design complex functionally specified informational systems in their minds, as my former project leader (software development director) did in my previous employment. Only humans can have the desire and capacity to interpret the meaning of their experiences and purposely express it in a way that may be interpreted and enjoyed by other humans. Only humans can have the ability to purposely create new communication protocols and use them to communicate with other humans. Only humans can purposely study and learn about other biological systems (plants, animals) and the surrounding environments at different scales (universe, galactic, solar system, interplanetary, satellite, planet Earth, atomic, subatomic) and at different abstraction levels. Only humans have Imago Dei; therefore we can, at our limited level, communicate with our Creator. But this is beyond the scientific domain.
Dionisio
January 27, 2018 at 05:25 AM
GP, thought-provoking and enriching as usual. My question on animals, of course, is about the threshold where we just have a bio-machine working away, and whether there are tests that can detect the difference. Is a frog or a fish or a crab conscious? An earthworm? A sponge? A bacterium? A mushroom? A mango tree? I know that is a hard question. Helps to keep us humble, I suppose. Going meta is of course a case of that reflexive self-movement mentioned, coming from Plato. The issue of self-aware insight joined to creativity is maybe an even bigger challenge. I do suspect that we are dealing with people who want to "push" the idea of an intelligence-of-the-gaps regress, and so we need to ponder what intelligence is. Certainly, they used to say computers would never play good chess, then good Go. But what about the real world, without strict rule sets and specific game-boards? KF
kairosfocus
January 27, 2018 at 05:06 AM
KF: Your questions are not simple at all! :) I will try to answer as best I can. As you know, my attitude is to stick to what we can say scientifically, and not to rely on personal general philosophical or religious beliefs.

You ask: "For instance, is a dog or an ape or a dolphin conscious?" OK, defining consciousness as the presence of subjective experiences, I would definitely say yes. But I have to remind you that the only consciousness that we perceive directly, intuitively, is our personal consciousness. Consciousness in other human beings is, as I have said many times, an inference by analogy. Quite a strong and reliable inference, IMO (I apologize to any solipsists in the audience). Indeed, an inference upon which most of our map of reality is based. For higher animals, the inference is IMO quite strong too. Few people would really doubt that their dog or cat has subjective experiences.

You ask: "Where would you put the border?" Well, it's not for me to put the border anywhere. As I said, scientifically, the inference of subjective experiences is stronger in higher animals, but putting a border is not scientifically possible IMO, at least at present. That becomes a matter of philosophy, and I will not comment on that point.

"Are we in an intelligence of the gaps retreat, where as AI achieves more and more, the room for saying machines are not truly intelligent gets ever more constricted?" No. The problem is in the many possible meanings of the word "intelligence". My basic definition is: "having the conscious experience of meaning". In that sense, machines will never be intelligent, because they will never be conscious. But many define intelligence as the presence of intelligently designed mechanisms. In that sense, of course, even a simple watch is very intelligent. Computers can be more "intelligent" than we are if intelligence is related to the ability to perform some function: for example, any computer is probably better than most human beings at making computations. So, all depends on how we define intelligence. With my definition, machines are not intelligent at all. And my definition also refers to the property that is implied in generating new original functional information: without conscious understanding of meaning, that is impossible.

"I note that you suggest a test, genuine inventive insightful contrivance and creativity or artistry expressing novel, complex function, which you imply is non-algorithmic." Yes. I would say that simply defining a new desired function, without any previous programming about it, either direct or indirect, is beyond the power of any machine. I will give a simple example. We can have a computing system which evaluates possible algorithms to assess whether they can perform a task (let's say A) or not. As conscious beings, we can decide that our function is to have algorithms that perform the task (the most reasonable case). But we can also choose to select those that cannot perform it, maybe even after spontaneous optimization. That could also be an interesting task, maybe to understand how algorithms work. Now, a machine can only be programmed to do one thing, or the other, or both. Either way, it cannot do the thing that it has not been programmed to do. It cannot "desire" to do that, or "understand" why that could be interesting. That's, very simply, the difference between conscious beings and a non-conscious machine.

"And BTW, can a neural network, in vivo or in silicon etc, go beyond the algorithmic . . . including search with feedback on success?" No, I don't think so. It's always algorithmic. It's only an algorithm which includes processing of new information from the outside. For example, AlphaGo Zero processes new information derived from its games with itself. But the process is still completely algorithmic. Understanding why Gödel's theorem is true is a non-algorithmic task, as Penrose has clearly shown. It requires a "meta" attitude that only consciousness can give. Consciousness is the only type of process that can always go "meta" with respect to its own experiences, in an infinite regress, a wonderful "mise en abyme".

"In that light, going forward, how should we think about understanding intelligent agency? Where, I note Plato's concept of the self-moved, initiating cause that comes first and triggers a cascade of cause-effect stages thereafter. In The Laws, Bk X, he describes that as a characteristic of soul and of life." I think I can agree with him! :)
gpuccio
January 27, 2018 at 04:21 AM
GP, do you wish to further elaborate? For instance, is a dog or an ape or a dolphin conscious? [Where would you put the border?] Are we in an intelligence of the gaps retreat, where as AI achieves more and more, the room for saying machines are not truly intelligent gets ever more constricted? Or is this just promissory-note IOUs? I note that you suggest a test, genuine inventive insightful contrivance and creativity or artistry expressing novel, complex function, which you imply is non-algorithmic. And BTW, can a neural network, in vivo or in silicon etc, go beyond the algorithmic . . . including search with feedback on success? In that light, going forward, how should we think about understanding intelligent agency? Where, I note Plato's concept of the self-moved, initiating cause that comes first and triggers a cascade of cause-effect stages thereafter. In The Laws, Bk X, he describes that as a characteristic of soul and of life. KF
kairosfocus
January 27, 2018 at 03:52 AM
daveS: "I don't know if these machines will ever actually be conscious," I definitely think they will not. "but I suspect they will eventually be able to pass as a conscious agent, at least in some situations." Well, "at least in some situations" is a possibility, but there are things that machines will never be able to do, and that will always allow us, in an appropriate context, to understand that they are not conscious.

ID provides the basic tool for that: non-conscious machines can never generate new complex functional information. IOWs, complex functional information linked to a completely new functional specification. IOWs, machines can only make computations according to the specifications that are already in their original software. In that sense, they can algorithmically increase the amount of information linked to some pre-existing functional specification, if that can be done by mindless computation only. The best example of that is a software program which can compute the decimal figures of pi: the more the software runs, the more "information" we have about pi, because the increase in information is algorithmically computable from the information already present in the system. In general, if we consider complexity as Kolmogorov complexity, no machine can really output a higher Kolmogorov complexity than the complexity of the machine itself.

AlphaGo Zero learnt to play Go with great efficiency and speed, but not really "from scratch". It needed the information about the rules of the game, and the information to play with itself and to "learn" computationally from those same plays. And, of course, the functional specification in its original software, which could be briefly expressed as: "Your task is to play Go with yourself and to improve the play efficiency, computing from the results of those plays". IOWs, a software program can do nothing other than what it is programmed to do, even if that programming includes a vast range of possibilities.

Conscious agents are different. They understand the meaning of what they experience. And they have purposes linked to their essential experience of feeling pleasure and pain, joy and sorrow, good and bad, and so on. That allows a creative interaction with experience, which is not algorithmic, even if it certainly includes many algorithmic steps (see, for example, Penrose). New complex functional information derives from that: the harnessing of new information, guided by understanding of meaning, towards a new desired result (a new function). That's why only conscious, intelligent and purposeful agents can generate new complex functional information. Non-conscious systems can only generate, occasionally, simple functional information, by mere chance. Algorithmic systems can only use the functional information that they already have to increase the computable information linked to the functions which are already defined in the system (and that includes the very limited powers of optimization in biological beings by RV + NS).

See here: https://uncommondescent.com/intelligent-design/what-are-the-limits-of-natural-selection-an-interesting-open-discussion-with-gordon-davisson/ and here: https://uncommondescent.com/intelligent-design/what-are-the-limits-of-random-variation-a-simple-evaluation-of-the-probabilistic-resources-of-our-biological-world/

That's why a computer will never be able to write a new sonnet like Shakespeare's (a new recording of original conscious experiences and feelings in an original form), but can only try to copy and paste existing information, according to complex mindless instructions already embedded in it by its programmer, to imitate a sonnet. While winning at Go is a computational task, writing a sonnet is not. The meaning and beauty of a sonnet, its ability to evoke similar experiences and feelings in the reader, strictly depend on the original conscious experiences of the author, however beautiful and deep they may or may not be. A non-conscious system can never generate new conscious experiences; it can only mindlessly recycle, algorithmically, the conscious experiences of conscious beings.
gpuccio
January 27, 2018 at 02:38 AM
F/N: Notice the absence of those ever so eager to pounce on us to object? Let us calibrate the protests that UD is not dealing with "Science" in the light of this repeated pattern of studious absence from science-focussed discussion threads. DS, of course, being a notable exception. KF
kairosfocus
January 27, 2018 at 12:36 AM
Dean_from_Ohio, the expression "conscious machines" is stinky hogwash. Empirical evidence shows that machines are designed by conscious agents. Science does not deal with the creation of conscious agents, because science does not explain consciousness. That's a hard problem for science... actually too hard.
Dionisio
January 26, 2018 at 04:14 PM
KF, here's another piece of news also somehow related to wrong implementation of AI, though perhaps less so than the previous post: http://fox17.com/news/local/does-google-home-know-who-jesus-is-brentwood-resident-says-no
Dionisio
January 26, 2018 at 09:13 AM
KF, You may have seen this AI-related article, but perhaps some of your readers haven't yet: https://www.dailystar.co.uk/news/latest-news/676883/artificial-intelligence-russia-china-rule-world-google-eric-schmidt-vladimir-putin
Dionisio
January 26, 2018 at 09:00 AM
could “conscious” machines lie ahead? The answer is no, of course. Consciousness requires both spirit and brain.
FourFaces
January 26, 2018 at 08:34 AM
News, rocks -- even refined and organised ones -- have no dreams. Computation on a substrate is not contemplation. And calling a thermostat an intelligent agent red flags the problem. KF
kairosfocus
January 26, 2018 at 06:24 AM
DS, worker displacement is in fact an issue, and that is on the cards for later. Mind you, that problem has been with us ever since the Industrial Revolution: Schumpeter's creative destruction. KF
kairosfocus
January 26, 2018 at 06:22 AM
The significant point for us is that we are seeing AI emerging into the marketplace as a potential future-driver: a good slice of the technologies likely to power the next generation of economy-dominating industries and so to shape our future.

I don't know if these machines will ever actually be conscious, but I suspect they will eventually be able to pass as a conscious agent, at least in some situations. The part that worries me is the problem of displaced workers. More and more people will find themselves without a place in our economy. How long until fast-food restaurants are largely automated, for example? There's also the problem that it's expensive to break into AI, so only the rich and powerful are able to do anything significant with it. I think I read somewhere that the hardware required to run AlphaGo Zero* would cost about USD 25 million.

*The system that learned chess from scratch, going from "zero" to being the most powerful chess-playing entity ever in four hours.
daveS
January 26, 2018 at 05:09 AM
Wouldn't it be helpful to find out what consciousness is first? Seeing a thermostat as an intelligent agent is not very helpful for the purpose.
News
January 26, 2018 at 04:54 AM
F/N: Wiki on intelligent agents -- and notice what they say about a thermostat (as a clue to what is going wrong):
In artificial intelligence, an intelligent agent (IA) is an autonomous entity which observes through sensors and acts upon an environment using actuators (i.e. it is an agent) and directs its activity towards achieving goals (i.e. it is "rational", as defined in economics[1]). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent.[2] Intelligent agents are often described schematically as an abstract functional system similar to a computer program. For this reason, intelligent agents are sometimes called abstract intelligent agents (AIA)[citation needed] to distinguish them from their real world implementations as computer systems, biological systems, or organizations. Some definitions of intelligent agents emphasize their autonomy, and so prefer the term autonomous intelligent agents. Still others (notably Russell & Norvig (2003)) considered goal-directed behavior as the essence of intelligence and so prefer a term borrowed from economics, "rational agent". Intelligent agents in artificial intelligence are closely related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science, ethics, the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations. Intelligent agents are also closely related to software agents (an autonomous computer program that carries out tasks on behalf of users). In computer science, the term intelligent agent may be used to refer to a software agent that has some intelligence, regardless if it is not a rational agent by Russell and Norvig's definition. For example, autonomous programs used for operator assistance or data mining (sometimes referred to as bots) are also called "intelligent agents".
See the raft of problems here? KF
kairosfocus
January 26, 2018 at 04:35 AM
AI, Memristors and the future (could “conscious” machines lie ahead?)
kairosfocus
January 26, 2018 at 03:41 AM