
More on memristors in action — including crossbar networks and solving linear equation arrays


Memristors [= memory + resistors] are a promising memory-based information storage technology that can work as non-volatile memory and in neural networks.  They were proposed in 1971 by Leon Chua, and since HP announced a TiO2-based multilayer device exhibiting memristor capabilities in 2008, they have been a focus of research, given their potential.

Here, we may ponder a crossbar array of memristor elements forming a signal-processing matrix:

The memristor crossbar matrix. In effect, at each clock-tick, weighted sums of the input vector appear at each element of the output vector. This is useful in general. Moreover, a matrix is a parallel array of vectors, so a succession of clock-ticks gives the result of a matrix multiplication. Weighted sums are of course very powerful, and arrays of them do even more interesting things.
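To make the weighted-sum picture concrete, here is a toy numpy sketch of an idealised crossbar read (the conductance values and shapes are made up for illustration, not taken from any of the papers): each cross-point stores a conductance, a voltage vector drives the rows, and each column wire sums its currents by Kirchhoff's current law.

```python
import numpy as np

# Idealised crossbar read (toy values): each cross-point stores a
# conductance G[i, j]; driving the rows with voltages V makes each
# column sum its currents: I[j] = sum_i G[i, j] * V[i].
G = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.2, 0.7],
              [0.9, 0.1, 0.4]])   # conductances (arbitrary units)
V = np.array([0.2, 0.5, 0.1])     # input voltage vector

I = G.T @ V                       # all weighted sums in one "clock-tick"
print(I)

# A matrix is a parallel array of vectors, so successive input
# vectors on successive ticks yield a full matrix product:
V_seq = np.array([[0.2, 0.5, 0.1],
                  [0.4, 0.0, 0.3]])
print(V_seq @ G)                  # one output vector per tick
```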

Memristors are of interest to AI as a means to implement neural networks. For instance, a crossbar network (as illustrated just above) has been used to demonstrate powerful image processing. As Sheridan et al. reported in Nature Nanotechnology (online May 22, 2017; details pay-walled, of course . . . ):

>>Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements.>>

That’s impressive; and note that, as reported, the array operates in the analogue, continuously variable signal mode.
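Sheridan et al.’s on-chip algorithm is a locally competitive, lateral-inhibition scheme; as a rough software stand-in, here is a minimal iterative soft-thresholding (ISTA) sparse-coding sketch, in which the matrix products per iteration are exactly the kind of parallel crossbar reads shown above. The dictionary and signal are random toy data, not the paper’s 32 × 32 setup.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(32, 64))        # dictionary: 32-dim signals, 64 atoms
D /= np.linalg.norm(D, axis=0)       # unit-norm columns
x = D[:, 3] + 0.5 * D[:, 40]         # a signal built from two atoms

# ISTA: minimise 0.5*||x - D a||^2 + lam*||a||_1. The D @ a and
# D.T @ r products are the operations a crossbar does in parallel.
lam = 0.1
step = 1.0 / np.linalg.norm(D, 2) ** 2
a = np.zeros(64)
for _ in range(200):
    r = x - D @ a                    # residual (a crossbar read)
    a = a + step * (D.T @ r)         # correlate residual with atoms
    a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)  # shrink

print(np.nonzero(np.abs(a) > 0.05)[0])   # expect roughly atoms {3, 40}
```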

Liu et al. report a perhaps even more interesting result on the power of such arrays to solve systems of linear equations and related optimisation problems:

>>Memristors have recently received significant attention as ubiquitous device-level components for building a novel generation of computing systems. These devices have many promising features, such as non-volatility, low power consumption, high density, and excellent scalability. The ability to control and modify biasing voltages at the two terminals of memristors make them promising candidates to perform matrix-vector multiplications and solve systems of linear equations. In this article, we discuss how networks of memristors arranged in crossbar arrays can be used for efficiently solving optimization and machine learning problems. We introduce a new memristor-based optimization framework that combines the computational merit of memristor crossbars with the advantages of an operator splitting method, alternating direction method of multipliers (ADMM). Here, ADMM helps in splitting a complex optimization problem into subproblems that involve the solution of systems of linear equations. The capability of this framework is shown by applying it to linear programming, quadratic programming, and sparse optimization. In addition to ADMM, implementation of a customized power iteration (PI) method for eigenvalue/eigenvector computation using memristor crossbars is discussed. The memristor-based PI method can further be applied to principal component analysis (PCA). The use of memristor crossbars yields a significant speed-up in computation, and thus, we believe, has the potential to advance optimization and machine learning research in artificial intelligence (AI). >>

(In looking at this, we should note too that in state-space approaches to control systems, matrix elements can in effect be complex-domain transfer functions, giving a higher dimension of power to matrix-based processing.)
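(A toy illustration of that matrix-centric view: a discrete-time state-space model advances entirely by matrix products. The A, B, C values below are made up for illustration; in the transfer-function view each input-output entry becomes a rational function of z.)

```python
import numpy as np

# Toy discrete-time state-space model, all dynamics as matrix products:
#   x[k+1] = A x[k] + B u[k],   y[k] = C x[k]
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

x = np.zeros((2, 1))
u = np.ones((1, 1))              # unit-step input
for k in range(5):
    print(k, (C @ x).item())     # output sample y[k]
    x = A @ x + B @ u            # advance the state by matrix products
```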

As Liu et al. go on:

>>Memristors, nano-scale devices conceived by Leon Chua in 1971, have been physically realized by scientists from Hewlett-Packard [1], [2]. In contrast with the traditional CMOS technology, memristors can be used as non-volatile memories for building brain-like learning machines with memristive synapses [3]. They offer the ability to construct a dense, continuously programmable, and reasonably accurate cross-point array architecture, which can be used for data-intensive applications [4]. For example, a memristor crossbar array exhibits a unique type of parallelism that can be utilized to perform matrix-vector multiplication and solve systems of linear equations in an astonishing O(1) time complexity [5]–[8]. Discovery and physical realization of memristors has inspired the development of efficient approaches to implement neuromorphic computing systems that can mimic neuro-biological architectures and perform high-performance computing for deep neural networks and optimization algorithms [9]. The similarity between the programmable resistance state of memristors and the variable synaptic strengths of biological synapses facilitates the circuit realization of neural network models [10]. Nowadays, an artificial neural network has become an extremely popular machine learning tool with a wide spectrum of applications, ranging from prediction/classification, computer vision, natural language processing, image processing, to signal processing [11].>>
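The customised power-iteration (PI) method mentioned in the abstract quoted earlier is easy to see in software terms: each iteration costs exactly one matrix-vector product, i.e. one crossbar read. A toy numpy sketch (random data, not the paper’s circuit):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
Cov = X.T @ X / len(X)          # covariance-like matrix to analyse

v = rng.normal(size=5)
for _ in range(100):            # power iteration: repeated crossbar reads
    v = Cov @ v                 # one matrix-vector product per step
    v /= np.linalg.norm(v)      # normalise (done off-array in practice)

# Rayleigh quotient gives the top eigenvalue; v is the leading
# principal component direction, as used for PCA.
print(v @ Cov @ v, v)
```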

They also remark, tying in the general hill-climbing approach [here, in steepest-descent form] commonly discussed in optimisation:

>>a general question to be answered in this context is: how can one design a general memristor-based computation framework to accelerate the optimization procedure? The interior-point algorithm is one of the most commonly-used optimization approaches implemented in software. It begins at an interior point within the feasible region, then applies a projective transformation so that the current interior point is the center of projective space, and then moves in the direction of the steepest descent [37]. However, the inherent hardware limitations prevent the direct mapping from the interior-point algorithm to memristor crossbars. First, a memristor crossbar only allows square matrices with nonnegative entries during computation, since the memristance is always nonnegative. Second, the memristor crossbar suffers from hardware variations, which degrade the reading/writing accuracy of memristor crossbars.>>

Their solution to the first problem is to use additional memristors to represent negative values, with the overall problem decomposed into sub-problems. Then, with the help of ADMM, the array is programmed just once, which gets around the hardware-variation issue. Using suitable scaling and manipulations, the memristors are then set up to carry out matrix multiplication. That is an inherently powerful result, as anyone familiar with the ubiquity of matrix models in modern systems analysis will testify.
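A common form of the negative-values trick (offered here as an illustrative scheme, not necessarily Liu et al.’s exact mapping) is the differential pair: split each signed weight into two nonnegative conductances and subtract the paired column currents:

```python
import numpy as np

W = np.array([[ 0.5, -0.3],
              [-0.2,  0.8]])      # desired signed weight matrix

# Split into two nonnegative conductance arrays, since memristance
# cannot be negative (illustrative scheme):
G_pos = np.maximum(W, 0.0)
G_neg = np.maximum(-W, 0.0)

V = np.array([1.0, 0.5])
I = G_pos.T @ V - G_neg.T @ V     # subtract the paired column currents
print(I, W.T @ V)                 # the two results match
```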

Reversing the inputs and outputs then sets up a solver that runs in O(1), i.e. constant, effectively “one step” [with seven-league boots] computation time, which Liu et al. aptly describe as “astonishing.”
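In software terms the “reversed” crossbar’s settling state is the solution of G x = b; the physical array reaches it in one analogue settling time rather than by explicit elimination. An idealised numerical check:

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # programmed conductance matrix
b = np.array([1.0, 2.0])     # currents forced at the columns

# The analogue array settles to the row voltages x with G x = b in one
# settling time; here we just emulate that end state numerically.
x = np.linalg.solve(G, b)
print(x, G @ x)              # G @ x reproduces b
```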

Food for thought. END

Comments
Polistra, neuromorphic computing and neural networks are also on the agenda. All of this AI stuff is very directly connected to the intelligent agent side of ID. My theme is that computation on a substrate [including memristors etc.] should not be confused with self-aware, conscious rational contemplation, though such contemplation may make use of such substrates. And yes, memristors are now increasingly routinely embedded in CMOS chips and used as part of the substrates. Where, memristors can not only do weighted sums but, with a bit of algorithmic massaging, can carry out gate functions and logical operations, plus storage, leading to architectures where memory can do logic too inside itself, cutting out the transfer- to- registers- for- processing- then- return execution bottleneck that leads to caching and to instruction assembly-line pipelining with speculative and out-of-order execution with commit/discard, etc., which have recently led to the Meltdown and Spectre vulnerabilities fiasco. Linked, I am looking at how sensor arrays can feed summing-threshold gates -- a description of a "neuron" -- then inner layered weighted summing that manifests in various mathematical operations that directly correlate with physical ones. Where, the pulsed firing-sequence operation found in biological neurons and now introduced in Si ones seems to be a quite useful feature, not least as it contributes to robustness. All of this also seems to be tied to the gestalt, configuration/wholeness, whole- is- qualitatively- distinct- from- a- mere- [weighted-]summation- of- parts complex coherent whole view. Beyond lies the world of cybernetic, embodied active agents and the issue: whence creativity and designing synthesis? KF

kairosfocus
February 11, 2018 at 02:31 AM PDT
For clarity, it has always been possible to build an analog circuit that performs a single memristor function. In the realm of discrete tubes or transistors, the memristor module necessarily has a few input wires and one output wire, so the CONFIGURATION of the system depends on how you plug the modules together. If you want a different configuration you have to plug things differently. What makes the 2008 HP development interesting is that the function resides CONTINUOUSLY across a surface. This makes retina or cochlea behavior practical. One spot on the surface can affect all of the area around it, which means the overall configuration of the system is itself constantly changing. Just like neurons.

polistra
February 9, 2018 at 03:04 PM PDT
HAL, HAL, where are you? KF

kairosfocus
February 9, 2018 at 03:02 PM PDT
Wow, interesting topic; looks relatively easy to make a simple one for testing and playing. https://www.youtube.com/watch?v=3ZRIPdr1lug&t=27s I hope it doesn't become self-aware and take over the world :D

Eugen
February 9, 2018 at 10:03 AM PDT
DS, military ones and surveillance-state ones. Yes, we need to know the dangers so we can counter them. Asimov's three ethical laws become relevant. KF

kairosfocus
February 9, 2018 at 08:52 AM PDT
>>For example, a memristor crossbar array exhibits a unique type of parallelism that can be utilized to perform matrix-vector multiplication and solve systems of linear equations in an astonishing O(1) time complexity>>

😲 Has science gone too far? All these recent developments in AI are fascinating, but I suspect we'll 'soon' see a number of revolutionary and disturbing applications of the technology. We're no longer in the realm of a bunch of underfunded geeks hacking Lisp.

daveS
February 9, 2018 at 06:03 AM PDT
More on memristors in action — including crossbar networks and solving linear equation arrays

kairosfocus
February 9, 2018 at 02:56 AM PDT
