Quantum computing, of course, has been a hot sci-tech topic in recent years, with stories about how it will render obsolete the large prime number product encryption schemes that give us some of our strongest codes, and stories of vast computing power exponentially beyond today's hottest supercomputers. With hot money being poured in by the wheelbarrow load. (Well, maybe buckets of bits, as most serious transactions are digital nowadays. Itself already a problem . . . security is an issue.)
What are we to make of this? (My bet is, superposition. Itself, a core quantum issue.)
Reader and commenter Relatd has given us a useful, first-level video:
(A good place to begin, a useful survey with some good food for thought, and better than the vids I had found in my own searches; thanks. I do have a few quibbles, starting with the common tendency to use the loose language of being “both 1 and 0” to explain superposed — and often, entangled — wave functions in quantum computers. To give a fairly rough analogy, if we have a 2-d space and we say a point,
P — or 0P, or |P> — is |0X> + |0Y>, all suitably dressed up in kets . . .
we don’t usually couch that as P being X and Y at the same time; it is something else, a vector away from 0X and away from 0Y, a case of real emergence through interaction. Not spooky, inexplicable emergence. BTW, integers have size and direction from 0, so they are already vectors; as are the Reals, which are mile-posted by the integers. [Where, yes, I deliberately use 0 for the zero point rather than O for the origin point. While we are at it, |s> is a column vector in ket notation, and the bra <t| is a row vector. Let’s add that, with <bras| as row vectors and |kets> as column vectors, a matrix is a bra of kets, <|k1>, |k2> . . . |kn>|, or equivalently a ket of bras, |<b1|, <b2| . . . <bn|>, which allows us to address row or column operations conveniently, and is a possible nested matrix representation that emphasises the vector components. Yes, we here see linear algebra, with matrices and tensors lurking.])
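To make the ket and bra bookkeeping concrete, here is a minimal numpy sketch of my own (the labels and amplitudes are purely illustrative): |P> is built as a column vector from the basis kets, <P| is its conjugate transpose, and a matrix is assembled column-wise, i.e. as a "bra of kets":

```python
import numpy as np

# Basis kets |X> and |Y> as column vectors in a 2-d space
ket_X = np.array([[1.0], [0.0]], dtype=complex)
ket_Y = np.array([[0.0], [1.0]], dtype=complex)

# A superposed ket |P> = a|X> + b|Y> -- a vector "away from" both axes,
# not "X and Y at the same time". Amplitudes chosen for illustration.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
ket_P = a * ket_X + b * ket_Y          # column vector (ket)

bra_P = ket_P.conj().T                 # row vector (bra) = conjugate transpose

# <P|P> should be 1 for a normalised state
print("ket |P>:\n", ket_P)
print("bra <P|:", bra_P)
print("<P|P> =", (bra_P @ ket_P)[0, 0].real)

# A matrix as a "bra of kets": columns placed side by side
M = np.hstack([ket_X, ket_Y])          # 2x2 identity here, built column-wise
print("M @ |P> =\n", M @ ket_P)
```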
Here is a short vid that may also be useful, especially as it gives a flavour of some problems that may become solvable in a decade or so, or so the guess goes:
Ms Hossenfelder, of course, cries hype:
Take headlines with a grain of salt, in short.
Her bet is that, most likely, this will be a bubble that fails, and that in a decade or so you may be able to discuss multiple-particle entanglement with your taxi driver. Maybe.
JVL, another reader and frequent commenter, has given another interesting link, where Sutter guides us in (not) understanding quantum mechanics. A slice gives the flavour:
Yet despite its overwhelming success as a framework for understanding what nature does, quantum mechanics tells us very little about how nature works. Quantum mechanics provides a powerful set of tools for successfully making predictions about what subatomic particles will do, but the theory itself is relatively silent about how those subatomic particles actually go about their lives.
For example, take the familiar concept of a quantum jump. An electron in an atom changes energy levels and thus either absorbs or emits energy in the form of one photon of radiation. No big deal, right? But how does the electron “jump” from one energy level to another? If it moves smoothly, like literally everything else in the Universe, we would see the energy involved change smoothly as well. But we don’t.
So does the electron magically disappear from one energy level and magically reappear in another? If it does, name one other physical object in the Universe that acts like that. While you’re at it, please give me a physical description of the unfolding of this magic act. I’ll wait.
Quantum mechanics is completely silent on how the electron changes orbitals; it just blandly states that it does and tells us what outcomes to expect when that happens.
How are we supposed to wrap our heads around that? How can we possibly come to grips with a theory that doesn’t explain how anything works? People have struggled with these questions ever since quantum mechanics was developed, and they’ve come up with a number of ways to make sense of the processes involved in quantum behavior.
I confess to being a Copenhagenist, with hints of “shut up and calculate” — anyone who has done solid state electronics will understand why, and also why some suggest that “roughly a quarter of our world’s GDP relies on quantum mechanics.” Empirically reliable and astonishingly precise, but conceptually intractable and often downright weird. Feynman is hardly the only physicist or Nobel Prize winner to suggest that no one understands Q Mech.
But now, we are looking at computers that don’t just use Q Mech to power the devices in circuits that neatly deliver 1’s [hi voltages] and 0’s [lo voltages], electronic extensions of arrangements of switches:
Where each switch latches into an on or off state and stores one binary digit, a bit of information.
We represent such gates — in electronic circuit form — with modified amplifier symbols, with the bubble as NOT, e.g.
Things get interesting when we use feedback and create memory elements [the core of registers] starting with the RS latch. Just to mix things up, let’s use the NOR gate latch:
Here, the HOLD state is a memory storage state. Latches and flip-flops are core to registers, which in turn are at the heart of a classical digital computer such as the classic IBM S/360. In outline:
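For those who like to poke at such things in code, here is a minimal Python sketch of my own (not taken from the diagrams above) of NOT, NOR and the cross-coupled NOR RS latch, showing that the HOLD input combination (R = S = 0) simply retains the stored bit:

```python
# Classical logic sketch: NOT, NOR, and a cross-coupled NOR RS latch.

def NOT(a: int) -> int:
    return 1 - a

def NOR(a: int, b: int) -> int:
    return NOT(a | b)

def rs_nor_latch(R: int, S: int, Q: int, Qbar: int):
    """Settle the cross-coupled NOR latch: iterate the two gate
    equations until the outputs stop changing."""
    for _ in range(4):
        Q_new = NOR(R, Qbar)
        Qbar_new = NOR(S, Q)
        if (Q_new, Qbar_new) == (Q, Qbar):
            break
        Q, Qbar = Q_new, Qbar_new
    return Q, Qbar

# SET the latch, then HOLD (R = S = 0): the stored bit persists.
Q, Qbar = rs_nor_latch(R=0, S=1, Q=0, Qbar=1)     # SET   -> Q = 1
print("after SET  :", Q, Qbar)
Q, Qbar = rs_nor_latch(R=0, S=0, Q=Q, Qbar=Qbar)  # HOLD  -> Q stays 1
print("after HOLD :", Q, Qbar)
Q, Qbar = rs_nor_latch(R=1, S=0, Q=Q, Qbar=Qbar)  # RESET -> Q = 0
print("after RESET:", Q, Qbar)
```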
But now, we have gone to superposed quantum state bit elements, Qubits. As Wikipedia helpfully summarises:
Thus, we see the Bloch sphere representation of the superposed state of a Qubit:
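To put numbers on the picture, here is a small numpy sketch (angles chosen purely for illustration) that takes the Bloch angles θ and φ, forms the superposed state cos(θ/2)|0> + e^(iφ) sin(θ/2)|1>, and recovers both the measurement probabilities and the point (x, y, z) on the sphere:

```python
import numpy as np

# Bloch sphere angles (illustrative values): theta from the +z axis, phi around it.
theta, phi = np.pi / 3, np.pi / 4

# Qubit state |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
alpha = np.cos(theta / 2)
beta = np.exp(1j * phi) * np.sin(theta / 2)
psi = np.array([alpha, beta])

# Probabilities of measuring "0" or "1": squared magnitudes of the amplitudes.
print("P(0) =", abs(alpha) ** 2, " P(1) =", abs(beta) ** 2)
print("normalised:", np.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1.0))

# The same state as a point on the Bloch sphere.
x = np.sin(theta) * np.cos(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(theta)
print("Bloch vector (x, y, z) =", (x, y, z))
```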
So, as Mark Hill [Ed] et al summarise:
a classical bit exists in one of two well-defined states, “1” or “0”. On the other hand, the basic unit of state in quantum computers, the qubit, is described by quantum mechanical two-level systems, such as the two spin states of spin-1/2 atoms, or the horizontal and vertical polarization states of a single photon. The difference between a qubit and a bit is that the physical state of a qubit is described by complex-valued amplitudes equal to the square root of the probability of finding the qubit in one of the two binary states “0” and “1”. Similarly, the state of an n-qubit quantum system is described by 2^n complex-valued probability amplitudes, each equal to the probability of finding the quantum system into any of the 2^n possible n-bit binary bitstrings. Mathematically, the state of an n-qubit quantum system can be represented as a complex-valued 2^n-element vector. Furthermore, a single quantum gate (represented as a 2^n × 2^n unitary matrix) applied to an n-qubit quantum system acts simultaneously on all 2^n elements of the system state vector. This means that the amount of information that can potentially be processed by quantum computers doubles with each additional qubit in the system. [Quantum Computing for Computer Architects, 2nd Edn, pp. 7 – 8.]
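As a toy instance of the scaling just quoted, here is a numpy sketch of my own (not from the book): a 2-qubit register carries 2^2 = 4 complex amplitudes, and a single gate, written as a 4 × 4 unitary, acts on all four at once. A Hadamard on the first qubit followed by a CNOT takes |00> to the entangled Bell state (|00> + |11>)/√2:

```python
import numpy as np

# 2-qubit register: state vector of 2^2 = 4 complex amplitudes, starting in |00>.
state = np.zeros(4, dtype=complex)
state[0] = 1.0                                                # |00>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
I = np.eye(2, dtype=complex)

# A gate on one qubit of an n-qubit register becomes a 2^n x 2^n unitary
# via the tensor (Kronecker) product.
H_on_q0 = np.kron(H, I)        # Hadamard on the first qubit (left factor)

# CNOT with the first qubit as control, second as target, basis order |q0 q1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = CNOT @ (H_on_q0 @ state)
print("amplitudes over |00>,|01>,|10>,|11>:", np.round(state, 3))
print("measurement probabilities:", np.round(np.abs(state) ** 2, 3))
```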
This makes a quantum computer into a powerful device, once we can put together enough qubits and once we can figure out how to manipulate them effectively. Just 300 qubits span 2^300 ≈ 2.04*10^90 states. Estimates for practically scaled machines run to 10,000 – 10 million gates. And no, there is no reason to believe the industry can leverage a Moore’s Law type scaling effect, which in any case is running out of steam for silicon.
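A quick sanity check on that figure, using Python's arbitrary-precision integers:

```python
# Quick check of the 300-qubit state-count figure.
n = 300
print(2 ** n)                      # exact integer: 91 digits
print(f"{float(2 ** n):.3e}")      # ~2.037e+90, i.e. the 2.04*10^90 quoted above
```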
Qiskit has a useful online site.
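For those who want to get hands-on, the same Bell-state construction can be run in Qiskit itself. A minimal sketch, assuming a recent Qiskit installation (pip install qiskit); the Statevector helper from qiskit.quantum_info gives an ideal, noise-free simulation, and exact APIs have shifted between Qiskit versions, so treat this as a starting point rather than gospel:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Two-qubit circuit: Hadamard then CNOT gives the Bell state (|00> + |11>)/sqrt(2)
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
print(qc.draw())

# Ideal (noise-free) statevector of the circuit's output
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # expect {'00': 0.5, '11': 0.5}
```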
So, Q: Hype or hope?
A: Superposition. KF