Quantum computing, of course, has been a hot sci-tech topic in recent years, what with stories of how it will render obsolete the large-prime-product encryption schemes that give us some of our strongest codes, and stories of vast computing power exponentially beyond today's hottest supercomputers. With hot money being poured in by the wheelbarrow load. (Well, maybe by buckets of bits, as most serious transactions are digital nowadays. Itself already a problem . . . security is an issue.)
What are we to make of this? (My bet is, superposition. Itself, a core quantum issue.)
Reader and commenter Relatd has given us a useful, first-level video:
(A good place to begin, a useful survey with some good food for thought and better than the vids I had found in my searches; thanks. I do have a few quibbles, starting with the common tendency to use the loose language of being “both 1 and 0” to explain superposed — and often, entangled — wave functions in quantum computers. To give a fairly rough analogy, if we have a 2-d space and we say a point,
P — or, 0P, or |P> — is |0X> + |0Y>, all suitably dressed up in kets . . .

we don’t usually couch that as “P is X and Y at the same time”; it is something else, a vector away from 0X and away from 0Y, a case of real emergence through interaction — not spooky, inexplicable emergence. BTW, integers have size and direction from 0, so they are already vectors; as are the Reals, which are mile-posted by the integers. [Where, yes, I deliberately use 0 for the zero point rather than O for the origin point. While we are at it, |s> is a column vector in ket notation, and the bra <t| is a row vector. Let’s add: with <bras| as row vectors and |kets> as column vectors, a matrix is a bra of kets, <|k1>, |k2> . . . |kn>|, or equivalently a ket of bras, |<b1|, <b2| . . . <bn|>, which allows us to address row or column operations conveniently, and is a possible nested matrix representation that emphasises the vector components. Yes, we here see linear algebra, with matrices and tensors lurking.]
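For the curious, the 2-d analogy can be put into a few lines of Python (a toy sketch; the variable names are mine, not standard notation), showing that the superposed vector is a new direction, not "both basis states at once":

```python
import math

# A superposed "ket" |P> = a|X> + b|Y>, stored as a plain 2-d vector,
# with equal weights normalised so that a^2 + b^2 = 1.
a = b = 1 / math.sqrt(2)
P = [a, b]

# P is not "X and Y at the same time": it is a new vector,
# roughly 45 degrees away from either basis direction.
angle_from_X = math.degrees(math.atan2(P[1], P[0]))
print(angle_from_X)  # ~45 degrees
```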
Here is a short vid that may also be useful, especially as it gives a flavour of some problems that may be solvable in a decade or so, or so the guess is:
Ms Hossenfelder, of course, cries hype:
Take headlines with a grain of salt, in short.
Her bet is, most likely, this will be a bubble that fails and in a decade or so, you may be able to discuss multiple particle entanglement with your taxi driver. Maybe.
JVL, another reader and frequent commenter, has given another interesting link, where Sutter guides us in (not) understanding quantum mechanics. A slice gives the flavour:
Yet despite its overwhelming success as a framework for understanding what nature does, quantum mechanics tells us very little about how nature works. Quantum mechanics provides a powerful set of tools for successfully making predictions about what subatomic particles will do, but the theory itself is relatively silent about how those subatomic particles actually go about their lives.
For example, take the familiar concept of a quantum jump. An electron in an atom changes energy levels and thus either absorbs or emits energy in the form of one photon of radiation. No big deal, right? But how does the electron “jump” from one energy level to another? If it moves smoothly, like literally everything else in the Universe, we would see the energy involved change smoothly as well. But we don’t.
So does the electron magically disappear from one energy level and magically reappear in another? If it does, name one other physical object in the Universe that acts like that. While you’re at it, please give me a physical description of the unfolding of this magic act. I’ll wait.
Quantum mechanics is completely silent on how the electron changes orbitals; it just blandly states that it does and tells us what outcomes to expect when that happens.
How are we supposed to wrap our heads around that? How can we possibly come to grips with a theory that doesn’t explain how anything works? People have struggled with these questions ever since quantum mechanics was developed, and they’ve come up with a number of ways to make sense of the processes involved in quantum behavior.
I confess to being a Copenhagenist, with hints of “shut up and calculate” — anyone who has done solid state electronics will understand why, and also why some suggest that “roughly a quarter of our world’s GDP relies on quantum mechanics.” Empirically reliable, astonishingly precise, but conceptually intractable and often downright weird. Feynman is hardly the only physicist or Nobel Prize winner to suggest that no one understands Q Mech.
But now, we are looking at computers that don’t just use Q Mech to power the devices in circuits that neatly deliver 1’s [hi voltages] and 0’s [lo voltages], electronic extensions of arrangements of switches:

Where each switch latches into an on or off state and stores one binary digit, a bit, of information.
We represent such gates — in electronic circuit form — with modified amplifier symbols, the bubble denoting NOT, e.g.

Things get interesting when we use feedback and create memory elements [the core of registers] starting with the RS latch. Just to mix things up, let’s use the NOR gate latch:

Here, the HOLD state is a memory storage state. Latches and flip-flops are core to registers, which in turn are at the heart of a classical digital computer such as the classic IBM s360. In outline:
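For concreteness, the cross-coupled NOR latch can be simulated in a few lines of Python (an illustrative sketch; the function names are mine):

```python
def nor(a, b):
    """NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def rs_latch(r, s, q, qbar):
    """One settling pass of a cross-coupled NOR latch:
    iterate the feedback loop until the outputs stop changing."""
    for _ in range(4):  # a few passes are enough to settle
        q_new = nor(r, qbar)
        qbar_new = nor(s, q)
        if (q_new, qbar_new) == (q, qbar):
            break
        q, qbar = q_new, qbar_new
    return q, qbar

# SET: S=1, R=0 drives Q to 1
q, qbar = rs_latch(r=0, s=1, q=0, qbar=1)
# HOLD: R=S=0 keeps whatever the feedback loop already stores
q, qbar = rs_latch(r=0, s=0, q=q, qbar=qbar)
print(q, qbar)  # 1 0
```

Note how the HOLD input (R = S = 0) simply re-settles to the stored value: that feedback loop is the one bit of memory.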

But now, we have gone to superposed quantum-state bit elements, qubits. As Wikipedia helpfully summarises:

Thus, we see the Bloch sphere representation of the superposed state of a qubit:

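For the curious, here is a small Python sketch (my own function name, not a standard library call) of how a qubit's two complex amplitudes map to a point on the Bloch sphere, via the usual Pauli expectation values:

```python
import math

def bloch_vector(alpha, beta):
    """Map a normalised qubit state alpha|0> + beta|1> to its
    (x, y, z) point on the Bloch sphere."""
    # Standard expectation values of the Pauli X, Y, Z operators:
    x = 2 * (alpha.conjugate() * beta).real
    y = 2 * (alpha.conjugate() * beta).imag
    z = abs(alpha) ** 2 - abs(beta) ** 2
    return x, y, z

# |0> sits at the north pole:
print(bloch_vector(1.0, 0.0))       # (0.0, 0.0, 1.0)
# Equal superpositions sit on the equator, not at either pole:
s = 1 / math.sqrt(2)
print(bloch_vector(s, s))           # (~1.0, 0.0, 0.0)
print(bloch_vector(s, s * 1j))      # (0.0, ~1.0, 0.0)
```

The poles are the classical 0 and 1; every other point on the sphere is a legitimate superposed state, which is exactly what the loose “both 1 and 0” shorthand obscures.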
So, as Mark Hill [Ed] et al summarise:
a classical bit exists in one of two well-defined states, “1” or “0”. On the other hand, the basic unit of state in quantum computers, the qubit, is described by quantum mechanical two-level systems, such as the two spin states of spin-1/2 atoms, or the horizontal and vertical polarization states of a single photon. The difference between a qubit and a bit is that the physical state of a qubit is described by complex-valued amplitudes equal to the square root of [the probability of] finding the qubit in one of the two binary states “0” and “1”. Similarly, the state of an n-qubit quantum system is described by 2^n complex-valued probability amplitudes, each equal to [the square root of] the probability of finding the quantum system in any of the 2^n possible n-bit binary bitstrings. Mathematically, the state of an n-qubit quantum system can be represented as a complex-valued 2^n-element vector. Furthermore, a single quantum gate (represented as a 2^n × 2^n unitary matrix) applied to an n-qubit quantum system acts simultaneously on all 2^n elements of the system state vector. This means that the amount of information that can potentially be processed by quantum computers doubles with each addition[al] qubit in the system. [Quantum Computing for Computer Architects, 2nd Edn, pp. 7–8.]
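The quoted point can be made concrete with a short Python sketch (illustrative only; the function name is mine). A 3-qubit state is a vector of 2^3 = 8 complex amplitudes, and a single-qubit gate — here a Hadamard — acts on every one of them:

```python
import math

def hadamard_on(state, target):
    """Apply a Hadamard gate to qubit `target` of a state vector
    of 2**n complex amplitudes: |0> -> (|0>+|1>)/sqrt(2),
    |1> -> (|0>-|1>)/sqrt(2)."""
    h = 1 / math.sqrt(2)
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        flipped = i ^ (1 << target)      # index with the target bit toggled
        if (i >> target) & 1 == 0:
            new[i] += h * amp
            new[flipped] += h * amp
        else:
            new[i] -= h * amp
            new[flipped] += h * amp
    return new

n = 3                        # 3 qubits -> 2**3 = 8 amplitudes
state = [0j] * (2 ** n)
state[0] = 1 + 0j            # start in |000>
for q in range(n):           # one gate per qubit...
    state = hadamard_on(state, q)
# ...and all 8 amplitudes have been acted on, giving 8 equal probabilities:
print([round(abs(a) ** 2, 3) for a in state])  # eight entries of 0.125
```

Of course, a classical simulation like this pays the exponential cost explicitly, holding a 2^n-element list in memory, which is precisely why physical qubits are interesting.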
This makes a quantum computer a powerful device, once we can put together enough qubits and figure out how to manipulate them effectively. Just 300 qubits represents a span of 2^300 ≈ 2.04*10^90 states. Estimates for practically scaled machines run to 10,000 – 10 million gates. And no, there is no reason to believe the industry can leverage a Moore's Law-type scaling effect, which in any case is running out of steam for silicon.
Qiskit has a useful online site.
So, Q: Hype or hope?
A: Superposition. KF
L&FP, 65g: Quantum vs classical digital computing — hope or hype? (Or, superposition?)
Not hard to find answers.
https://www.hpcwire.com/2022/06/09/qa-with-ibms-jay-gambetta-hpcwire-person-to-watch-in-2022/
https://www.sdxcentral.com/articles/interview/ibm-leader-explains-downstream-impact-of-quantum-computing/2022/10/
Personally, I have no idea. What do our resident quantum physicists have to say?
They say, “Do your own homework.”
-Q
So they’re in a superposition of all possible answers until I go and ask them?
Sev, you know your answer already; this is intended to be a briefing. You know in outline how qubits work [by superposition of states and entanglement, both already outlined] and have been invited to watch suggested primers on quantum computers, which are a potentially transformative technology. The hope vs hype theme is there, and the suggested answer is, superposition. Whether there will be commercial success within a decade or two is an open question — let's add: is Schroedinger's cat alive or dead? Notice, too, that the future state is not resolved until observed, a familiar point. We should note that the computer industry as a whole lost money until the mid-80s or so, in effect forty years. IBM bet the farm on the s360 in the 1960s, going in way over their heads [$5 bn, mostly to become its own manufacturer]; they said never again. KF
PS, if you are an investor, the issue is to know enough to know whom to bet on; and as a taxpayer you are already helping to pay for basic research, so you should be pondering policy options and their proposers, on the principle that taxation calls for representation.
PPS, we are already seeing here how superposition is a weak form of emergence, and how probabilities are partly an index of ignorance [thus are informational, as risk is better than utter uncertainty], but Q computers are going to manipulate superposition and entanglement to explore a search space, hopefully giving an optimal or near-optimal result. Our space for logic and first principles is growing.
F/N: I see I forgot to link Qiskit, will do so https://qiskit.org/
Querius @4
Applause! Applause!
Kf@6.
Ignore banalities.
Belfast, as quantum keeps coming up, let's try for a 101-level first primer that gets us beyond mystique, in steps. Here, we bring together emergence [in its legitimate form], superposition [no, the qubit is not in the 0 and 1 states at once but in a superposition], entanglement, and the peculiar blend of partial knowledge and partial ignorance, thus degree of information, implicit in probabilities. We extend to investment and to voting on technology policy. All of these concepts are of course key to logic and first principles, and so to ID debates. KF
PS, the Bloch sphere of unit radius has a 3rd dimension, reflecting the j-axis, j (or i for mathematicians) being a square root of −1, or perhaps better, a vector rotation: j*x is a right-angle rotation, j*j*x is −x, i.e. −1*x, so j*j is, by substitution, −1.
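That substitution is easy to check with Python's built-in complex type, where j is written 1j:

```python
# Multiplying by j rotates a vector a right angle in the plane;
# two right-angle turns point it the opposite way, so j*j = -1.
x = 3 + 0j                   # a vector of length 3 along the real axis
quarter_turn = 1j * x        # now along the imaginary (j) axis
half_turn = 1j * (1j * x)    # two rotations: -x
print(quarter_turn, half_turn)  # 3j (-3+0j)
assert 1j * 1j == -1
```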
Of note: Quantum computation excels at the ‘traveling salesman problem’:
And, ‘Traveling salesman problems’ are notorious for keeping classical supercomputers busy for days, and are ‘Just about the meanest problems you can set a computer (on)’.
Moreover, “the protein-folding problem is computationally “hard” in the same way that the traveling-salesman problem is hard.”
“Therefore, . . . given that proteins obviously do fold, (then) they are doing so, not by random search, but by following favored pathways. The challenge of the protein folding problem is to learn what those pathways are.”
Moreover, “Luo and Lo say that if this process (of protein folding) were a quantum one, the shape could change by quantum transition, meaning that the protein could ‘jump’ from one shape to another without necessarily forming the shapes in between.
Luo and Lo explore this idea using a mathematical model of how this would work and then derive equations that describe how the rate of “quantum folding” would change with temperature.”
On top of that, the following 2015 article experimentally confirmed that proteins are indeed based on quantum principles. More specifically, “Quantum coherent-like state (have been) observed in a biological protein for the first time”
Thus, we have fairly good evidence that protein folding itself is based on quantum computation, not on a ‘random’ search as was presupposed within Darwinian thinking.
As well, besides proteins, it is now found that DNA itself does not belong to the world of classical mechanics but instead to the world of quantum mechanics. In the following video, at the 22:20 minute mark, Dr Rieper shows why high temperatures do not prevent DNA from having entanglement, and then at the 24:00 minute mark Dr Rieper goes on to remark that practically the whole DNA molecule can be viewed as quantum information with classical information embedded within it.
And indeed, quantum computation in DNA would go a long way towards explaining how it is even remotely possible for DNA to quickly locate “a one-half-inch stretch within those 24 miles (of DNA)”.
And it would also go a long way towards explaining how it is even remotely possible for DNA to spot “potholes on every street all over the country and getting them fixed before the next rush hour.”
The implications of all these advances in quantum biology are fairly obvious. As the following article stated, “The discovery of quantum states in protein-DNA complexes would thus allude to the tantalizing possibility that these systems might be candidates for quantum computation.”
In other words, it is becoming increasingly obvious that human engineers ought to look to quantum systems in biology in order to try to find some solid hints as to how to build better quantum computers.
Of supplemental note:
Sorry, I’m in the negative camp – it ain’t gonna happen.
With the number of coherently operating qubits currently around 100 (although there are plans for 1,000 or so), and the minimum number needed to do useful work better than classical computers estimated at more than 100,000, they are about as far from the hype as fusion energy is: a couple of orders of magnitude at least.
All the problems with quantum computing (decoherence, noise, input and output precision, algorithm speed, etc.) get worse as the number of qubits increases. Imagine trying to set the initial superposition of 100,000 qubits precisely, at a few kelvins, within a few microseconds. This is after you entangle them in some zero state. Then you have to run them through the algorithm and read out the result – which comes with some probability – all before the noise effects accumulate and the qubits decohere. But you say, they now have quantum error correction! Yes, except that to do that effectively you need another two orders of magnitude in the number of qubits!
Don’t get me wrong: it is very interesting science and technology, and perhaps some niche applications will evolve (via ID), such as quantum modelling and sorting out molecular structures – tasks that perhaps need fewer qubits or less precision, and for which probable answers are good enough. But even those will take a long time to reach a useful level, and the hype bubble may indeed burst before then.
Fasteddious at 12,
You seem to know some things but not others. Quantum computing becomes functional this year, 2023. You should contact IBM directly. Fusion can be done, but not for continuous power generation; a wrong comparison.
Relatd @ 13
You seem to be rather optimistic. “Functional” is not the same as useful and commercially viable. IBM has had a “functional” quantum computer available online for a long time; it has five (count ’em, 5) qubits. IBM is hoping to release a 1,000-qubit chip this year, but there is no info yet on how well it works for actual quantum processing, and it is still two orders of magnitude away from the >100,000 qubits needed to make it all useful to anyone but quantum researchers.
Fasteddious at 14,
Your research in this area appears to be lacking.
“In 2022, we unveiled the 433-qubit Osprey processor, just one year after breaking the 100-qubit barrier with our 127-qubit Eagle chip.
“In 2023, we are on track [to] deliver the 1,121-qubit Condor processor. These processors push the limits of what can be done with single chip processors and controlling large systems.”
Source: https://www.ibm.com/quantum/roadmap
IBM has done far more for quantum computing and in less time than the development of break-even fusion energy. You need a broader overview:
https://www.insidequantumtechnology.com/news-archive/quantum-news-briefs-january-13-quantum-machine-learning-to-progress-in-2023-ibm-announces-725m-quantum-computing-deal-with-australian-government-new-industry-academia-research-collaboration-announ/
Relatd @ 15: Yes, I read about those chips and plans in IEEE Spectrum. I will await independent assessment of how these chips fare in terms of true quantum processing. In any case, they are still two orders of magnitude from the 100,000 qubits seen as a bare minimum for general processing. Perhaps my view is too narrow or even cynical, but I prefer to call it realistic and practical. Recall that the videos at the top were about the hype surrounding quantum computing. That hype is sustained by announcements like the ones you provided.
FE & Relatd, we are back at hope vs hype. The answer is, superposition, to be resolved as the actual future emerges into observable reality. Strategic planning is a complex process of trying to manage uncertainty. KF
PS, meanwhile, having a clear idea of the quantum process and how it limits our understanding is itself significant for moving beyond being readily taken in by hype.