The use of the word “supremacy” is probably the main thing that rubs people the wrong way. When I first heard it myself, it did feel a little strange.
Doing some research into it, though, the term “quantum supremacy” seems to be pretty well defined and by and large accepted within the physics community. Supposedly the coiner of the term believed “quantum advantage” wouldn’t emphasize the point enough.
What the term requires, though, is relatively straightforward. It’s basically just about definitively showcasing a problem that takes super-polynomial time on a classical computer but some significantly lower time complexity on a quantum computer.
Google’s Quantum AI Lab achieved exactly that, based on the results provided in the paper released by NASA. Granted, the problem in question is extremely specific, but that doesn’t invalidate it as an admissible problem. The author raises points of contention, but they largely just seem like nitpicks to me.
> The author raises points of contention, but they largely just seem like nitpicks to me.
They don't seem like just nitpicks to me. The core issue the author raises is the noise in the final output. If the quantum computer can only produce significantly noisy data (i.e., not in accordance with the theoretical distribution) whereas the conventional computer produces noiseless data, then that isn't a clear case of quantum supremacy.
We can look directly to the paper itself to see how it addresses the issues of error uncertainty that the author premises his blogpost around:
“This fidelity should be resolvable with a few million measurements, since the uncertainty on FXEB is 1/√Ns, where Ns is the number of samples. Our model assumes that entangling larger and larger systems does not introduce additional error sources beyond the errors we measure at the single and two-qubit level — in the next section we will see how well this hypothesis holds.
FIDELITY ESTIMATION IN THE SUPREMACY REGIME
The gate sequence for our pseudo-random quantum circuit generation is shown in Fig.3. One cycle of the algorithm consists of applying single-qubit gates chosen randomly from {√X,√Y,√W} on all qubits, followed by two-qubit gates on pairs of qubits. The sequences of gates which form the “supremacy circuits” are designed to minimize the circuit depth required to create a highly entangled state, which ensures computational complexity and classical hardness.

While we cannot compute FXEB in the supremacy regime, we can estimate it using three variations to reduce the complexity of the circuits. In “patch circuits”, we remove a slice of two-qubit gates (a small fraction of the total number of two-qubit gates), splitting the circuit into two spatially isolated, non-interacting patches of qubits. We then compute the total fidelity as the product of the patch fidelities, each of which can be easily calculated. In “elided circuits”, we remove only a fraction of the initial two-qubit gates along the slice, allowing for entanglement between patches, which more closely mimics the full experiment while still maintaining simulation feasibility. Finally, we can also run full “verification circuits” with the same gate counts as our supremacy circuits, but with a different pattern for the sequence of two-qubit gates which is much easier to simulate classically [29]. Comparison between these variations allows tracking of the system fidelity as we approach the supremacy regime.

We first check that the patch and elided versions of the verification circuits produce the same fidelity as the full verification circuits up to 53 qubits, as shown in Fig.4a. For each data point, we typically collect Ns=5×10^6 total samples over ten circuit instances, where instances differ only in the choices of single-qubit gates in each cycle. We also show predicted FXEB values computed by multiplying the no-error probabilities of single- and two-qubit gates and measurement [29]. Patch, elided, and predicted fidelities all show good agreement with the fidelities of the corresponding full circuits, despite the vast differences in computational complexity and entanglement. This gives us confidence that elided circuits can be used to accurately estimate the fidelity of more complex circuits.

We proceed now to benchmark our most computationally difficult circuits. In Fig.4b, we show the measured FXEB for 53-qubit patch and elided versions of the full supremacy circuits with increasing depth. For the largest circuit with 53 qubits and 20 cycles, we collected Ns=30×10^6 samples over 10 circuit instances, obtaining FXEB=(2.24±0.21)×10^−3 for the elided circuits. With 5σ confidence, we assert that the average fidelity of running these circuits on the quantum processor is greater than at least 0.1%. The full data for Fig.4b should have similar fidelities, but are only archived since the simulation times (red numbers) take too long. It is thus in the quantum supremacy regime.”
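To put the quoted numbers in perspective, here's a quick back-of-the-envelope check (a minimal Python sketch that only reuses the figures quoted above): the purely statistical uncertainty of 1/√Ns for Ns = 30×10^6 samples comes out around 1.8×10^-4, consistent with the quoted ±0.21×10^-3, and the measured FXEB sits more than 5σ above the 0.1% floor, as asserted.

```python
import math

# Numbers taken directly from the paper excerpt above.
Ns = 30e6            # samples collected for the 53-qubit, 20-cycle circuits
f_xeb = 2.24e-3      # measured linear cross-entropy fidelity (elided circuits)
sigma = 0.21e-3      # quoted uncertainty on f_xeb
threshold = 1e-3     # the 0.1% fidelity floor asserted in the paper

# Statistical uncertainty from finite sampling alone, ~1/sqrt(Ns).
print("1/sqrt(Ns) =", 1 / math.sqrt(Ns))                        # ~1.8e-4

# How many standard deviations the measurement sits above the 0.1% floor.
print("sigmas above threshold =", (f_xeb - threshold) / sigma)  # ~5.9, i.e. >5 sigma
```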
> We can look directly to the paper itself to see how it addresses the issues of error uncertainty that the author premises his blogpost around
Can you demonstrate how the author "premises his blogpost around" the "issues of error uncertainty"? I don't think that was the main premise of the paper, at all.
I’m unsure of what you’re trying to say here. The issue that Kalai poses as to the validity of the experiments done by Google’s Quantum AI Lab is that the ideal distribution D and the sampled distribution D’ differ meaningfully enough from each other that significant results cannot be obtained from the experiment when comparing performance to classical simulations.
The excerpt I posted above from the actual paper directly addresses the points made by Kalai, and provides reasoning and analysis for determining the 5 sigma confidence of their results.
This is, again, why in my original post I said I believed Kalai to be nitpicking: the additional statistical testing he proposes, while surely never a bad thing, wouldn’t do anything to change the ultimate confidence determinations and results. Kalai, of course, believes that the testing was insufficient. The easiest way to resolve such an issue is to perform the additional work that Kalai asks for and put his suspicions to rest. I personally have no problem with that; it’s never a bad thing to do more tests.
I think you have serious conceptual holes in your understanding of the post.
> The issue that Kalai poses as to the validity of the experiments done by Google’s Quantum AI Lab is that the ideal distribution D and the sampled distribution D’ differ meaningfully enough from each other that significant results cannot be obtained from the experiment when comparing performance to classical simulations.
This is categorically false. Can you quote the passage(s) that lead you to this conclusion?
> By creating a 0-1 distribution we mean sampling sufficiently many times from that distribution D so it allows us to show that the sampled distribution is close enough to D. Because of the imperfection (noise) of qubits and gates (and perhaps some additional sources of noise) we actually do not sample from D but from another distribution D’. However if D’ is close enough to D, the conclusion that classical computers cannot efficiently sample according to D’ is plausible.
That passage comes right after the headline and is simply describing some technical details of the experiment. This is not the author bringing up any points of contention; there is no disagreement here at all. It is merely laying out the criteria by which quantum supremacy is defined.
They need to understand the D' distribution better by running the experiment on lower qubit configurations and comparing the experimentally sampled distributions with one another, both across qubit configurations and across multiple runs of the same configuration. As it is, he says they may not even have sampled from D'. The burden of proof is on the experimenters to show quantitatively that they did.
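To make concrete what such a comparison could look like, here is a minimal sketch (my own illustration, not something from the paper or the post), assuming you have the raw bitstrings from two runs of the same low-qubit circuit: estimate the total variation distance between the two empirical distributions. At ~9 qubits the same empirical distribution can also be compared directly against the classically computed D.

```python
from collections import Counter

def empirical_tv_distance(samples_a, samples_b):
    """Total variation distance between two empirical bitstring distributions.

    samples_a, samples_b: lists of bitstrings (e.g. "010110110") from two runs.
    Only meaningful when the sample count is large relative to the support
    of the distribution, i.e. at small qubit counts.
    """
    pa, pb = Counter(samples_a), Counter(samples_b)
    na, nb = len(samples_a), len(samples_b)
    support = set(pa) | set(pb)
    return 0.5 * sum(abs(pa[x] / na - pb[x] / nb) for x in support)

# Two runs of a hypothetical 9-qubit circuit would each yield a list of
# 9-bit strings; a small distance suggests both runs sample the same D'.
```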
There were other issues raised, like Google not being quantitative enough with their claims of the gains achieved in their supremacy statement.
He also brings up a more general issue with quantum computing in correlated errors, which are described in more detail in his paper here: http://www.ma.huji.ac.il/~kalai/Qitamar.pdf
But it boils down to the claim that qubit logic gates experience positively correlated errors, which, unless corrected with quantum fault tolerance, will have an impact on any result.
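As a toy illustration of why this matters (my own sketch, not Kalai's model): the paper's predicted fidelity is obtained by multiplying per-gate no-error probabilities, which assumes errors are independent. If errors instead arrive in positively correlated bursts with the same per-gate marginal rate, the fraction of error-free runs, and with it the shape of the output distribution, shifts away from that prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
n_gates, runs = 400, 200_000
p = 0.005                       # marginal error rate per gate in both models

# Independent errors: predicted no-error probability is (1 - p) ** n_gates.
indep_errs = rng.random((runs, n_gates)) < p

# Toy positive correlation: 1% of runs are "bad" with a 10x error rate; the
# rest have a reduced rate chosen so the marginal rate per gate stays p.
bad = rng.random(runs) < 0.01
rate = np.where(bad, 10 * p, (p - 0.01 * 10 * p) / 0.99)
corr_errs = rng.random((runs, n_gates)) < rate[:, None]

print("predicted (1-p)^n:      ", (1 - p) ** n_gates)
print("independent, error-free:", (~indep_errs.any(axis=1)).mean())
print("correlated, error-free: ", (~corr_errs.any(axis=1)).mean())
```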
I hope this clears up some misconceptions. In general it is a good idea to use the principle of charity and try to address the best possible interpretation of someone's argument. This is true even more so when commenting on someone who is literally close to the top in their field.
If the points raised by the OP are nitpicks, then I guess it doesn’t matter to you whether the sampled (D’) distribution is very close to some easily definable theoretical distribution (D)?
If so, I can easily illustrate this “good enough quantum supremacy” in my own home and on a very limited budget. The simple reason is that it is absolutely trivial to set up a physical experiment (quantum or not) whose distribution of possible outcomes is very hard for a computer to sample from.
But is your low budget experiment programmable with a known sequence of well defined quantum gates? Can you statistically verify that your distribution is very close to D?
I don’t really understand... My claim started with “if so”. Now you’re (if I understood you) asking if I’m willing to make the unconditional claim (without “if so”)?
(No. If I was I wouldn’t have used the qualifier “if so”.)
In the post linked here, the author first lays out very precisely what demonstrating quantum supremacy requires:
> if you can show that D’ is close enough to D before you reach the supremacy regime, and you can carry out the sampling in the supremacy regime then this gives you good reason to think that your experiments in the supremacy regime demonstrate “quantum supremacy”.
The author references an older run of this experiment and brings up a very good point: no quantitative measurement of the similarity of D and D' is provided:
> The Google group itself ran this experiment for 9 qubits in 2017. One concern I have with this experiment is that I did not see quantitative data indicating how close D’ is to D.
That's the first point of contention, and it doesn't seem like merely a "nitpick"; if you disagree I'd love to hear your reasoning.
> The twist in Google’s approach is that they try to compute D’ based mainly on the 1-qubit and 2-qubit (and readout errors) errors and then run an experiment on 53 qubits where they can neither compute D nor verify that they sample from D’. In fact they sample 10^7 samples from 0-1 strings of length 53 so this is probably much too sparse to distinguish between D and the uniform distribution
If they aren't sampling from D', then they can't compare it to D, and so this violates the basic definition of quantum supremacy via sampling of random circuits. The author's point about sample size being too sparse to compute the difference between D and the uniform distribution is also valid.
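Worth noting how a fidelity gets estimated at all from samples this sparse: with 2^53 ≈ 9×10^15 possible bitstrings and only ~10^7 samples, you essentially never see the same string twice, so you cannot histogram D'. The paper instead relies on the linear cross-entropy benchmark, FXEB = 2^n⟨P(x_i)⟩ − 1, which only needs the classically simulated ideal probabilities of the bitstrings that were actually observed, and that classical simulation is exactly what becomes infeasible in the supremacy regime. A rough sketch of the estimator:

```python
def linear_xeb(ideal_probs, n_qubits):
    """Linear cross-entropy benchmarking fidelity, F = 2^n * <p(x_i)> - 1.

    ideal_probs: classically simulated ideal probabilities p(x_i) of the
    bitstrings x_i actually observed on the quantum processor.
    Returns ~1 if the samples follow the ideal distribution D and ~0 if
    they are uniformly random.
    """
    mean_p = sum(ideal_probs) / len(ideal_probs)
    return (2 ** n_qubits) * mean_p - 1
```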
He made this caveat at the start:
> A single run of the quantum computer gives you only one sample from D’ so to get a meaningful description of the target distribution you need to have many samples.
> What is needed is experiments to understand probability distributions obtained by pseudorandom circuits on 9-, 15-, 20-, 25- qubits. How close they are to the ideal distribution D and how robust they are (namely, what is the gap between experimental distributions for two samples obtained by two runs of the experiment.)
Seems like a big oversight to not do multiple runs and to compare the sampled D' distributions.
> basically just about definitively showcasing a problem that takes super-polynomial time on a classical computer but some significantly lower time complexity on a quantum computer.
Quantum computers should only really be able to outperform classical computers on quantum-specific problems.