
> The author raises points in contention, but they largely just seem like nitpicks to me.

They don't seem like just nitpicks to me. The core issue the author raises is the noise in the final output. If the quantum computer can only produce significantly noisy data (i.e., data not in accordance with the theoretical distribution) whereas the conventional computer produces noiseless data, then that isn't a clear-cut case of quantum supremacy.



Probably a very ignorant statement, but such noisy data really seems to point to the quantum mind hypothesis, doesn't it?


We can look directly to the paper itself to see how it addresses the issues of error uncertainty that the author premises his blogpost around:

“This fidelity should be resolvable with a few million measurements, since the uncertainty on FXEB is 1/√Ns, where Ns is the number of samples. Our model assumes that entangling larger and larger systems does not introduce additional error sources beyond the errors we measure at the single and two-qubit level — in the next section we will see how well this hypothesis holds.

FIDELITY ESTIMATION IN THE SUPREMACY REGIME

The gate sequence for our pseudo-random quantum circuit generation is shown in Fig.3. One cycle of the algorithm consists of applying single-qubit gates chosen randomly from {√X,√Y,√W} on all qubits, followed by two-qubit gates on pairs of qubits. The sequences of gates which form the “supremacy circuits” are designed to minimize the circuit depth required to create a highly entangled state, which ensures computational complexity and classical hardness.

While we cannot compute FXEB in the supremacy regime, we can estimate it using three variations to reduce the complexity of the circuits. In “patch circuits”, we remove a slice of two-qubit gates (a small fraction of the total number of two-qubit gates), splitting the circuit into two spatially isolated, non-interacting patches of qubits. We then compute the total fidelity as the product of the patch fidelities, each of which can be easily calculated. In “elided circuits”, we remove only a fraction of the initial two-qubit gates along the slice, allowing for entanglement between patches, which more closely mimics the full experiment while still maintaining simulation feasibility. Finally, we can also run full “verification circuits” with the same gate counts as our supremacy circuits, but with a different pattern for the sequence of two-qubit gates which is much easier to simulate classically [29]. Comparison between these variations allows tracking of the system fidelity as we approach the supremacy regime.

We first check that the patch and elided versions of the verification circuits produce the same fidelity as the full verification circuits up to 53 qubits, as shown in Fig.4a. For each data point, we typically collect Ns=5×10^6 total samples over ten circuit instances, where instances differ only in the choices of single-qubit gates in each cycle. We also show predicted FXEB values computed by multiplying the no-error probabilities of single- and two-qubit gates and measurement [29]. Patch, elided, and predicted fidelities all show good agreement with the fidelities of the corresponding full circuits, despite the vast differences in computational complexity and entanglement. This gives us confidence that elided circuits can be used to accurately estimate the fidelity of more complex circuits.

We proceed now to benchmark our most computationally difficult circuits. In Fig.4b, we show the measured FXEB for 53-qubit patch and elided versions of the full supremacy circuits with increasing depth. For the largest circuit with 53 qubits and 20 cycles, we collected Ns=30×10^6 samples over 10 circuit instances, obtaining FXEB=(2.24±0.21)×10^−3 for the elided circuits. With 5σ confidence, we assert that the average fidelity of running these circuits on the quantum processor is greater than at least 0.1%. The full data for Fig.4b should have similar fidelities, but are only archived since the simulation times (red numbers) take too long. It is thus in the quantum supremacy regime.”
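
To make the 5σ statement concrete, the numbers in the excerpt are enough for a rough sanity check. The snippet below is only a back-of-the-envelope sketch (it is not anything from the paper's analysis code); it just plugs in the reported FXEB=(2.24±0.21)×10^−3 and the 1/√Ns scaling quoted above:

    import math

    f_xeb = 2.24e-3   # reported elided-circuit fidelity for 53 qubits, 20 cycles
    sigma = 0.21e-3   # reported uncertainty on that estimate

    # Lower bound at 5 standard deviations below the point estimate.
    lower_5sigma = f_xeb - 5 * sigma
    print(lower_5sigma)           # ~1.19e-3, i.e. still above 0.1%
    print(lower_5sigma > 1e-3)    # True: consistent with the quoted claim

    # The 1/sqrt(Ns) scaling from the excerpt, with Ns = 30e6 samples,
    # gives an uncertainty of the same order as the reported 0.21e-3.
    print(1 / math.sqrt(30e6))    # ~1.8e-4

In other words, the "greater than at least 0.1%" statement is just the 5σ lower bound of the reported estimate.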

Links to the figures can be found by viewing the paper directly here. If you have the time, I highly recommend reading the paper in full regardless: https://www.docdroid.net/file/download/h9oBikj/quantum-supre...


> We can look directly to the paper itself to see how it addresses the issues of error uncertainty that the author premises his blogpost around

Can you demonstrate how the author "premises his blogpost around" the "issues of error uncertainty"? I don't think that was the main premise of the post at all.


I’m unsure of what you’re trying to say here. The issue that Kalai poses as to the validity of the experiments done by Google’s Quantum AI Lab is that the sampled distribution D’ may differ from the ideal distribution D by enough that significant results cannot be obtained from the experiment when comparing performance to classical simulations.

The excerpt I posted above from the actual paper directly addresses the points made by Kalai, and provides reasoning and analysis for determining the 5 sigma confidence of their results.

This is, again, why in my original post I said I believed Kalai to be nitpicking: the additional statistical testing he proposes, while surely never a bad thing, wouldn’t do anything to change the ultimate confidence determinations and results. Kalai of course believes that the testing was insufficient. The easiest way to resolve such an issue is to perform the additional work that Kalai asks for in order to address his suspicions. I personally have no problem with that. It’s never a bad thing to do more tests.


I think you have serious conceptual holes in your understanding of the post.

> The issue that Kalai poses as to the validity of the experiments done by Google’s Quantum AI Lab is that the sampled distribution D’ may differ from the ideal distribution D by enough that significant results cannot be obtained from the experiment when comparing performance to classical simulations.

This is categorically false. Can you quote the passage(s) that lead you to this conclusion?


> By creating a 0-1 distribution we mean sampling sufficiently many times from that distribution D so it allows us to show that the sampled distribution is close enough to D. Because of the imperfection (noise) of qubits and gates (and perhaps some additional sources of noise) we actually do not sample from D but from another distribution D’. However if D’ is close enough to D, the conclusion that classical computers cannot efficiently sample according to D’ is plausible.
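
As an aside, the "close enough to D" question in that passage is what the linear cross-entropy benchmark (FXEB) in the paper is meant to quantify. Roughly, and only as a sketch (the inputs here are placeholders, not the paper's actual data pipeline), the statistic looks like this:

    import numpy as np

    def linear_xeb(ideal_probs, samples, n_qubits):
        # ideal_probs: dict mapping each bitstring to its probability under the
        #              ideal distribution D (classically computable for small circuits).
        # samples:     bitstrings actually measured on the device, i.e. drawn from D'.
        # Returns ~1 if the device samples from D, ~0 if it outputs uniform noise.
        mean_p = np.mean([ideal_probs[s] for s in samples])
        return (2 ** n_qubits) * mean_p - 1

The closer this comes to 1, the closer the sampled D' is taken to be to the ideal D; the measured value in the supremacy regime is the small-but-nonzero fidelity quoted elsewhere in the thread.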


The passage you quoted is under the:

> Achieving quantum supremacy via sampling

heading, and is simply describing some technical details of the experiment. This is not the author raising any point of contention; there is no disagreement here at all. It is merely laying out the criteria by which quantum supremacy is defined.

I highlighted his points of contention as a response to your original comment that said he is nitpicking, here: https://news.ycombinator.com/item?id=21168813

His main point of contention seems to be:

They need to better understand the D' distribution by running the experiment on lower qubit configurations and comparing the experimentally sampled distributions with one another, both across qubit configurations and across multiple runs of the same configuration. As it is, he says they may not even have sampled from D'. The burden of proof is on the experimenters to show quantitatively that they did.
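
To illustrate the kind of comparison he has in mind (this is just one naive way to do it, not a procedure from Kalai's post): given two runs of the same circuit at a qubit count small enough that bitstrings repeat, you could compare their empirical distributions directly:

    from collections import Counter

    def empirical_tvd(run_a, run_b):
        # Total variation distance between the empirical bitstring distributions
        # of two runs of the same circuit. If both runs really sample from the
        # same D', this should shrink toward 0 as the number of samples grows.
        ca, cb = Counter(run_a), Counter(run_b)
        na, nb = len(run_a), len(run_b)
        support = set(ca) | set(cb)
        return 0.5 * sum(abs(ca[s] / na - cb[s] / nb) for s in support)

In the 53-qubit regime almost no bitstring repeats, so a raw comparison like this only makes sense for the smaller configurations he mentions; for the large circuits you would be comparing summary statistics instead.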

There were other issues raised, like Google not being quantitative enough with their claims of the gains achieved in their supremacy statement.

He also brings up a more general issue for quantum computing, namely correlated errors, which are described in more detail in his paper here: http://www.ma.huji.ac.il/~kalai/Qitamar.pdf

But it boils down to the claim that qubit logic gates experience positively correlated errors, which, unless corrected with quantum fault tolerance, will have an impact on any result.
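
A toy illustration of why the correlation matters (my own sketch, not Kalai's model; the numbers are made up and the error model is deliberately simplistic): two error processes with the same average gate error rate can still produce very different distributions of errors per circuit run, which is what ends up shaping D'.

    import numpy as np

    rng = np.random.default_rng(0)
    p, n_gates, trials = 0.005, 400, 200_000   # illustrative values only

    # Independent errors: every gate fails on its own with probability p.
    indep = rng.binomial(n_gates, p, size=trials)

    # Positively correlated errors with the same average rate: each run draws
    # its own error rate from a Beta distribution with mean p, so gates within
    # a run tend to fail (or succeed) together.
    per_run_rate = rng.beta(2.0, 2.0 * (1 - p) / p, size=trials)
    correlated = rng.binomial(n_gates, per_run_rate)

    # Same mean error count per run, but the correlated model has both more
    # error-free runs and more runs with many errors (a heavier tail).
    print(indep.mean(), correlated.mean())
    print((indep == 0).mean(), (correlated == 0).mean())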

I hope this clears up some misconceptions. In general it is a good idea to apply the principle of charity and address the best possible interpretation of someone's argument. This is all the more true when commenting on someone who is near the top of their field.



