The usage of the word “supremacy” is probably the primary thing that rubs people the wrong way. When I first heard of it myself, it did feel a little strange.
Doing some research into it though, it seems like the phrase and usage of quantum supremacy is pretty well defined and accepted by and large within the physics community. Supposedly the coiner of the term believed “quantum advantage” wouldn’t emphasize the point enough.
The basic components of achieving what the term defines, though, are relatively straightforward. It's basically just about definitively showcasing a problem where a classical computer would take superpolynomial time, but a quantum computer ends up with some significantly lesser time complexity.
Google's Quantum AI Lab achieved exactly that, based on the results provided in the paper released by NASA. Granted, the problem in question is extremely specific, but that doesn't invalidate it as an admissible problem. The author raises points of contention, but they largely just seem like nitpicks to me.
> The author raises points of contention, but they largely just seem like nitpicks to me.
They don't seem like just nitpicks to me. The core issue the author raises is the noise in the final output. If the quantum computer can only produce significantly noisy data (i.e., not in accordance with the theoretical distribution) whereas the conventional computer produces noiseless data, then that isn't a clear case of quantum supremacy.
We can look directly to the paper itself to see how it addresses the issues of error uncertainty that the author premises his blogpost around:
“This fidelity should be resolvable with a few million measurements, since the uncertainty on FXEB is 1/√Ns, where Ns is the number of samples. Our model assumes that entangling larger and larger systems does not introduce additional error sources beyond the errors we measure at the single and two-qubit level — in the next section we will see how well this hypothesis holds.
FIDELITY ESTIMATION IN THE SUPREMACY REGIME
The gate sequence for our pseudo-random quantum circuit generation is shown in Fig.3. One cycle of the algorithm consists of applying single-qubit gates chosen randomly from {√X,√Y,√W} on all qubits, followed by two-qubit gates on pairs of qubits. The sequences of gates which form the “supremacy circuits” are designed to minimize the circuit depth required to create a highly entangled state, which ensures computational complexity and classical hardness.

While we cannot compute FXEB in the supremacy regime, we can estimate it using three variations to reduce the complexity of the circuits. In “patch circuits”, we remove a slice of two-qubit gates (a small fraction of the total number of two-qubit gates), splitting the circuit into two spatially isolated, non-interacting patches of qubits. We then compute the total fidelity as the product of the patch fidelities, each of which can be easily calculated. In “elided circuits”, we remove only a fraction of the initial two-qubit gates along the slice, allowing for entanglement between patches, which more closely mimics the full experiment while still maintaining simulation feasibility. Finally, we can also run full “verification circuits” with the same gate counts as our supremacy circuits, but with a different pattern for the sequence of two-qubit gates which is much easier to simulate classically [29]. Comparison between these variations allows tracking of the system fidelity as we approach the supremacy regime.

We first check that the patch and elided versions of the verification circuits produce the same fidelity as the full verification circuits up to 53 qubits, as shown in Fig.4a. For each data point, we typically collect Ns=5×10^6 total samples over ten circuit instances, where instances differ only in the choices of single-qubit gates in each cycle. We also show predicted FXEB values computed by multiplying the no-error probabilities of single- and two-qubit gates and measurement [29]. Patch, elided, and predicted fidelities all show good agreement with the fidelities of the corresponding full circuits, despite the vast differences in computational complexity and entanglement. This gives us confidence that elided circuits can be used to accurately estimate the fidelity of more complex circuits.

We proceed now to benchmark our most computationally difficult circuits. In Fig.4b, we show the measured FXEB for 53-qubit patch and elided versions of the full supremacy circuits with increasing depth. For the largest circuit with 53 qubits and 20 cycles, we collected Ns=30×10^6 samples over 10 circuit instances, obtaining FXEB=(2.24±0.21)×10^−3 for the elided circuits. With 5σ confidence, we assert that the average fidelity of running these circuits on the quantum processor is greater than at least 0.1%. The full data for Fig.4b should have similar fidelities, but are only archived since the simulation times (red numbers) take too long. It is thus in the quantum supremacy regime.”
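To make the statistics concrete, here is a minimal sketch (my own illustration, not the paper's code) of how the linear XEB fidelity and its roughly 1/√Ns statistical uncertainty can be estimated from measured bitstrings, assuming you can still classically simulate the circuit to obtain the ideal probabilities (i.e. for the patch/elided/verification circuits below the supremacy regime):

    import numpy as np

    def linear_xeb_fidelity(ideal_probs, samples, n_qubits):
        """Estimate the linear cross-entropy benchmarking fidelity F_XEB.

        ideal_probs: mapping from bitstring (as an int) to its probability
                     under a noiseless simulation of the chosen circuit.
        samples:     list of bitstrings (as ints) measured on the device.
        """
        dim = 2 ** n_qubits
        p = np.array([ideal_probs[s] for s in samples])
        # F_XEB = D * <P(x_i)> - 1: it is ~0 if the device outputs uniform
        # noise and ~1 if it samples from the ideal distribution.
        f_xeb = dim * p.mean() - 1.0
        # Standard error of the mean; for small fidelities this scales
        # roughly as 1/sqrt(Ns), in line with the excerpt above.
        sigma = dim * p.std(ddof=1) / np.sqrt(len(samples))
        return f_xeb, sigma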
> We can look directly to the paper itself to see how it addresses the issues of error uncertainty that the author premises his blogpost around
Can you demonstrate how the author "premises his blogpost around" the "issues of error uncertainty"? I don't think that was the main premise of the paper, at all.
I'm unsure of what you're trying to say here. The issue Kalai poses with the validity of the experiments done by Google's Quantum AI Lab is that the sampled distribution D' may differ meaningfully enough from the ideal distribution D that significant results cannot be obtained from the experiment when comparing performance to classical simulations.
The excerpt I posted above from the actual paper directly addresses the points made by Kalai, and provides reasoning and analysis for determining the 5 sigma confidence of their results.
This is, again, why in my original post I said I believed Kalai to be nitpicking: the additional statistical testing he proposes, while surely never a bad thing, wouldn't do anything to change the ultimate confidence determinations and results. Kalai of course believes that the testing was insufficient. The easiest way to resolve such an issue is to perform the additional work that Kalai asks for in order to address his suspicions. I personally have no problem with that. It's never a bad thing to do more tests.
I think you have serious conceptual holes in your understanding of the post.
> The issue Kalai poses with the validity of the experiments done by Google's Quantum AI Lab is that the sampled distribution D' may differ meaningfully enough from the ideal distribution D that significant results cannot be obtained from the experiment when comparing performance to classical simulations.
This is categorically false. Can you quote the passage(s) that lead you to this conclusion?
> By creating a 0-1 distribution we mean sampling sufficiently many times from that distribution D so it allows us to show that the sampled distribution is close enough to D. Because of the imperfection (noise) of qubits and gates (and perhaps some additional sources of noise) we actually do not sample from D but from another distribution D’. However if D’ is close enough to D, the conclusion that classical computers cannot efficiently sample according to D’ is plausible.
That passage simply describes some technical details of the experiment. It is not the author raising any point of contention; there is no disagreement here at all. It is merely stating the criteria by which quantum supremacy is defined.
They need to understand the D' distribution more by running the experiment on lower qubit configurations, comparing the experimentally sampled distributions with one another across qubit configurations and across multiple runs of the same qubit configurations. As it is, he says that they may not have even sampled from D'. The burden of proof is on the experimenters to quantitatively show that they did.
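For example (a sketch of my own, not something from the post): at the smaller qubit counts, where the space can be sampled densely, two runs of the same circuit could be compared directly, e.g. via the total variation distance between the empirical distributions:

    from collections import Counter

    def empirical_tvd(samples_a, samples_b):
        """Total variation distance between two empirical distributions.

        samples_a, samples_b: lists of measured bitstrings (e.g. '0110')
        from two runs of the same circuit on the same qubit configuration.
        """
        ca, cb = Counter(samples_a), Counter(samples_b)
        na, nb = len(samples_a), len(samples_b)
        return 0.5 * sum(abs(ca[s] / na - cb[s] / nb) for s in set(ca) | set(cb))

That kind of check only works below the supremacy regime, where the output space is small enough to sample densely, which is exactly why the lower-qubit configurations matter.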
There were other issues raised, like Google not being quantitative enough with their claims of the gains achieved in their supremacy statement.
He also brings up a more general issue with quantum computing in correlated errors, which are described in more detail in his paper here: http://www.ma.huji.ac.il/~kalai/Qitamar.pdf
But it boils down to this: qubit logic gates experience positively correlated errors, which, unless corrected with quantum fault tolerance, will have an impact on any result.
I hope this clears up some misconceptions. In general it is a good idea to use the principle of charity and try to address the best possible interpretation of someone's argument. This is true even more so when commenting on someone who is literally close to the top in their field.
If the points raised by the OP are nitpicks, then I guess it doesn't matter to you whether the sampled (D') distribution is very close to some easily definable theoretical distribution (D)?
If so I can easily illustrate this “good enough quantum supremacy” in my own home and on a very limited budget. The simple reason is that it is absolutely trivial to set up a physical experiment (quantum or not) where it’s very hard for a computer to sample from the distribution of possible outcomes.
But is your low budget experiment programmable with a known sequence of well defined quantum gates? Can you statistically verify that your distribution is very close to D?
I don’t really understand... My claim started with “if so”. Now you’re (if I understood you) asking if I’m willing to make the unconditional claim (without “if so”)?
(No. If I was I wouldn’t have used the qualifier “if so”.)
In the post linked here, the author first very precisely lays out what demonstrating quantum supremacy requires:
> if you can show that D’ is close enough to D before you reach the supremacy regime, and you can carry out the sampling in the supremacy regime then this gives you good reason to think that your experiments in the supremacy regime demonstrate “quantum supremacy”.
The author references an older paper running this experiment and brings up a very good point, that no quantitative measurement of the similarity of D and D' is provided:
> The Google group itself ran this experiment for 9 qubits in 2017. One concern I have with this experiment is that I did not see quantitative data indicating how close D’ is to D.
That's the first point of contention, and it doesn't seem like merely a "nitpick"; if you disagree I'd love to hear your reasoning.
> The twist in Google’s approach is that they try to compute D’ based mainly on the 1-qubit and 2-qubit (and readout errors) errors and then run an experiment on 53 qubits where they can neither compute D nor verify that they sample from D’. In fact they sample 10^7 samples from 0-1 strings of length 53 so this is probably much too sparse to distinguish between D and the uniform distribution
If they aren't sampling from D', then they can't compare it to D, and so this violates the basic definition of quantum supremacy via sampling of random circuits. The author's point about sample size being too sparse to compute the difference between D and the uniform distribution is also valid.
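A rough back-of-the-envelope (my own numbers, not the author's) shows just how sparse that is; with 10^7 samples from a space of 2^53 bitstrings you essentially never see the same string twice, so the raw histogram alone cannot tell D' apart from the uniform distribution:

    # Expected number of colliding pairs among Ns samples drawn roughly
    # uniformly from 2^53 possible bitstrings (birthday bound):
    Ns = 10**7
    space = 2**53
    print(Ns * (Ns - 1) / (2 * space))  # ~0.006 expected collisions

(Which is why the paper falls back on simulated ideal probabilities, via FXEB, rather than on the empirical histogram.)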
He made this caveat at the start:
> A single run of the quantum computer gives you only one sample from D’ so to get a meaningful description of the target distribution you need to have many samples.
> What is needed is experiments to understand probability distributions obtained by pseudorandom circuits on 9-, 15-, 20-, 25- qubits. How close they are to the ideal distribution D and how robust they are (namely, what is the gap between experimental distributions for two samples obtained by two runs of the experiment.)
Seems like a big oversight to not do multiple runs and to compare the sampled D' distributions.
> basically just about definitively showcasing a problem where a classical computer would take superpolynomial time, but a quantum computer ends up with some significantly lesser time complexity.
Quantum computers should only really be able to outperform classical computers on quantum-specific problems.
What do you mean by “matching”? The bullish take is just a different take that presumably must develop answers to the points of this post to remain a viable belief option.
It actually strikes me somewhat as editorializing to place this link here with the wording you chose.
If anything in your linked discussion actually addresses the substantive points of this post, why not link to those items specifically? What would a generic link to discussion on it that’s not tied to this post’s claims be contributing? Why would it matter if that post was “bullish”? If it adds some context related to this post, what is that context?
"Matching" in this context I believe refers to the fact that these are articles written on the person blogs of people that have very deep knowledge of the industry and what is required of quantum computers.
No, I just think people eager to push the positive-result interpretation of the Google paper rush to astroturf, and you can reasonably detect it even from limited text like in this case.
I've had my eyes on Adrian Thompson's genetically evolved FPGA circuits for a while now - they do amazing things very economically and exploit analog circuit properties. So I've always wondered what if we make unreliable but super tiny atomic level programmable gates where we know the unreliability stems from quantum fluctuations, and then evolve circuits over millions of generations (A.T. ran thousands) to see if they manage to start exploiting the quantum effects.
PS: I don't have the expertise to make a strong argument here, but it seems like an intriguing idea. Anything fundamentally against it?
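The evolutionary loop itself is simple enough; here's a toy sketch (everything illustrative, and evaluate() is just a stand-in for "program the unreliable gates and measure how well the task is performed"):

    import random

    def evaluate(genome):
        # Stand-in fitness: in a real setup this would configure the
        # programmable gates with `genome` and score the measured behaviour.
        return sum(genome)

    def mutate(genome, rate):
        return [bit ^ 1 if random.random() < rate else bit for bit in genome]

    def evolve(genome_len=64, pop_size=50, generations=1000, rate=0.02):
        pop = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fitter half, then refill with mutated copies of them.
            parents = sorted(pop, key=evaluate, reverse=True)[:pop_size // 2]
            pop = [mutate(random.choice(parents), rate) for _ in range(pop_size)]
        return max(pop, key=evaluate)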
'Quantum effects' are already exploited in analog components. TFETs work by modulating quantum tunneling. Zener reverse breakdown in a Zener diode is also a quantum effect. The Esaki diode uses quantum effects.
Unfortunately, the fact that a single component relies on quantum effects has nothing to do with quantum speedup in computation.
Quantum computation exploits entanglement on a larger scale than normal. The whole computational state must be an entangled quantum state; separate quantum effects just result in a classical computer. The quantum circuit must be carefully arranged so that the interference pattern yields the result you want.
Well, trivially that's a yes. However, the "specification level breach" property of A.T.'s work is what intrigued me - i.e. the circuits are specified to do a digital task, but work in an analog manner - constructing antennas and receivers - to the extent that the same digital circuit wouldn't work when written to another FPGA.
Also, entanglement doesn't need to be perfect. You can have 1% entanglement too and have that propagate over time and operations. The question is whether an evolved circuit can figure out pathways to use that little bit of entanglement in ways that our understanding doesn't quite admit .. in much the same way as a digital FPGA designer wouldn't think about using not-gates as antennae.
An unreliable circuit achieved this way would also be interesting I think.
Most likely yes. But, as I noted in another comment, that would still be interesting since we can test for entanglement on a larger scale. I'm kind of expecting error correction to "evolve" in the iterations. Robustness can come later.
I've worked a decent bit with stochastic search (not necessarily just EAs/GAs but also Metropolis-Hastings and extensions of MH) and the search process tends to favor probabilistic and inaccurate individual units but gangs many of them together for reliability.
This is wholly different from how we view computer programs today. It may work, but you'd better have a good application in mind; otherwise you'll get laughed/shooed out of the room.
As far as QC scalability, the thing I wonder about is the cost of maintaining full entanglement of N qubits as N grows large.
The debbie-downer perspective would be that for each additional qubit you add, you effectively double the cost of isolation from the environment, quantum error correction schemes, etc.
So, while compute power for quantum algorithms grows exponentially in N, so would the cost of operating the machine.
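To put that in toy form (my own framing): if the usable state space grows like 2^N while the cost of isolation/error correction grows like c^N for some c > 1, whether the machine stays worth operating hinges on c staying below 2:

    # Toy comparison of the hypothesized scalings: advantage ~ 2^N vs cost ~ c^N.
    def advantage_per_cost(n_qubits, cost_base):
        return 2.0 ** n_qubits / cost_base ** n_qubits

    for c in (1.1, 1.5, 2.0):
        print(c, [round(advantage_per_cost(n, c), 1) for n in (10, 20, 30)])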
Do people who work in QC see this as a concern? Are there scientific arguments or engineering insights that lessen or obviate this concern?
This is the primary concern and barrier to most realizations of quantum computers including Google’s most recent efforts, though there are exceptions like linear optical quantum computing which have different problems. So for people working in QC this is in fact the perspective, though they are mostly optimistic that we can continue to engineer/discover better solutions to this problem.
> The debbie-downer perspective would be that for each additional qubit you add, you effectively double the cost of isolation from the environment, quantum error correction schemes, etc.
This is exactly my concern with quantum computing. Not only might it get more expensive, but it may simply be limited by the laws of our universe. If the difficulty of entangling and isolating qubits increases exponentially with the number of qubits, then quantum computing could be a dead end. You may still get some nice results if you can decrease the exponent. I'd love to be proven wrong.
My level of expertise on quantum computing is low.
The announcement of the imminent advent of the Quantum Computer seems to keep recurring every few months these last few years because it's a moving target.
HN recently featured an article about a supposed quantum-only algorithm being applied to normal computing. Algorithms are a vastly larger space to explore than general computing paradigms.
Computer hardware is still advancing in performance even though single-threaded performance is stalling by comparison.
We're moving away from general computing; this is always happening, but the current environments are pushing us back towards transputers. There's a place for special-purpose quantum hardware, but right now the money is in making the most general quantum CPUs possible (D-Wave being an exception, because annealing has many uses where it excels).
> The announcement of the imminent advent of the Quantum Computer seems to keep recurring every few months these last few years because it's a moving target.
That's mostly misreporting by the media and perhaps D-Wave overselling their capabilities. If you actually listen to experts, they're far more guarded in their statements.
Yes, there has been a lot of animosity in the quantum information community for a long time against D-Wave for this very reason, as researchers are worried about them over-promising and ushering in an analogue of the AI winter as a result.
To my knowledge D-Wave has never demonstrated any useful application of their annealer. They have also never managed to show any "quantum advantage", where an algorithm on their machine scales better than the best known classical algorithm.
Am I understanding correctly that the whole idea is to get a bunch of qubits in one machine and then have them evaluate every possibility in a super dumb way, but count on them to get the desired result faster than a transistor-based machine that knows what it's looking for?
And that we currently just can't coordinate that many qubits at once, and also can't keep them cool long enough to function?
This is not the case and is a constant frustration researchers have with how quantum computing is represented in the media. So much so that Scott Aaronson (who works on computational complexity theory, particularly as it relates to quantum computing) has a statement telling you that’s not the case in the heading of his blog: https://www.scottaaronson.com/blog/
I attempted to put the complete title but it was 35 characters too long. I have no personal take on this and am not attempting to editorialize the title, if the mods have a different opinion on what the proper shortening of the original title should be (or if they have the power to put the actual complete title) then I am all for it.
Here is the complete title: Quantum computers: amazing progress (Google & IBM), and extraordinary but probably false supremacy claims (Google).
If quantum supremacy was not possible, wouldn't that mean that something is wrong with our physics understanding?
So when people say quantum supremacy is impossible, do they say that the device itself is extremely complicated to build (like an earth to moon elevator for example), or that quantum supremacy isn't allowed not even in principle?
Not necessarily. I believe it's the case that for every quantum algorithm that is exponentially better than the best known classical algorithm for solving the same problem, it is not proven that an equally good (up to polynomial overhead) classical algorithm does not exist.
Therefore, one way quantum computers could fail to be exponentially better than classical ones, without us having to revise physics, is that there are as of yet unknown classical algorithms that would erase the apparent difference between the two classes. You would have quantum computers, but not quantum supremacy.
I believe that there are quantum algorithms that are provably better than any classical one, known or unknown, but I think these only provide at most polynomial speedup, not exponential. So you might still call it quantum supremacy to have algorithms that are merely polynomially better.
I've heard it called "Aaronson's trilemma", the fact that at least one of these three things must be true:
* Quantum computers are not possible even in principle (new physics required, since current physics says they are), or
* The extended Church-Turing thesis is incorrect (because quantum supremacy implies not all computers are within polynomial overhead of each other), or
* There exist polynomial time classical algorithms for factoring and discrete logarithms.
I remember seeing a lecture by Aaronson where he described the idea of a “relativity computer” where you set a computer running on some exponential task, but then hop in a spaceship and fly off at nearly the speed of light, and come back to find the answer in an amount of time only polynomial for you.
He “debunked” this by pointing out that to do it you'd potentially need an exponentially large supply of fuel to accelerate yourself to a speed exponentially close to the speed of light as the problem to be solved grows more complex.
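Roughly, and this is my own back-of-the-envelope with the standard relativistic rocket equation rather than Aaronson's exact argument: for exhaust velocity v_e, the reachable rapidity phi is set by the fuel mass ratio,

    \frac{m_\mathrm{initial}}{m_\mathrm{final}} = e^{\phi c / v_e},
    \qquad \gamma = \cosh\phi \approx \tfrac{1}{2} e^{\phi} \text{ for large } \phi,

so a proper-time speedup factor gamma that grows exponentially with problem size n needs a mass ratio growing like (2*gamma)^(c/v_e), i.e. also exponentially in n.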
I wonder if something similar could be true about quantum computing. Maybe it is theoretically possible to solve problems polynomially that can only be solved exponentially in a classical computer. But maybe it requires “exponentially more” of some resource, like a power supply to power the physical devices needed to do an amount of error correction to make it physically realizable as the problem size grows.
Would it be possible for the Church-Turing thesis to be false mathematically, but for the amount of quantum error correction to require an amount of physical resources that grows prohibitively and prevents it from being practically possible (like the fuel limitation prevents the otherwise physically possible relativity computer)?
More importantly, it may be that the cost of a QC implementation for an algorithm is lower, even if there’s no quantum supremacy. As an analogy, consider the case of comparing a tape computer to a RAM computer: they’re both classical, but the RAM computer is vastly faster.
At the moment the problem of superseding ordinary computers still looks like one of technical/engineering nature.
If achieving this is not possible even in principle it could lead to some new developments.
There are some people considering/working on superdeterminism as a possible interpretation of quantum mechanics (the idea that everything, every outcome of a measurement, is predetermined by the universe in such a way that we cannot gain complete information about the quantum system). I would not like to discourage anyone, but in fact this would be a conspiracy theory on a cosmic scale. In some discussion on Physics Stack Exchange it was pointed out by Peter Shor that if it were true we wouldn't be able to achieve quantum supremacy.
It is true that Gerard 't Hooft, the most famous proponent of superdeterminism, is an asymptotic quantum supremacy skeptic, but I don't think superdeterminism implies no quantum supremacy. I'm a fan of superdeterminism, but I consider it to be at a pre-interpretation level of maturity, where the idea isn't even completely fleshed out. I hope that some day it can be turned into a proper interpretation like Many-Worlds and QBism, and at that point it will give all the standard predictions of QM, including the possibility of quantum supremacy.
> If quantum supremacy was not possible, wouldn't that mean that something is wrong with our physics understanding?
Yes, but that's not outrageous. For instance, Shor's algorithm on a cryptographically interesting factorization problem would be testing the predictions of QM in a new regime (exact cancellation of a sum with roughly as many terms as the size of the product being factored, during the quantum Fourier transform). It's quite reasonable to think that QM is an approximation which will break down at that level of precision.
You are right - most physicists believe that quantum supremacy is possible, and if we discover that in fact it is not, then we'll need to understand why.
> So when people say quantum supremacy is impossible, do they say that the device itself is extremely complicated to build (like an earth to moon elevator for example), or that quantum supremacy isn't allowed not even in principle?
Both, actually. Some interpretations of quantum mechanics imply that QC is impossible, some imply that it's infeasible for any meaningful problems.
Quantum computing is a typical example of taking a model (QM) further than it was ever intended to go.
Instead of recognizing the fallacy, they blindly accept its promises: cramming infinite computational power into very little mass.
They then use the excuse that it's extremely complicated to build to cover for the lack of results demonstrating that the model is holding its promises.
After a few decades, they will achieve, at tremendous cost, a "success" where quantum computers are typically faster than a classical computer by a factor of 10^6, and with better energy efficiency, and will then recognize that the model did indeed have its limits, and that quantum supremacy was never about delivering an exponential speed-up (who would be stupid enough to believe such fallacies; of course all models have their limits) but just about a better way of computing.
Is there a way to find how far a model stays valid other than taking it further than it was ever intended to go? I personally like to think that physics can be explained by a discrete finite automaton, so quantum computers are not possible, but believing this would be more stupid than believing the opposite. Whichever alternative is true, there is a good reason to try to build quantum computers, because the process itself will improve our understanding. And implying that there is some conspiracy to pretend quantum computers are real is strange, to say the least.
If your model can't predict its limits, it is an indication that you are already past its limits.
When you build a model, you build it to map the range of behaviors you are interested in. When mathematical infinities of any kind (like infinite computational power) emerge it's usually a strong hint that the model is not applicable, not an invitation to fantasize about the things you will be able to achieve following your model outside of its region of trust.
You question your hypothesis and then look for an alternative model that is more probable. You don't spend your resources doubling down on blind model following.
Assuming some priors and Bayesian updating your beliefs about how the world works is probably a better strategy.
I am not a physicist, so I don't have skin in this game, but the QM scene really looks like a mix between snake-oil vendors and religion, and doing more of this "science" via marketing firms isn't really something any scientist should wish for.
All the physical theories we have had so far require infinite computational power, because they work with real numbers. It's easy to say that this is wrong, but that's not really useful without saying what is right.
There are several interpretations of quantum mechanics that predict quantum computers not working, in different ways; to find out which of them is correct you need an experiment that is not described by the traditional view. Building quantum computers is the first experiment that has a chance to show what exactly is wrong with QM. Even the people who think there is nothing wrong with QM agree that a quantum computer not working would be a bigger discovery than one working, and are considering all the alternatives, so I don't see how it is anything like a religion.
Working with real numbers doesn't mean requiring infinite computational power. Numerical integration can make the integration error arbitrarily small, even without symplectic integrators, which means you can work with finite-precision numbers instead.
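A quick toy illustration of that point (my own example): approximating the integral of sin(x) from 0 to pi, which is exactly 2, with ordinary double-precision floats, the error just keeps shrinking as the step size shrinks; no infinite-precision reals are needed:

    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoid rule with n subintervals."""
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

    for n in (10, 100, 1000):
        print(n, abs(trapezoid(math.sin, 0.0, math.pi, n) - 2.0))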
The thing with building a quantum computer is that the original hard test (breaking RSA) is being watered down. Until you prove that you've done it, you only get non-results telling you that you are not there yet, but you don't know why, and you require ever more funds. So the incentives are badly aligned, and you prove a softer test that you try to sell as something as good as the hard test. If you are not familiar with bias, you might even fool yourself into thinking you are making progress because you managed to reach the softer goal you set for yourself, while adding complexity to obscure your theoretical shortcomings.
>Building quantum computers is the first experiment that has a chance to show what exactly is wrong with QM.
Don't blindly trust experiments: Bell officially proved that what's very probably wrong is right.
Especially when they require expensive equipment or specialized knowledge to reproduce. If there is something wrong with the protocol, you can easily falsely convince yourself. Putting a non-zero prior on unknown unknowns should be a must.
If you ever try to question the Gospel of Non-Locality, you will find yourself cast aside like many before as the vast literature show.
My level of expertise on quantum computing is very low.
The announcement of the imminent advent of the Quantum Computer seems to have been recurring every few months for at least 20 years; at this point I don't care anymore.
As a licensed quantum computologist (um... not really, but I do work for a major QC effort)... your skepticism is not misplaced.
I can only speak for my place of work (not publicly) and pass along scuttlebutt... but, my understanding is that it's largely the same everywhere. Some efforts have tight budgets, some have billions backing them; but QC research is hugely expensive with more unknowns than knowns. People giving out the cash want results sooner than later. At my workplace, the researchers are in a continual pitched battle with management to keep a rein on marketing, limit expectations, etc., and it's a major distraction. When management misunderstands an early result, and oversells it to their superiors (if not the press!), suddenly they think it's appropriate to make "Quantum Supremacy" a task with a 6-month deadline.
It all makes me want to crawl into a box and take a nap.
Only popular science articles and marketing shticks discuss “imminent quantum computers” in the sense of a commercially viable machine that displaces existing classical computers. Quantum computers exist. They can solve real-world problems, like simulating molecules. They don’t, however, outperform your cell phone in any “practical”/“real-world” problem.
Google’s supremacy result is an indicator of progress, not an indicator of practicality or “advantage”. Google knows this, and hasn’t claimed anything more.
The supremacy result should be viewed only for its scientific meaning and merit and nothing more.
No one is saying quantum computing for practical use is upon us. Quantum supremacy is a well defined milestone that doesn't say anything about everyday use of QC.
No, that's the funniest aspect of the Google result. They barely have any control over what their gates do.
Gil makes this point but doesn't call it out: they're claiming supremacy by turning the challenge around. "You can't classically simulate our device (which largely does its own thing because of issues)."
A kid shoots an arrow at a target. The arrow hits the haybale, but not the target. Suddenly the kid yells "I bet you can't hit my arrow!" and claims Archer Supremacy because nobody even cares to try. Are you impressed? I'm not.
Scott Aaronson's description of the result made it out to be more subtle. If I understood it properly, it might be more like the kid shooting 20 arrows, which all form a particular pattern around a point that the kid can't choose or control at all. Now a regular archer can readily choose a particular point, in a way that the kid can't, but can't produce the sort of pattern that the kid does with nearly as much accuracy and nearly as little effort.
Or to make a less specific analogy, there is something about the kid's archery that regular archers can't replicate with their archery skills, but it's not really something that anyone would traditionally have described as "skilled archery". Then Aaronson and Kalai disagree about whether or not this unusual feat that's not very easy to relate conceptually to the ability to hit targets is a sign that the kid is plausibly going to be able to achieve traditional archery skill in the future.
> Or to make a less specific analogy, there is something about the kid's archery that regular archers can't replicate with their archery skills, but it's not really something that anyone would traditionally have described as "skilled archery".
> They barely have any control over what their gates do.
I don't understand this, could you explain?
My understanding is that the gates are perfectly normal quantum gates, which they use to connect qubits in a programmable way. Due to the probabilistic nature of quantum computing, running this circuit generates samples from a random distribution.
I attended a Google talk where they acknowledged difficulty in controlling their device. That was given as the motivation for running problems that consist of "random gates".
My understanding from reading Scott Aaronson's FAQ is that the "random" in "random gates" just means that they pick a random circuit to evaluate. But this circuit is known, just like a program can pick a random number and then print it out.
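A loose sketch of that "random but known" idea (purely illustrative; only the single-qubit gate names come from the paper, everything else is made up):

    import random

    SINGLE_QUBIT_GATES = ["sqrt_X", "sqrt_Y", "sqrt_W"]  # gate set quoted in the paper

    def choose_circuit(n_qubits, n_cycles, seed=42):
        """Pick a pseudorandom circuit from a fixed seed.

        Because the seed is known, the exact gate sequence is known too,
        and can in principle be handed to a classical simulator.
        """
        rng = random.Random(seed)
        # Two-qubit gate layers are omitted in this sketch.
        return [[rng.choice(SINGLE_QUBIT_GATES) for _ in range(n_qubits)]
                for _ in range(n_cycles)]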
The fact that quantum computing behaves "randomly" by nature further complicates the discussion :)
That's unlikely to be the "source" of his skepticism - rather, his understanding of the theoretical foundations of the subject would be the source. That same understanding is the basis for his ability to identify possible flaws.
Not everyone is an armchair commentator merely repeating their unsubstantiated opinions. In Kalai's case, you could just say "Kalai's conjectures" and people familiar with the field are likely to recognize what you're referring to, although they may ask "which ones?" since he's produced several well-known ones in different math subfields.