Yes, especially since the study looked at professions on death certificates (likely the last profession held). There are many better ways to do this.
Studies like this, which are obviously flawed to anyone with a modicum of actual scientific/stats knowledge, being called "landmark" studies really casts doubt on peer review. Sad to see this behavior yet again from researchers in the Alzheimer's field. How many more billions in donations must be wasted by "scientists" who don't understand basic statistics before something changes?
Seems like some kind of achiral algae would actually be the most dangerous.
People forget that blue-green algae caused a global climate apocalypse, polluted the oceans and atmosphere with deadly oxygen, caused all exposed iron to rust, massively changing ocean chemistry, and threw the entire globe into an ice age that lasted 300 million years.
I wonder how we would stop something like that. It'd be like the algae bloom from hell. Plankton likely wouldn't be very successful in attempting to eat it.
The report goes into a green goo scenario: a photosynthetic mirror bacterium eating out the bottom of the oceanic food web and driving everything further up the chain extinct. One scenario they deem less likely is it sucking enough CO2 out of the atmosphere to doom us to an ice age, though they couldn't rule it out.
>it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!!
This claim makes little sense. There are many problems that are much easier to verify than to solve. Why isn't that approach ever used to validate these quantum computing claims?
That's what the author is saying. Researchers in this field should, for credibility reasons, be solving test problems that can be quickly verified. As to why this isn't done:
(1) They're picking problem domains that are maximally close to the substrate of the computation device, so they can hit maximum problem sizes (like the one that would take ~10^25 years to verify classically). For many (all?) fast-verifiable problems they can't currently handle impressively large problem sizes. In the same way that GPUs are only really good at "embarrassingly parallel" algorithms like computer graphics and linear algebra, these quantum chips are only really good at certain classes of algorithms that don't require too much coherence.
(2) A lot of potential use cases are NOT easy to validate, but are still very useful and interesting. Weather and climate prediction, for example. Quantum chemistry simulation is another. Nuclear simulations for the Department of Energy. Cryptography is kinda exceptional in that it provides easily verifiable results.
I would add one more to this, which I would argue is the main reason:
(0) For a quantum algorithm/simulation to be classically verifiable, it needs additional structure; something that leads to a structured, verifiable output despite the intermediate steps being intractable to simulate classically. That additional structure necessarily adds complexity beyond what can be run on current devices.
To pick an arbitrary example I'm familiar with, this paper (https://arxiv.org/abs/2104.00687) relies on the quantum computer implementing a certain cryptographic hash function. This alone makes the computation way more complex than what can be run on current hardware.
Maybe a working quantum algorithm for weather prediction would outperform currently used classical simulations, but I wouldn't expect it to be bang on every time. Inputs are imperfect. So at best you could benchmark it, and gain some confidence over time. It could very well be good enough for weather prediction though.
Also I doubt that a quantum algorithm is possible that provably solves the Navier-Stokes equations with known boundary and initial conditions. At least you need some discretization, and maybe you can get a quantum algorithm that provably converges to the real solution (which alone would be a breakthrough, I believe). Then you need some experimental lab setup with well controlled boundary and initial conditions that you can measure against.
In any case the validation would be at a very different standard compared to verifying prime factorization. At most you can gain confidence in the correctness of the simulation, but never absolute certainty.
At scale, yes. But this would still be solving toy problems with fewer variables and fewer dimensions.
And they’re not actually solving weather problems right now, I think. That was just an example. What they are actually solving are toy mathematical challenges.
Because we don't currently know of a problem like this that both has a quantum algorithm we can run on this type of device with expected exponential speedup and has a fast classical verification algorithm. That's exactly the author's point; he has been advocating for quite a while that finding such an example is important, since it would make for a better demonstration.
Depends on what you mean by "this type of device." Strictly speaking there are many efficiently verifiable quantum algorithms (including Shor's algorithm, the one that breaks RSA). But if you mean "this particular device," then yes, none are simple enough to run on a processor of this scale.
> Why isn't that approach ever used to validate these quantum computing claims?
Hossenfelder’s linked tweet addresses this head on [1]. We need four orders of magnitude more qubits before a QC can simulate anything real.
In the meantime, we’re stuck with toy problems (absent the sort of intermediate test algorithms Aaronson mentions, though the existence of such algorithms would undermine the feat’s PR value, as it would afford cheap takedowns about the QC lacking supremacy).
That's pretty much the kind of problem they're using here. The problem is that to verify the simulation, you need to run the simulation on a classical device, and that requires time that is exponential in the number of particles being simulated. They've verified that the classical and quantum simulations agree for n = 1, 2, 3, ..., and now here's a quantum simulation for an n that would take 10^25 years to do classically.
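To make that scaling concrete, here's a rough back-of-envelope sketch (my own illustrative numbers, not from the thread): a brute-force state-vector simulation of n qubits has to track 2^n complex amplitudes, so the cost of direct classical verification explodes exponentially.

```python
# Back-of-envelope sketch (illustrative, not from the thread): a brute-force
# state-vector simulation of n qubits stores 2**n complex amplitudes, so the
# memory (and time) for direct classical verification grows exponentially.

def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed for a full complex128 state vector of n_qubits."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 50, 70):
    print(f"{n} qubits -> {statevector_bytes(n) / 2**30:.1e} GiB")
# 30 qubits fits on a workstation (~16 GiB); 50 qubits needs ~16 PiB;
# 70 qubits is hopeless for any classical machine.
```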
If what you are simulating is a physical system, then to verify it you only need to replicate the physical system, not rerun the simulation on another device.
I suggested simulating an experiment with n molecules in a vacuum; another experiment might be a chaotic system like a double pendulum. Although there would need to be a high level of precision in setting up the physical parameters of the experiment.
Could it be that it's no accident that these kinds of problems are chosen? Somehow we can get from a quantum system an amount of computation that goes well beyond what a classical system can perform, but we can't seem to extract any useful information from it. Hmm.
Right, factoring and discrete logs both come to mind; is Google's quantum computer not able to achieve measurable speedups on those versus classical computation?
It is perfectly general, but the error rate is too high to operate all qubits simultaneously for more than a few tens of gates without error. This is why error correction is needed, but then you need orders of magnitude more physical qubits to deal with the overhead.
No, not quite, it's about the error-per-gate. RCS has very loose requirements on the error per gate, since all they need is enough gates to build up some arbitrary entangled state (a hundred or so gates on this system). Other algorithms have very tight requirements on the error-per-gate, since they must perform a very long series of operations without error.
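To put rough numbers on that (a hedged sketch with illustrative figures, not measurements from this chip): with a per-gate error rate p, the chance of an entirely error-free run of G gates is roughly (1 - p)^G, which is why short RCS-style circuits tolerate today's error rates while long algorithms don't.

```python
# Hedged sketch with illustrative numbers (not measurements from this chip):
# with per-gate error rate p, an entirely error-free run of G gates happens
# with probability roughly (1 - p)**G.

def error_free_prob(p: float, gates: int) -> float:
    return (1.0 - p) ** gates

p = 1e-3  # assume 0.1% error per gate, a round illustrative figure
for gates in (100, 10_000, 1_000_000):
    print(f"{gates:>9} gates: P(no error) ~ {error_free_prob(p, gates):.1e}")
# ~9e-01 at 100 gates, ~5e-05 at 10k gates, effectively 0 at a million gates,
# which is why long algorithms need error correction while short RCS circuits don't.
```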
The question is not whether a problem is easier to verify than to solve, but whether there is a problem that is provably faster (in the complexity sense) on a quantum computer than on a classical computer and is also easy to verify on a classical computer.
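For what it's worth, factoring is the textbook case of that asymmetry, and a minimal sketch of the "cheap to verify" half looks like this (my own example, not from the thread):

```python
# Minimal sketch of the verify-vs-solve asymmetry (my example, not from the
# thread): checking a claimed factorization is a single big-integer multiply,
# while recovering the factors from n alone is believed to be classically hard
# at cryptographic sizes; Shor's algorithm would do it in polynomial time.

def verify_factorization(n: int, p: int, q: int) -> bool:
    """Cheap classical check: are p and q nontrivial factors of n?"""
    return 1 < p < n and 1 < q < n and p * q == n

p, q = 104729, 1299709                 # two primes, for illustration
n = p * q
print(verify_factorization(n, p, q))   # True, and instant even at RSA sizes
```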
No, yet another conclusion where the headline that runs is one of the least probable explanations.
More likely: They opened it in a museum lab with a mediocre clean room that had some stuff floating around in it and ended up culturing organisms that didn't even make the trip to space.
We know that all technology is a double-edged sword.
Much has been lost to man-made fire: entire villages, cities, even humans purposely set on fire and made to burn while others watched. It can be used for great evil.
But given the choice, would you put it back in the box?
Model 3 is priced under the average US car price. It is a mass market car.
Yes, Chinese EVs have superior economics. It's quite difficult to have both high labor rates ("living wages!") and cheap cars. Tesla is still the most competitive in this regard vs. other US automakers.
Tesla just announced their mass market robocar. Do you believe they won't actually produce it?
It makes sense to oppose subsidies; Tesla's stance has always been that if you remove the subsidies for gas cars, then EVs will already be highly competitive on merit alone.
Tesla has already fully opened the Supercharger network, including the plug interface. What other company would do that? Imagine if Comcast were forced to share all their cable lines.
"Tesla just announced their mass market robocar. Do you believe they won't actually produce it?"
- Tesla generally is 1-3 years late on actual production of vehicles
- Previous cars required only prosaic (but nonetheless difficult) industrial production scaling. The robotaxi requires Tesla to succeed in a huge AI leap, not an "Actually Indians" leap.
- Tesla's previous cars were built on a base of deliveries from previous models. The Cybertruck is generally a failure; it won't be mass market. I predict Tesla stock will tank when its sales tank in the next 6 months.
"Yes Chinese EVs have superior economics. It's quite difficult to have both high labor rates "living wages!" and cheap cars. Tesla is still the most competitive in this regard vs. other US automakers."
- Tesla doesn't have battery tech or drivetrain leadership. It may have a couple percent efficiency advantage. They have no battery economic advantage over other makers, especially since the future is about sodium-ion and LFP from CATL and other Chinese battery makers.
- Sodium-ion in particular is the great leveler for the car industry. At a third of the cost of NMC chemistry, it will enable even legacy automakers to make dirt-cheap cars. Tesla won't be willing to go downmarket.
"Makes sense to oppose subsidies, Tesla's stance has always been if you remove the subsidies for gas cars then EVs will already be highly competitive just on merit alone."
Are there mass-market sales of EVs from a dozen car companies? Is 30% of new car sales EVs, or even EVs and PHEVs combined?
Is it paramount to get consumer transportation electrified from a carbon emissions standpoint?
Fine, it's clear you aren't an environmentalist and are just pro-Tesla from a stock fundamentals / crass business perspective. But this statement ALSO means that Tesla isn't environmentalist anymore, and excluding them from subsidies is perfectly plausible.
"Yes Chinese EVs have superior economics. It's quite difficult to have both high labor rates "living wages!" and cheap cars. Tesla is still the most competitive in this regard vs. other US automakers."
- Tesla is better than the unionized automakers in terms of living wages and cost competition? Do you have data?
You can work hard and not be stressed. Stress often comes from lack of control. What are the severe dangers here?
The most significant danger seems to be that you get a low performance review. For a FAANG engineer that's a pretty weak danger, as there are many other jobs that will simply assume you're good.
Not just lack of control, but lack of a light at the end of the tunnel. A lot of people can tolerate high stress for a while provided they know there's a payoff at the end, an assurance that things are gonna get better.
But when that payoff is taken away or non-existent (no money, dead-end job, shit living conditions, no chance of home ownership, no prospect of a family or friend group, etc.), people can start to fall apart even with low amounts of stress.
It's also notable that this is in fact what the vast majority of the other OpenAI founders have now chosen to do.