qnleigh's comments | Hacker News

Ugh just looking at their list, this paper gets a hard no from me. Intelligence isn't mastery of some arbitrary list of mathematical subjects. It's the ability to learn and apply these subjects (or anything else) after minimal exposure to the topic.

For a bar as high as AGI (and not just 'the skills of an educated person,' which is what this paper seems to be describing), we should include abstract mathematical reasoning, and the ability to generate new ideas or even whole subfields to solve open problems.


If sodium batteries are so much cheaper, why is the emphasis of this article on batteries for trucks and not grid-scale storage? Isn't the latter much more impactful?

Also, naively I would expect sodium batteries to be heavier than lithium, which would make them worse for transportation but still fine for energy storage.


I haven't seen any inverters for grid purposes that handle the wide voltage range that sodium cells produce. It may be that the inverter manufacturers haven't got there yet and are waiting for the batteries to be available and cheap before it makes sense.

I think a lot of households will choose sodium just because of how cheap it will be, but not until the usual manufacturers offer the basic inverter equipment to make use of it.


They haven't produced many yet. And when they do, they'll probably sell them for applications where they can charge the most and make a profit. The Pioneer Na (sodium) portable power station in the article isn't cheap. Grid storage will come when production ramps up.

Probably familiarity bias; the author has more contact with EVs and/or expects the reader to have more contact with them.

Isn't sodium also heavier?

> They're larger and heavier for the same capacity, but the lower price makes up for it

So yes, the battery will be heavier because sodium's heavier, but it's so much cheaper that you can afford the extra footprint.
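
To put a rough number on that (a back-of-the-envelope sketch using textbook constants, not figures from the article): sodium's molar mass is about 23 g/mol versus lithium's 6.9 g/mol, so the theoretical charge stored per gram of the working ion is roughly 3x lower, before even accounting for sodium cells' somewhat lower voltage.

    # Back-of-the-envelope: theoretical specific capacity of the working ion.
    FARADAY_MAH_PER_MOL = 96485 / 3.6  # ~26801 mAh/mol

    molar_mass = {"Li": 6.94, "Na": 22.99}  # g/mol
    for ion, m in molar_mass.items():
        print(f"{ion}: ~{FARADAY_MAH_PER_MOL / m:.0f} mAh/g")
    # prints ~3862 mAh/g for Li vs ~1166 mAh/g for Na

Real packs narrow the gap a lot, since electrode host materials and packaging dominate the mass, but the direction holds: more kilograms per kWh, which matters for a truck and hardly at all for a container sitting next to a substation.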


Yes, the claims here allow the classical computer to use a random number generator.

I know what you're talking about, but I think you happened to pick a bad example here. This wind tunnel analogy resembles a common criticism of the prior experiments done by Google and others over the last few years. Those experiments ran highly unstructured, arbitrary circuits that don't compute anything useful. They hardly resembled the kind of results you would expect from a general-purpose, programmable computer. It's a valid criticism, and it seems like the above commenter came to this conclusion on their own.

To that comment: the present result is a step up from these older experiments in that they a) run a more structured circuit, b) use the device to compute something reproducible (as opposed to sampling randomly from a certain probability distribution), and c) go toward simulating a physical system of real-world relevance to chemistry.

Now you might say that even c) is just a quantum computer simulating another quantum thing. All I'll say is that if you would only be convinced by a quantum computer factoring a large number, don't hold your breath: https://algassert.com/post/2500


Thanks for the reply and explanation. I admit that I'm not well-versed in quantum computers (trying to understand even basic quantum mechanics seems to elude me), and I deeply appreciate the link for further reading.

Just gonna leave this here: Why haven't quantum computers factored 21 yet? https://algassert.com/post/2500

This excerpt misses the wider point of the paper. The paragraph immediately following the one you quote still does make claims of quantum advantage:

"Our second order OTOC circuits introduce a minimal structure while remaining sufficiently generic that circumvents this challenge by exploiting the exponential advantage provided by including time inversion in the measurement protocol, see arXiv:2407.07754."

The advantage claimed by the paper isn't about Hamiltonian learning (i.e., extracting model parameters from observational data), but instead about computing the expectation value of a particular observable. They acknowledge that the advantage isn't provable (even the advantage of Shor's algorithm isn't provable), but they argue that there likely is an advantage.


Shor’s algorithm’s advantage isn’t proven, but a proof that integer factorization doesn’t admit a classical algorithm faster than O((log N)^3) could in principle be found. The same applies to Google’s artificial problem.
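
For context, the standard textbook scalings (not figures from the parent comments) are polynomial for Shor versus sub-exponential for the best known classical method, the general number field sieve:

    \text{Shor (quantum): } O\big((\log N)^3\big)
    \text{GNFS (classical): } \exp\!\Big(\big((64/9)^{1/3} + o(1)\big)\,(\ln N)^{1/3}(\ln\ln N)^{2/3}\Big)

The advantage stays conjectural only because nobody has ruled out a much faster classical factoring algorithm, not because the gap over known methods is small.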

An analogy which is closer to Google's experiment: measuring versus calculating the energy gaps in benzene to an arbitrary accuracy.

It is "verifiably" faster to measure those with state of the art "quantum tools" but that does not improve our understanding of other aromatic molecules.

(we may still get some insights about anthracene however)

Google's advantage can be satisfactorily summarised as "not having to write the problem (i.e. the off-diagonal terms) in terms of a classical basis" -- there is a severe overhead in having to represent qubits as bits.

Still, I suspect that the 13,000x figure came from not putting in the effort to implement the aforementioned "minimal structure" in their classical counterparts. They emphasized "echoes" and "ergodicity", and I think the "quantum" can further be dropped :)

In general, I do believe that whatever "informational advantage" we can get from these experiments should likewise be used to improve classical calculations.

As another example: in the arXiv paper linked by GP, they talk about provably efficient shallow classical shadows.


The quantum algorithm that would break certain kinds of public-key cryptography schemes (not even the core part of Bitcoin blockchains, which are not vulnerable to quantum computers) would take days to weeks to break a single key [0]. This is another reason why we will have plenty of warning before quantum computing causes any major disruptions to daily life.

What I would start worrying about is the security of things like messages sent via end-to-end encrypted services like WhatsApp and Signal. Intercepted messages can be saved now and decrypted any time in the future, so it's better to switch to more robust cryptography sooner rather than later. Signal has taken steps in this direction recently: https://arstechnica.com/security/2025/10/why-signals-post-qu....

[0] https://arxiv.org/pdf/2505.15917



Usually, the crypto should already have forward secrecy even without being PQ-safe (e.g., via https://en.wikipedia.org/wiki/Double_Ratchet_Algorithm), so in practice the attacker would need to break many successive session keys, which rotate every time a new message is sent.
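
For intuition, here's a toy Python sketch of the symmetric-ratchet half of that idea (a simplified illustration, not Signal's actual KDF or wire format): each message advances a one-way chain, so a key captured at some point doesn't unlock earlier messages.

    import hmac, hashlib

    def ratchet_step(chain_key: bytes):
        """Advance the chain and derive a one-time message key (toy KDF chain)."""
        next_chain_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
        message_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
        return next_chain_key, message_key

    ck = b"\x00" * 32  # placeholder chain key from the initial key agreement
    for i in range(3):
        ck, mk = ratchet_step(ck)  # fresh key per message, old ones discarded
        print(f"message {i}: key {mk.hex()[:16]}...")

Because each step is one-way, compromising the chain key at message 3 reveals nothing about the keys used for messages 0-2; an attacker sitting on stored ciphertexts has to break each message key separately (and the real protocol additionally mixes in fresh Diffie-Hellman outputs on top of this).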

> not even the core part of Bitcoin blockchains, which are not vulnerable to quantum computers

Um, what? Shor’s algorithm can take the public key of a wallet (present on any outgoing transaction in the ledger) and produce its private key. So now you can hijack any wallet that has transferred any Bitcoin. Notably only one successful run of the algorithm is needed per wallet, so you could just pick a big one if it takes weeks.

It probably wouldn’t help you mine in practice, sure. Technically it would give you better asymptotic mining performance (via Grover’s algorithm) but almost certainly worse in practice for the foreseeable future.
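
To spell out the asymptotics (standard Grover scaling, not a claim about any particular hardware): if one hash attempt beats the difficulty target with probability p, then

    \text{classical: } \Theta(1/p) \text{ expected hash evaluations}, \qquad
    \text{Grover: } \Theta\big(\sqrt{1/p}\,\big) \text{ quantum evaluations}

That's a quadratic reduction in the number of evaluations, but each quantum SHA-256 evaluation is expected to be enormously slower and more expensive than an ASIC hash, which is why it loses in practice.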


> public key of a wallet (present on any outgoing transaction in the ledger)

Genuine question: is this true? I only know a little bit about Bitcoin, but I thought there was a notion of an "extended public key" that's not exposed to the ledger, where each individual public key on the ledger is only used once, or something like that.

I'm not at all confident in my understanding, so I'd love if you or someone else knowledgeable could help fill in the gaps.


Extended public keys can be used to generate a family of addresses, but each transaction still needs the public key for any address that has sent money. Someone who uses this scheme religiously can keep most of their money in addresses with no outgoing transactions, meaning their public key is actually secret and therefore cannot be attacked. But there are so many addresses with outgoing transactions and huge balances that it wouldn’t make a difference to an attacker - they could skim a fortune by cashing out from wallets that are not so well protected.
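
To make that concrete, here's a toy Python sketch of why an address with no outgoing transactions keeps its public key hidden (simplified; real P2PKH addresses use RIPEMD160(SHA256(pubkey)) plus a version byte and checksum, and the public key is revealed in the script that spends from the address):

    import hashlib

    def toy_address(pubkey: bytes) -> str:
        # The ledger only commits to a hash of the public key until you spend.
        return hashlib.sha256(pubkey).hexdigest()[:40]

    pubkey = bytes.fromhex("02" + "11" * 32)  # placeholder 33-byte compressed key
    print("address:", toy_address(pubkey))
    # Shor's algorithm needs the public key itself; a hash of it is not enough.
    # The key only lands on-chain once this address signs an outgoing transaction.
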

The real issue is that as soon as this is done even once, Bitcoin's value plummets to a fraction of what it is today while people scramble to fix the algorithms.

Thanks for explaining!

This is quite different from their previous random circuit sampling (RCS) experiments that have made headlines a few times in the past. The key difference from an applied standpoint is that the output of RCS is a random bitstring which is different every time you run the algorithm. These bitstrings are not reproducible, and also not particularly interesting, except for the fact that only a quantum computer can generate them efficiently.

The new experiment generates the same result every time you run it (after a small amount of averaging). It also involves running a much more structured circuit (as opposed to a random circuit), so all-in-all, the result is much more 'under control.'

As a cherry on top, the output has some connection to molecular spectroscopy. It still isn't that useful at this scale, but it is much more like the kind of thing you would hope to use a quantum computer for someday (and certainly more useful than generating random bitstrings).


I asked the same question here a while ago. I don't know any good reasons for disabling zoom, but I did learn how to override it at least:

https://news.ycombinator.com/item?id=45289150#45319333

