
Recently, I have been working on some things that sometimes overlap with quantum computation, and I have realized that the way all the quantum languages approach this is wrong. If you are programming explicitly with CNOT gates, you are doing it wrong. A Python programmer doesn't care about logic gates, and neither should a quantum programmer. To that end, I'm convinced that the interplay between coinduction and induction, or coalgebra and algebra, is the way forward.
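
To make that concrete, here is a rough sketch of that interplay in plain Python (the names and setup are mine, nothing quantum-specific): an unfold is the coalgebra side, growing a structure from a seed, and a fold is the algebra side, collapsing it back down.

    # Illustrative sketch (my own names, no quantum library):
    # unfold (coalgebra) grows a list from a seed, fold (algebra)
    # collapses it; composing the two is the classic hylomorphism.

    def unfold(step, seed):
        """Coalgebra: expand a seed into a list, one step at a time."""
        out = []
        while True:
            nxt = step(seed)
            if nxt is None:
                return out
            value, seed = nxt
            out.append(value)

    def fold(combine, init, xs):
        """Algebra: collapse a list back down to a summary value."""
        acc = init
        for x in xs:
            acc = combine(acc, x)
        return acc

    # Example: binary digits of 42 (unfold), then reassemble (fold).
    digits = unfold(lambda n: None if n == 0 else (n % 2, n // 2), 42)
    assert fold(lambda acc, d: acc * 2 + d, 0, reversed(digits)) == 42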



Maybe not your point, but a lot of programmers like to take this as "I should invent the Python of quantum computing", which is about as silly as approaching someone in 1960 and telling them they shouldn't be thinking about electronics at all.


This very well may be true, in the same sense that a Python programmer doesn't (usually) care about assembly language. But at their lowest levels, quantum computers are executing a sequence of gates, and much of the work and sophistication of a quantum compiler is to optimize said sequences for rapid and accurate execution. So while gate sequences may not be the end-game to the programmers of tomorrow, they're undoubtedly a necessary component in any software stack.
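
For a feel of what that optimization looks like, here is a toy peephole pass in Python (purely illustrative, not any real compiler's code): adjacent self-inverse gates acting on the same qubits multiply to the identity and can be dropped.

    # Toy peephole optimizer: cancel adjacent self-inverse gates
    # on the same qubits, e.g. H H = I and CNOT CNOT = I.

    SELF_INVERSE = {"H", "X", "Y", "Z", "CNOT"}

    def cancel_pairs(circuit):
        """circuit: list of (gate_name, qubit_indices) tuples."""
        out = []
        for gate in circuit:
            if out and out[-1] == gate and gate[0] in SELF_INVERSE:
                out.pop()  # the adjacent pair is the identity
            else:
                out.append(gate)
        return out

    circuit = [("H", (0,)), ("CNOT", (0, 1)), ("CNOT", (0, 1)), ("H", (0,))]
    print(cancel_pairs(circuit))  # -> []  (the whole circuit was identity)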


I never said that they weren't. It's just that most programming languages don't deal with gates as part of their API.


This is true, but then the goal would be to have a higher-level language above the quantum gates. And at that point, I'd guess the code is less educational?

That is, at the end of the day, something like Shor's algorithm can be reduced to some math constructs we roughly know. The speedup comes from those constructs only being efficient when run on quantum gates. Implementing the code using abstract quantum gates isn't to try and compete, but to try and understand the gates and how they work at a logical level.
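
To make that concrete, here is the classical scaffolding of Shor's in Python, with the order-finding step stubbed out by brute force. That stub is exactly the part quantum gates make efficient; everything around it is ordinary number theory.

    # Classical skeleton of Shor's algorithm. find_order is brute
    # force here, so this runs but gains nothing; on a quantum
    # computer that one step is where the speedup lives.
    from math import gcd
    from random import randrange

    def find_order(a, n):
        """Smallest r with a^r = 1 (mod n) -- the quantum step."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n):
        """Factor an odd composite n (demo only, no edge cases)."""
        while True:
            a = randrange(2, n)
            g = gcd(a, n)
            if g > 1:
                return g  # lucky guess: a already shares a factor
            r = find_order(a, n)
            if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                return gcd(pow(a, r // 2) - 1, n)

    print(shor_factor(15))  # -> 3 or 5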

This is like learning how boolean gates work to understand some ideas of how computers work. The only people who really think in those terms are the CPU designers. Teaching the next round of designers happens by working with boolean models to get there.

And you are correct that we may have other quantum constructs someday. Just like much of what goes into a CPU isn't strictly OR/AND/etc. With the way gates are wired, it can be confusing to folks, as the input signal can be seen as destroyed in the circuit, while a deterministic signal is captured on the other side.

Now, the above is all from my weak intuition here. I would not be shocked to find I'm wrong on parts.


> That is, at the end of the day, something like Shor's algorithm can be reduced to some math constructs we roughly know.

Ok, what's the programmatic model there then?


At a higher level, it is "prime_factors(n)", no? The Shor's part could be seen as a smaller piece of that, but it is still doing something we have named.

Again, though, these programs are not meant to write an algorithm in quantum code. These are to understand the building blocks of a quantum computer. Just as a BDD/ZDD can be used to find optimal gate designs of an adder built on boolean chains. You first have to understand the boolean chains.
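
To make the classical analogy concrete, here is that adder as a chain of boolean gates in Python (just illustrative): the kind of circuit those BDD/ZDD tools search over.

    # A full adder expressed as a boolean chain: two XORs, two ANDs,
    # and an OR. This is the structure BDD/ZDD-based tools optimize.

    def full_adder(a, b, cin):
        s1 = a ^ b                      # XOR
        sum_bit = s1 ^ cin              # XOR
        carry = (a & b) | (s1 & cin)    # two ANDs feeding an OR
        return sum_bit, carry

    def add4(x, y):
        """Ripple two 4-bit numbers through four full adders."""
        carry, total = 0, 0
        for i in range(4):
            bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            total |= bit << i
        return total | (carry << 4)

    assert add4(9, 7) == 16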


To be honest, quantum computers might come and go; I want to see the invariant that will stay the same across all of them.


> the interplay between ... coalgebra and algebra

If you have not already seen it, you may be interested in Squiggol (to get an idea of its age, it probably had some indirect influence upon Python).

cf Charity: https://prism.ucalgary.ca/server/api/core/bitstreams/756b50a...


I have seen Charity, yes; total functional programming is the future.


(disclaimer: I work on the team developing Q# and its tooling)

That's part of the goal of Q#. It's designed to be a language that lets you start from quantum gates, work efficiently with quantum concepts such as 'adjoint' and 'controlled' operations, and build those up into a higher level of abstraction. You can see an old post on some of the reasoning when it was first developed at <https://devblogs.microsoft.com/qsharp/why-do-we-need-q/>.
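
For a rough picture of what those two concepts mean in linear-algebra terms (a numpy sketch of the underlying math, not Q# or its implementation): the adjoint of a gate sequence reverses the order and conjugate-transposes each gate, and a controlled gate is the block matrix that acts only when the control qubit is |1>.

    # Linear-algebra meaning of 'adjoint' and 'controlled' -- a
    # numpy sketch, not how Q# represents them internally.
    import numpy as np

    def adjoint(gates):
        """Reverse the sequence and conjugate-transpose each gate."""
        return [g.conj().T for g in reversed(gates)]

    def controlled(u):
        """Block matrix [[I, 0], [0, U]]: identity when control is |0>."""
        n = u.shape[0]
        return np.block([[np.eye(n), np.zeros((n, n))],
                         [np.zeros((n, n)), u]])

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    # controlled(X) is exactly the CNOT matrix, and running a
    # sequence followed by its adjoint gives back the identity.
    assert np.allclose(controlled(X),
                       [[1, 0, 0, 0], [0, 1, 0, 0],
                        [0, 0, 0, 1], [0, 0, 1, 0]])
    seq = [H, X]                 # apply H, then X: matrix is X @ H
    adj = adjoint(seq)           # apply X dagger, then H dagger
    assert np.allclose(adj[1] @ adj[0] @ (seq[1] @ seq[0]), np.eye(2))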

Another consideration, given some of the points raised here: even on today's state-of-the-art hardware you typically get only a couple thousand gates at best before noise overwhelms the system and the qubits 'decohere' (https://en.wikipedia.org/wiki/Quantum_decoherence). So you do often want to develop at a level where you can squeeze every last gate out of whatever program you're writing (if you intend to run it on a quantum computer and not just simulations).

Since the post is about quantum simulation, you can see the one our team built in Rust at https://github.com/qir-alliance/qir-runner/blob/main/sparses... . It uses 'sparse' simulation, meaning any state with a probability of 0 isn't tracked, which turns out to be quite a few states in a lot of algorithms. This allows you to simulate many more qubits than you can with a full state simulator (where you need to track 2^n states for n qubits). It also does some other nifty tricks, like eliding or combining gates before they are performed, to get even more perf. We use it in our new Q# stack (https://github.com/microsoft/qsharp) to run program simulations in our CLI or in the browser (such as on our new https://quantum.microsoft.com site), or inside VS Code (desktop or web).
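
As a toy picture of the sparse idea (just a dict from basis state to amplitude, with zero-amplitude entries simply absent; nothing like as engineered as the Rust simulator above):

    # Minimal sparse state-vector toy: the state is a dict mapping
    # basis-state integers to amplitudes; absent keys are amplitude 0.

    def apply_h(state, q):
        """Hadamard on qubit q."""
        mask, r, new = 1 << q, 2 ** -0.5, {}
        for b, a in state.items():
            lo, hi = b & ~mask, b | mask          # bit q cleared / set
            sign = -1 if b & mask else 1
            new[lo] = new.get(lo, 0) + a * r
            new[hi] = new.get(hi, 0) + a * r * sign
        return {b: a for b, a in new.items() if abs(a) > 1e-12}

    def apply_cnot(state, control, target):
        """Flip the target bit wherever the control bit is set."""
        return {b ^ (1 << target) if b & (1 << control) else b: a
                for b, a in state.items()}

    # Bell pair: only 2 of the 4 basis states are ever stored.
    state = {0b00: 1.0}
    state = apply_h(state, 0)
    state = apply_cnot(state, 0, 1)
    print(state)  # {0: 0.707..., 3: 0.707...}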

We are looking to evolve the Q# language and improve the quantum development experience, with a focus on a 'scalable' quantum future where gate count and noise are less of a limit, moving development higher up in abstraction - as you outline. So if it is something you have an interest in, we're more than happy to get your input on the qsharp GitHub repo linked above.


TBH I was never quite sold on qubits. Is analog quantum computing out of the question?



