The hardest part of implementing a quantum state vector simulation is understanding how the tensor product expands an n-qubit gate to apply to an m-qubit system. (A third of the linked post is dedicated to it; appropriately.)
If you use a language or framework that's based on tensors to start with, things can be quite succinct (though you still need to understand the concepts). For example, in numpy, if you store the state vector in an array of shape (2,) * num_qubits, you can apply gates as one-liners using np.einsum:
import numpy as np
# Init 4-qubit system with all amplitude in the 0000 state.
state = np.zeros(shape=(2,) * 4, dtype=np.complex64)
state[(0,) * 4] = 1
# Unitary matrix of Hadamard gate
H = np.array([[1, 1], [1, -1]], dtype=np.complex64) / 2**0.5
# Apply Hadamard gate to third qubit of four qubit system.
state = np.einsum('XY,abXd->abYd', H, state)
Here's a post explaining what np.einsum does: https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-su... . In the above einsum string 'XY,abXd->abYd', the 'XY' part names the input and output axes of the Hadamard matrix, and the 'abXd->abYd' part says to multiply the matrix into the third axis of the state tensor. The notation is pretty general, able to permute and repeat axes in order to express things like traces, transposes, dot products, etc.
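To give a sense of that generality, here are a few small examples of my own (not from the linked post) showing the same notation expressing a trace, a transpose, a dot product, and a matrix product:

```python
import numpy as np

A = np.arange(4.0).reshape(2, 2)       # [[0, 1], [2, 3]]
B = np.arange(4.0, 8.0).reshape(2, 2)  # [[4, 5], [6, 7]]
v = np.array([1.0, 2.0])

trace = np.einsum('ii->', A)           # repeating an axis label: trace
transpose = np.einsum('ij->ji', A)     # permuting axes: transpose
dot = np.einsum('i,i->', v, v)         # summing a shared axis: dot product
matmul = np.einsum('ij,jk->ik', A, B)  # contracting the inner axis: matmul

# Each one-liner matches the dedicated numpy routine.
assert trace == np.trace(A)
assert np.allclose(transpose, A.T)
assert dot == v @ v
assert np.allclose(matmul, A @ B)
```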
> The hardest part of implementing a quantum state vector simulation is understanding how the tensor product expands an n-qubit gate to apply to an m-qubit system. (A third of the linked post is dedicated to it; appropriately.)
This feels like it could be the "git gets easier once you understand branches are homeomorphic endofunctors mapping submanifolds of a Hilbert space" of physics.
One of the aspects emphasized in TFA is being able to simulate gates of any dimension/number of qubits (like a 3-qubit Toffoli or a 5-qubit Molmer-Sorensen), instead of just 1- and 2-qubit gates. Have you thought about extending your simulator to support gates of greater than two qubits?
I guess back when I wrote it I didn't see the need, given that single qubit gates along with CNOT are universal and they're implemented. I think erring on the side of 'as few features as is educational' was my mentality on this.
See e.g. equations (5) and (6) from [1], where any qubit subset of a system might undergo the action of the M–S gate. But definitely the usual presentation of the physics of M–S is on a two qubit system.
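For what it's worth, a gate on more than two qubits is only a small extension of the einsum trick from upthread. Here's a sketch of my own (not from the linked post) applying a 3-qubit Toffoli by reshaping its 8x8 unitary so each qubit gets its own input and output axis:

```python
import numpy as np

num_qubits = 4
state = np.zeros((2,) * num_qubits, dtype=np.complex64)
state[1, 1, 0, 0] = 1  # |1100>: both control qubits already set

# Toffoli unitary: identity except it swaps |110> and |111>.
toffoli = np.eye(8, dtype=np.complex64)
toffoli[6, 6] = toffoli[7, 7] = 0
toffoli[6, 7] = toffoli[7, 6] = 1
toffoli = toffoli.reshape((2,) * 6)  # axes: (out0, out1, out2, in0, in1, in2)

# Contract three state axes at once: qubits 0, 1, 2 in; qubits 0, 1, 2 out.
state = np.einsum('XYZxyz,xyzd->XYZd', toffoli, state)
print(state[1, 1, 1, 0])  # amplitude now sits on |1110>: the target flipped
```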
I'm deciding between Common Lisp and Mojo, so maybe I will try to implement this in Mojo and compare. Anybody have any thoughts on this, since I am a mediocre Lisper and a beginning Mojo person. I am familiar with Numpy and I also program in APL and J.
May I suggest Julia? Efficient and capable of (Common Lisp style) abstractions. Unsurprisingly, there are more than a few quantum projects in the ecosystem already.
I've used Julia with Pluto and Weave to create engineering calculations and reports when I was working at an engineering company, but I want something more general purpose. I've used CL for little scripts and utilities with Portacle, and I find I am more comfortable with the Lisp way of doing things. I had used Matlab before, so Julia's syntax is very familiar. I do like the Models.jl library.
Don't fret if you think you're mediocre. I myself have been trying to get through On Lisp since 2008. Then, after that, Let Over Lambda. Not a better book, but a very strong one; I can tell the guy, while not the best at humblebragging, has so much cool stuff in there.
ANSI Common Lisp, and that's plenty to not be mediocre! I did get through the whole thing in 2009; what a great book!
Thanks for the motivation. I love Lisp, and even was into Shen[0] for a while, but Common Lisp has all the libraries and legacy tutorials and books, so I will stick it out a bit more. Which matrix/math libraries do you recommend to compete with the likes of Numpy and Mojo for ML apps? Because I love APL, April, Array Programming Re-Imagined in Lisp[1].
"Thanks for interviewing with us today at xyz... please click on that link I just sent you in the chat. Today we will be doing a little quantum coding question. Try to speak out loud so I know what you're thinking and your thought process."
> In Common Lisp, the matrix <a 2x2 matrix with 1 2 in first row and 3 4 in second row> is written #2A((1 2) (3 4)).
I do understand that the row view and column view are symmetrical in linear algebra. But nevertheless, unless I'm really getting something wrong, matrices are typically (at least in undergraduate mathematics courses) introduced as collections of _column vectors_: the matrix is a linear transformation that sends a vector into the space spanned by those column vectors. And you can see this by how common it is to define vectors to be column vectors, e.g. using x^T notation to indicate that, or saying it explicitly. So, if I'm not wrong there, why do all programming languages define matrices as collections of rows??
Programming languages usually have arrays, not matrices, and I think row-ordered array data has been a natural way to think in many fields (e.g., databases, finance, etc.), especially when you consider that adding a row ought to be less burdensome than adding a column in these contexts.
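A small illustration of my own of the two views in numpy: the array is stored as a list of rows, yet the matrix-vector product is still a linear combination of the columns, so neither view is lost.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])   # A[0] is the first *row*, (1, 2)
x = np.array([10, 20])

# Row view: each output entry is a row dotted with x.
row_view = np.array([A[0] @ x, A[1] @ x])

# Column view: A @ x is x[0] * (first column) + x[1] * (second column).
col_view = x[0] * A[:, 0] + x[1] * A[:, 1]

# Both views give the same product.
assert np.array_equal(A @ x, row_view)
assert np.array_equal(A @ x, col_view)
```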
I see, thanks, I guess you're right. I'd been thinking of languages with matrices that support matrix operations, but yes, I think I'd been missing the point you make: it would be confusing for those to be list-of-columns if list-of-rows is used elsewhere in the language. It's a bit of a shame for working with matrices.
One of the authors, Michael Nielsen, also wrote the "bible" of quantum computing, as referenced in a sibling comment. It remains one of the most popular and cited introductory texts on the subject, though today there are many more options on the market.
Quantum Computation and Quantum Information by Nielsen and Chuang is the standard textbook for quantum computing. It has excellent background chapters on linear algebra and quantum mechanics that will get you a lot of mileage.
Another book that I would recommend is Introduction to Classical and Quantum Computing by Thomas Wong. It's recent and has lots of examples, including lots of code, to show how to work with the math.
Maybe not of interest, but if one comes more from an ML background, there's a nice book/overview by Maria Schuld and Francesco Petruccione, called Machine Learning with Quantum Computers, which mostly covers variational algorithms and quantum kernel methods. However, the best intro material is still the one by Nielsen and Chuang.
Moreover, if you feel more comfortable running examples, there are Qiskit, PennyLane, and other libraries which share a lot of tutorials and notebooks (and some can be run on a true quantum computer if you have the money, or the time to wait in queue!)
Recently, I have been working on some things that sometimes overlap with quantum computation, and I have realized that the way all the quantum languages approach this is wrong. If you are programming explicitly with CNOT gates, you are doing it wrong. A Python programmer doesn't care about logic gates, and neither should a quantum programmer. To that end, I'm convinced that the interplay between coinduction and induction, or coalgebra and algebra, is the way forward.
Maybe not your point, but a lot of programmers like to take this as "I should invent the Python of quantum computing", which is just about as silly as trying to approach anyone in 1960 and telling them they shouldn't be thinking about electronics at all.
This very well may be true, in the same sense that a Python programmer doesn't (usually) care about assembly language. But at their lowest levels, quantum computers are executing a sequence of gates, and much of the work and sophistication of a quantum compiler is to optimize said sequences for rapid and accurate execution. So while gate sequences may not be the end-game to the programmers of tomorrow, they're undoubtedly a necessary component in any software stack.
This is true, but at that point, the goal would be to have a higher level language above the quantum gates. And at that point, I'd guess the code is less educational?
That is, at the end of the day, something like Shor's algorithm can be reduced to some math constructs we roughly know. The speedup comes from these only being efficient using quantum gates. Implementing the code using abstract quantum gates isn't to try and compete, but to try and understand the gates and how they work at a logical level.
This is like learning how boolean gates work to understand some ideas of how computers work. The only people that really think in many of those terms are the CPU designers. The next round of designers is taught by working with boolean models to get there.
And you are correct that we may have other quantum constructs someday. Just like much of what goes into a CPU isn't strictly OR/AND/etc. With the way gates are wired, it can be confusing to folks as the input signal can also be seen as destroyed in the circuit, but a deterministic signal is captured on the other side.
Now, the above is all from my weak intuition here. I would not be shocked to find I'm wrong on parts.
At a higher level, it is "prime_factors(n)", no? The Shor's part could be seen as a smaller piece, but it is still doing something we have named.
Again, though, these programs are not to write an algorithm in quantum code. These are to understand the building blocks of a quantum computer. Just as a BDD/ZDD can be used to find optimal gate designs of an adder built on boolean chains. You first have to understand the boolean chains.
(disclaimer: I work on the team developing Q# and its tooling)
That's part of the goal of Q#. It's designed to be a language which allows you to build up from quantum gates, efficiently work with quantum concepts such as 'adjoint' and 'controlled' operations, and build that up into a higher level of abstraction. You can see an old post as to some of the reasoning when it was first developed at <https://devblogs.microsoft.com/qsharp/why-do-we-need-q/>.
Another consideration to some of the points raised here, is that even on today's state-of-the-art hardware you typically only get a couple thousand gates at best before noise overwhelms the system and the qubits 'decohere' (https://en.wikipedia.org/wiki/Quantum_decoherence). So you do often want to develop at a level where you can squeeze every last gate out of whatever program you're writing. (If you intend to run it on a quantum computer and not just simulations).
Being that the post is about quantum simulation, you can see the one our team built in Rust at https://github.com/qir-alliance/qir-runner/blob/main/sparses... . This uses 'sparse' simulation, which means any state with a probability of 0 isn't tracked, which turns out to be quite a few in a lot of algorithms. This allows you to simulate many more qubits than you can with a full state simulator (where you need to track 2^n states for n qubits). It also does some other nifty tricks where you can elide or combine gates before they are performed to get even more perf. We use it in our new Q# stack (https://github.com/microsoft/qsharp) to run program simulations in our CLI or in the browser (such as on our new https://quantum.microsoft.com site), or inside VS Code (desktop or web).
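To illustrate the sparse idea, here's a toy sketch of my own (not the qir-runner code): keep only the nonzero amplitudes in a dict keyed by the basis state's bit pattern, and drop entries that cancel to zero, so untouched qubits cost nothing.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, qubit):
    """state: {basis_int: amplitude}; gate: 2x2 ndarray; qubit: bit index."""
    new_state = {}
    mask = 1 << qubit
    for basis, amp in state.items():
        bit = (basis >> qubit) & 1
        for new_bit in (0, 1):
            out = gate[new_bit, bit] * amp
            if out != 0:
                key = (basis & ~mask) | (new_bit << qubit)
                new_state[key] = new_state.get(key, 0) + out
    # Drop amplitudes that cancelled to zero so they aren't tracked.
    return {k: v for k, v in new_state.items() if v != 0}

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = {0: 1.0}                              # |0...0>, any number of qubits
state = apply_single_qubit_gate(state, H, 0)  # ~{0: 0.707, 1: 0.707}
state = apply_single_qubit_gate(state, H, 0)  # H·H = I, back to ~{0: 1.0}
```

Only two dictionary entries ever exist here, regardless of how many qubits the system notionally has; a full state simulator would track 2^n amplitudes.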
We are looking to evolve the Q# language and improve the quantum development experience, with a focus given to a 'scalable' quantum future where gate count and noise is less of a limit, and moving development higher up in abstraction - as you outline. So if it is something you have an interest in, we're more than happy to get the input on the qsharp GitHub repo linked to above.
Not everyone is using PHP, sorry. I promise that once you get past the syntax shock, Lisp is not that difficult to grok. It looks like some people here took your comment the wrong way but please don't be discouraged by that. There is a genuine lack of good Lisp outreach, compounded by trolls and bombastic essays that are high on platitudes and low on actual insight, and I'm slowly working on correcting that in my spare time.