The hardest part of implementing a quantum state vector simulation is understanding how the tensor product expands an n-qubit gate to apply to an m-qubit system. (A third of the linked post is dedicated to it; appropriately.)
If you use a language or framework that's based on tensors to start with, things can be quite succinct (though you still need to understand the concepts). For example, in numpy, if you store the state vector in an array of shape (2,) * num_qubits, you can apply gates as one-liners using np.einsum:
import numpy as np
# Init 4-qubit system with all amplitude in the 0000 state.
state = np.zeros(shape=(2,) * 4, dtype=np.complex64)
state[(0,) * 4] = 1
# Unitary matrix of the Hadamard gate.
H = np.array([[1, 1], [1, -1]], dtype=np.complex64) / 2**0.5
# Apply Hadamard gate to the third qubit of the four-qubit system.
state = np.einsum('XY,abXd->abYd', H, state)
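As a quick sanity check (a sketch continuing the snippet above, not from the original comment): after H on the third qubit the state should be (|0000> + |0010>)/sqrt(2), which you can confirm by flattening the tensor back into the usual 16-entry state vector:

# Flatten in C order so the index bits line up with qubit order (first axis = most significant bit).
amplitudes = state.reshape(-1)
assert np.isclose(amplitudes[0b0000], 2**-0.5)
assert np.isclose(amplitudes[0b0010], 2**-0.5)
assert np.isclose(np.sum(np.abs(amplitudes)**2), 1)  # total probability is 1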
Here's a post explaining what np.einsum does: https://obilaniu6266h16.wordpress.com/2016/02/04/einstein-su... . In the einsum string above, 'XY,abXd->abYd', the 'XY' part names the input and output axes of the Hadamard matrix, and the 'abXd->abYd' part says to multiply the matrix into the third axis of the state tensor. The notation is pretty general, able to permute and repeat axes in order to express things like traces, transposes, dot products, and so on.
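To show the same trick with a gate that touches two axes at once, here's a sketch (my own extension of the example, not from the post) applying a CNOT to the first two qubits: reshape the 4x4 unitary into a (2, 2, 2, 2) tensor so each qubit gets its own axis, then contract its two input axes against two axes of the state. The qubit-ordering convention (first qubit is the control, matrix rows are outputs) is an assumption of this sketch.

# CNOT with the first qubit as control and the second as target.
# Reshaped so the indices read [A, B, X, Y] = [out_control, out_target, in_control, in_target].
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=np.complex64).reshape(2, 2, 2, 2)
state = np.einsum('ABXY,XYcd->ABcd', CNOT, state)

# The same notation also covers the transposes, traces, and dot products mentioned above:
M = np.arange(4).reshape(2, 2)
np.einsum('ij->ji', M)           # transpose
np.einsum('ii->', M)             # trace
np.einsum('i,i->', M[0], M[1])   # dot product of two rows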
> The hardest part of implementing a quantum state vector simulation is understanding how the tensor product expands an n-qubit gate to apply to an m-qubit system. (A third of the linked post is dedicated to it; appropriately.)
This feels like it could be the "git gets easier once you understand branches are homeomorphic endofunctors mapping submanifolds of a Hilbert space" of physics.