For a known, fixed value of n, you can get amazing convergence rates. For example, for a smooth one-dimensional integrand, the basic midpoint rule gives O(1/n^2) and Simpson's rule gives O(1/n^4).
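To make those rates concrete, here is a rough sketch (my own illustration, not from the original discussion) comparing the empirical errors of the composite midpoint and Simpson's rules on f(x) = exp(x) over [0, 1]; doubling n should shrink the midpoint error by roughly 4x and the Simpson error by roughly 16x.

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule with n subintervals: error is O(1/n^2).
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def simpson(f, a, b, n):
    # Composite Simpson's rule with n subintervals (n even): error is O(1/n^4).
    assert n % 2 == 0
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

exact = math.e - 1  # integral of exp(x) over [0, 1]
for n in (4, 8, 16, 32, 64):
    err_mid = abs(midpoint(math.exp, 0.0, 1.0, n) - exact)
    err_simp = abs(simpson(math.exp, 0.0, 1.0, n) - exact)
    print(f"n={n:3d}  midpoint error={err_mid:.2e}  simpson error={err_simp:.2e}")
```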
Despite this, there are two major use cases for quasirandom sequences: the first is when n is not known upfront, so you can't create a lattice; the second is in higher dimensions. The reason for the latter is that the number of points required to make a lattice in D dimensions scales exponentially in D, whereas the convergence rates for random or quasirandom sampling are (amazingly!) independent of the dimension.
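As a rough illustration of the quasirandom side (my own sketch, using a Halton sequence just as a familiar example), the snippet below estimates an integral over the D-dimensional unit cube from n points; a tensor-product lattice with m points per axis would instead need m^D points.

```python
import math

def van_der_corput(i, base):
    # Radical inverse of integer i in the given base, a value in [0, 1).
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, rem = divmod(i, base)
        x += rem / denom
    return x

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def halton(i, dim):
    # i-th point of the Halton sequence in `dim` dimensions (dim <= 8 here),
    # built from van der Corput sequences in distinct prime bases.
    return [van_der_corput(i, PRIMES[d]) for d in range(dim)]

def integrate_halton(f, dim, n):
    # Average of f over n Halton points approximates the integral over [0,1]^dim.
    return sum(f(halton(i, dim)) for i in range(1, n + 1)) / n

# Example: the integral of prod(cos(x_d)) over [0,1]^5 is sin(1)^5.
f = lambda x: math.prod(math.cos(xd) for xd in x)
print(integrate_halton(f, dim=5, n=10_000), math.sin(1) ** 5)
```

Note that n here is just a loop bound: you can keep adding Halton points to refine the estimate, which is exactly the "n not known upfront" situation where a lattice won't work.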
I think you would really like Owen's paper on this topic: it is very comprehensive and yet extremely readable [1].
[1] http://statweb.stanford.edu/~owen/mc/Ch-quadrature.pdf