That's what it means here, too. The idea is that if you take the N-dimensional state space of neural activity (each dimension being the activity of a neuron, electrode, voxel, or whatever) and look at how the state evolves through time, you discover that it "lives" on a manifold of some dimension k much smaller than the full space's N.
This is, I think, simply a restatement of the fact that neurons exhibit correlations in their activity patterns. That is, not all vectors of activity are "allowed". Still, it has implications for everything from learning to brain-machine interfaces.
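For concreteness, here's a minimal sketch of the linear version of "finding the manifold" (my own toy example, not from the article): simulate N neurons whose activity is driven by only k latent factors plus noise, then let PCA recover the dimensionality. All the names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, T = 100, 3, 5000                 # neurons, latent dimension, time points

latents = rng.standard_normal((T, k))  # k-dimensional "manifold" coordinates
mixing = rng.standard_normal((k, N))   # each neuron is a mix of the latents
activity = latents @ mixing + 0.1 * rng.standard_normal((T, N))  # plus noise

# PCA via SVD of the mean-centered activity matrix
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

print(var_explained[:5])  # the first k components carry nearly all the variance
```

The correlations between neurons are exactly what makes the variance pile up in the first k components; a population with no correlations would spread its variance across all N dimensions.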
A meta comment: in academia, a specific paper's contribution is usually hard for an outsider to decipher, and many papers will sound fairly similar. Popular articles like TFA essentially have to resort to describing a whole branch of research, or even a whole subfield. Authors of an academic paper often don't even recognize their own work from reading the PR summary produced by the university's marketing department (although this one was written by the original authors).
Hardly any work in science is groundbreaking nowadays; most of the time it's incremental advances.
Manifold in the sense of a low-dimensional object that locally looks like flat Euclidean space, embedded in a high-dimensional space. An autoencoder learns a manifold at its bottleneck layer, for example.
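A hedged sketch of that autoencoder point, assuming PyTorch (layer sizes and training settings are made up for illustration): data that really varies along only 2 directions, embedded nonlinearly in 20 dimensions, can be reconstructed through a 2-unit bottleneck.

```python
import torch
from torch import nn

torch.manual_seed(0)
N, k = 20, 2
latents = torch.randn(2048, k)                  # true 2-D coordinates
data = torch.tanh(latents @ torch.randn(k, N))  # nonlinearly embedded in 20-D

model = nn.Sequential(
    nn.Linear(N, 16), nn.Tanh(),
    nn.Linear(16, k),        # the bottleneck: a learned chart of the manifold
    nn.Linear(k, 16), nn.Tanh(),
    nn.Linear(16, N),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)  # reconstruction error
    loss.backward()
    opt.step()

print(loss.item())  # small, despite squeezing through k dimensions
```

After training, the encoder half (`model[:3]` here) maps each 20-D activity vector to its k-dimensional manifold coordinates, which is the same move the neural-data analyses make with recorded populations.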