The signals do degrade with time. They appear to be using Utah arrays, which have been around for a while, so the old longevity problems likely still exist. A second problem is that even healthy signals are nonstationary: they drift from session to session and within a session. That's presumably why they're running ~15-minute calibration blocks before each ~40-minute activity session.
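To make the nonstationarity point concrete, here's a toy simulation (every number here is invented, nothing is from their data): fit a linear decoder once at "calibration time", freeze it, and watch decode error climb over the session as the underlying neural-to-intent mapping random-walks away from it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 96                      # Utah-array-scale channel count (assumption)

w_true = rng.normal(size=n_channels) # "true" mapping at calibration time
w_decoder = w_true.copy()            # decoder fit during the ~15m calibration, then frozen

for minute in range(0, 41, 10):
    x = rng.normal(size=(1000, n_channels))           # stand-in neural features
    err = np.mean((x @ w_decoder - x @ w_true) ** 2)  # decode error vs. true intent
    print(f"t={minute:2d}m  mse={err:.3f}")
    w_true += 0.02 * rng.normal(size=n_channels)      # slow drift over the next stretch
```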
I think that for these things to really work, there ultimately has to be a shift in the HCI paradigm. Specifically, the interaction context needs to be structured so that the subject's behavior is highly predictable. Once that's the case, the system knows what the user is trying to do at any given moment, so it can continuously retrain the decoder online in the background without the subject ever being aware of it.
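A minimal sketch of what I mean, assuming a linear decoder and made-up drift scales (this is not their method, just the general idea): if the UI constrains the interaction so the intended output is known at each moment, every timestep yields a free training label, and a small normalized-LMS step per sample keeps the decoder locked onto the drifting mapping with no explicit calibration block at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 96
w_true = rng.normal(size=n_channels)       # drifting neural-to-intent mapping
w_decoder = w_true.copy()
lr = 0.1                                   # small NLMS step, imperceptible in use

for t in range(40 * 60 * 10):              # ~40 minutes at 10 updates/sec (assumption)
    w_true += 2e-5 * rng.normal(size=n_channels)   # same slow drift as before
    x = rng.normal(size=n_channels)                # neural features this timestep
    y = w_true @ x                                 # intent, known from the UI context
    # background retraining: one normalized-LMS step on the free label
    w_decoder -= lr * (w_decoder @ x - y) * x / (x @ x + 1e-8)

print("residual error:", np.mean((w_decoder - w_true) ** 2))
```

In this toy version the residual error stays tiny for the whole session instead of growing, which is the whole appeal: the recalibration never stops, so it never has to be a separate chore.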
That said, they got it to work! Always nice to see hard projects come to fruition!