4.4 Cross-Subject Performance
Cross-subject performance is of vital importance for practical usage. We further provide a comparison with both the baseline methods and a representative meta-learning (DA/DG) method, MAML [9], which is widely used for cross-subject problems in EEG classification.
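For context, MAML's inner/outer optimization loop can be sketched as follows. This is a generic first-order MAML (FOMAML) illustration on a toy one-parameter regression problem, not the paper's implementation; the model, task construction, learning rates, and iteration count are all assumed for illustration, with each "task" standing in for one subject's calibration data.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(slope, n=20):
    # Hypothetical per-subject task: noisy samples of y = slope * x.
    x = rng.uniform(-1, 1, n)
    y = slope * x + 0.01 * rng.normal(size=n)
    return x, y

def loss(w, x, y):
    # Mean squared error of the one-parameter linear model y_hat = w * x.
    return np.mean((w * x - y) ** 2)

def grad(w, x, y):
    # Analytic gradient: d/dw mean((w*x - y)^2) = 2 * mean(x * (w*x - y)).
    return 2.0 * np.mean(x * (w * x - y))

alpha, beta = 0.1, 0.05   # inner / outer learning rates (assumed values)
slopes = [1.0, 3.0]       # two hypothetical "subjects"
w = 0.0                   # meta-parameter shared across tasks

for _ in range(500):
    meta_g = 0.0
    for s in slopes:
        xs, ys = task_batch(s)                    # support set (calibration data)
        w_adapt = w - alpha * grad(w, xs, ys)     # inner adaptation step
        xq, yq = task_batch(s)                    # query set
        meta_g += grad(w_adapt, xq, yq)           # first-order meta-gradient
    w -= beta * meta_g / len(slopes)              # outer meta-update
```

After meta-training, a single inner gradient step on a new task's support set should already reduce that task's loss, which is the behavior the calibration-data setting in Table 2 relies on.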
Table 2: Comparison of the average cross-subject performance drop on 18 human subjects, where MAML denotes training with MAML. Lower is better for this metric.

                                    Eye fixation −∆(%) ↓            Raw EEG waves −∆(%) ↓
 Calib Data  Method                 B-2    B-4    R-P    R-F        B-2    B-4    R-P    R-F
 ×           Baseline               3.38   2.08   2.14   2.80       7.94   5.38   6.02   5.89
 ✓           Baseline+MAML [9]      2.51   1.43   1.08   1.23       6.86   4.22   4.08   4.79
 ×           DeWave                 2.35   1.25   1.16   1.17       6.24   3.88   3.94   4.28
 ✓           DeWave+MAML [9]        2.08   1.25   1.16   1.17       6.24   3.88   3.94   4.28
Figure 4: Cross-subject performance variance without calibration.
In Table 2, we compare against MAML by reporting the average performance drop ratio between within-subject and cross-subject translation metrics on 18 human subjects, for both eye-fixation-sliced features and raw EEG waves. We compare DeWave with the baseline under both direct testing (without calibration data) and MAML fine-tuning (with calibration data). DeWave shows superior performance in both settings. To further illustrate the performance variance across subjects, we train the model using only the data from subject YAG and evaluate the metrics on all other subjects. The results are shown in Figure 4, where the radar chart indicates that performance is stable across different subjects.
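The −∆(%) metric can be read as the relative decrease from a within-subject score to the corresponding cross-subject score. A minimal sketch, assuming that definition (the exact formula is not stated in this excerpt, so the function and example values are illustrative):

```python
def performance_drop(within: float, cross: float) -> float:
    """Relative decrease (%) from a within-subject score to a cross-subject
    score. Assumes -Delta(%) = (within - cross) / within * 100, so lower
    values mean better cross-subject generalization."""
    return (within - cross) / within * 100.0

# Example with illustrative scores: a within-subject metric of 40.0
# falling to 38.65 cross-subject is a drop of about 3.375%.
drop = performance_drop(40.0, 38.65)
```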