Neural Computation

May 2019, Vol. 31, No. 5, Pages 919-942
(doi: 10.1162/neco_a_01183)
© 2019 Massachusetts Institute of Technology
Semisupervised Deep Stacking Network with Adaptive Learning Rate Strategy for Motor Imagery EEG Recognition
Abstract
Practical applications based on motor imagery electroencephalogram (EEG) data are limited by the waste of unlabeled samples in supervised learning and by the excessive time consumed in pretraining. A semisupervised deep stacking network with an adaptive learning rate strategy (SADSN) is proposed to address the sample loss caused by purely supervised learning of EEG data and by manual feature extraction. The SADSN incorporates an adaptive learning rate into the contrastive divergence (CD) algorithm to accelerate its convergence. Prior knowledge is introduced into the intermediary layer of the deep stacking network, and a restricted Boltzmann machine is trained by a semisupervised method in which the adjustment range of the learning-rate coefficient is determined by performance analysis. Experiments on several EEG data sets are carried out to evaluate the proposed method. The results show that the SADSN achieves higher recognition accuracy with a significantly faster convergence rate and successfully classifies motor imagery.
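The abstract does not spell out the adaptive learning rate rule used inside CD, so the following is only a minimal sketch of the general idea: CD-1 training of a Bernoulli RBM in which each weight's learning rate grows when consecutive gradient estimates agree in sign and shrinks when they disagree (a Rprop-style heuristic assumed here for illustration; the function names, the bias-free RBM, and the `up`/`down` factors are all assumptions, not the paper's method).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_grad(W, v0):
    """One CD-1 gradient estimate for a Bernoulli RBM (biases omitted
    for brevity). v0 is a batch of binary visible vectors."""
    h0_prob = sigmoid(v0 @ W)                    # hidden probabilities given data
    h0 = (rng.random(h0_prob.shape) < h0_prob)   # sample binary hidden states
    v1_prob = sigmoid(h0 @ W.T)                  # reconstruct visible units
    h1_prob = sigmoid(v1_prob @ W)               # hidden given reconstruction
    # positive phase minus negative phase, averaged over the batch
    return (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]

def train_rbm_adaptive(X, n_hidden=4, epochs=100, lr=0.1, up=1.1, down=0.5):
    """Full-batch CD-1 with a per-weight adaptive learning rate:
    the rate is multiplied by `up` when the gradient keeps its sign
    across iterations, and by `down` when it flips (assumed scheme)."""
    n_visible = X.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    rates = np.full_like(W, lr)
    prev_grad = np.zeros_like(W)
    for _ in range(epochs):
        grad = cd1_grad(W, X)
        agree = np.sign(grad) == np.sign(prev_grad)
        rates = np.clip(np.where(agree, rates * up, rates * down), 1e-4, 1.0)
        W += rates * grad                        # gradient ascent on log-likelihood proxy
        prev_grad = grad
    return W
```

Per-weight rate adaptation of this kind speeds up convergence on flat directions while damping oscillation, which is the stated motivation for accelerating CD in the paper; the exact coefficient-adjustment rule there is instead chosen by performance analysis.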