Gang Liu et al.

Objective. Modeling the brain as a white box is vital for investigating the brain. However, the physical properties of the human brain remain unclear, so BCI algorithms using EEG signals generally take a data-driven approach and produce black- or gray-box models. This paper presents the first EEG-based BCI algorithm (EEG-BCI using Gang neurons, EEGG) that decomposes the brain into simple components with physical meaning and integrates recognition and analysis of brain activity. Approach. Independent and interactive components of neurons or brain regions can fully describe the brain. This paper constructed a relationship frame based on these independent and interactive components for intention recognition and analysis, using a novel dendrite module of Gang neurons. A total of 4,906 EEG trials of left- and right-hand motor imagery (MI) from 26 subjects were obtained from GigaDB. First, this paper explored EEGG's classification performance via cross-subject accuracy. Second, this paper transformed the trained EEGG model into a relation spectrum expressing the independent and interactive components of brain regions. The relation spectrum was then verified against the known ERD/ERS phenomenon. Finally, this paper explored further BCI-based analysis of the brain that was previously unreachable. Main results. (1) EEGG was more robust than typical "CSP+" algorithms on poor-quality data. (2) The relation spectrum showed the known ERD/ERS phenomenon. (3) Interestingly, EEGG showed that interactive components between brain regions suppressed the effect of ERD/ERS on classification, which means that generating fine hand intention requires more centralized activation in the brain. Significance. EEGG decomposed the biological EEG-intention system of this paper into a relation spectrum inheriting the Taylor series (in analogy with the data-driven but human-readable Fourier transform and frequency spectrum), which offers a novel frame for analysis of the brain.
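The relation spectrum can be read as a Taylor-series-like expansion of the learned EEG-to-intention mapping. The following is only a hedged illustration of that idea, not the exact Gang-neuron dendrite formulation from the paper: taking x_i as an assumed feature of brain region i, the first- and second-order terms separate the independent and interactive components:

    f(\mathbf{x}) \approx w_0 + \sum_i w_i x_i + \sum_{i<j} w_{ij}\, x_i x_j + \cdots

Under this reading, the coefficients w_i quantify the independent contribution of each region and the cross-term coefficients w_{ij} quantify pairwise interactions between regions; plotted together they form a human-readable spectrum, in analogy with a frequency spectrum.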

Gang Liu et al.

Background: At present, gesture recognition using sEMG signals either requires vast amounts of training data or is limited to a few hand movements. This paper presents a novel dynamic energy model that can decode continuous hand actions with force information from small amounts of sEMG training data. Method: When the forearm muscles are activated, the corresponding fingers move or tend to move (i.e., exert force). Moving fingers store kinetic energy, and fingers that tend to move store potential energy. The kinetic and potential energy of the fingers are dynamically allocated by the adaptive coupling mechanism of the five fingers in actual motion, and at any given moment the sum of the two energies is constant. We regarded configurations with the same direction of acceleration for each finger, even if the movements differ, as the same energy mode, and divided hand movements into ten energy modes. Independent component analysis and machine learning methods were used to model the association between sEMG signals and energy modes, so that the hand action, including speed and force, is determined adaptively. This theory imitates the self-adapting mechanism of actual tasks. Ten healthy subjects were recruited, and three experiments mimicking activities of daily living were designed to evaluate the interface: (1) decoding untrained configurations, (2) decoding the amount of single-finger energy, and (3) real-time control. Results: (1) Participants completed the untrained hand movements (100/100, p < 0.0001). (2) In the test of pricking a balloon with a needle tip, performance was significantly better than chance (779/1000, p < 0.0001). (3) The test of punching a hole in the plasticine on a balloon had a success rate of over 95% (97.67 ± 5.04%, p < 0.01). Conclusion: The model can achieve continuous hand actions with force information by training on small amounts of sEMG data, which reduces training complexity.
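The decoding stage described above (independent component analysis followed by a machine learning classifier that maps sEMG activity to one of ten energy modes) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the channel count, window length, RMS features, and SVM classifier are all placeholders chosen for the sketch.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

# Placeholder data: 8-channel sEMG sampled continuously, with one
# energy-mode label (0-9) per 200-sample window.
emg = np.random.randn(20000, 8)
labels = np.random.randint(0, 10, size=100)

# 1) Unmix the sEMG channels into independent source components.
ica = FastICA(n_components=8, random_state=0)
sources = ica.fit_transform(emg)                     # (n_samples, 8)

# 2) Window the sources and take RMS amplitude as a simple per-window feature.
win = 200
n_win = sources.shape[0] // win
feats = np.sqrt((sources[: n_win * win]
                 .reshape(n_win, win, -1) ** 2).mean(axis=1))   # (100, 8)

# 3) Train a standard classifier to map window features to an energy mode.
clf = SVC(kernel="rbf").fit(feats, labels)

# At run time, the predicted energy mode fixes the direction of acceleration
# of each finger; kinetic vs. potential energy is then allocated under the
# constant-sum constraint to recover speed and force.
predicted_mode = clf.predict(feats[:1])

The specific feature and classifier choices here only stand in for the "machine learning methods" named in the abstract; the key point of the pipeline is that the classifier outputs an energy mode rather than a fixed gesture, which is what allows untrained hand configurations to be decoded.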