Highlights
- Overview of sensor fusion in wearable robots like prostheses and exoskeletons.
- Main sensors: electromyography, electroencephalography, and mechanical sensors.
- Emphasizes multimodality, adaptation, and switching between sensor fusion schemes.
- Online evaluation of sensor fusion methods is crucial.

Abstract
Modern wearable robots are not yet intelligent enough to fully satisfy the demands of end-users, as they lack the sensor fusion algorithms needed to provide optimal assistance and react quickly to perturbations or changes in user intentions. Sensor fusion applications such as intention detection have been emphasized as a major challenge for both robotic orthoses and prostheses.

Electromyography (EMG) has already been broadly used in human-machine interaction (HMI) applications. Determining how to decode the information inside EMG signals robustly and accurately is a key problem for which we urgently need a solution. Recently, many EMG pattern recognition tasks have been addressed using deep learning methods. In this paper, we analyze recent papers and present a literature review describing the role that deep learning plays in EMG-based HMI. An overview of typical network structures and processing schemes will be provided. Recent progress in typical tasks such as movement classification, joint angle prediction, and force/torque estimation will be introduced. New issues, including multimodal sensing, inter-subject/inter-session variability, and robustness toward disturbances, will be discussed. We attempt to provide a comprehensive analysis of current research by discussing the advantages, challenges, and opportunities brought by deep learning. We hope that deep learning can aid in eliminating factors that hinder the development of EMG-based HMI systems. Furthermore, possible future directions will be presented to pave the way for future research.
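The abstract mentions movement classification from EMG signals. As a minimal sketch of the classical feature-based pipeline that such reviews typically compare deep learning against, the example below windows synthetic EMG, extracts per-channel RMS and mean-absolute-value (MAV) features, and classifies with a nearest-centroid rule. The synthetic data generator, channel count, and amplitude patterns are all assumptions for illustration, not part of any method described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def emg_features(window):
    """Per-channel RMS and mean absolute value (MAV) for one EMG window.

    window: array of shape (channels, samples).
    """
    rms = np.sqrt(np.mean(window ** 2, axis=1))
    mav = np.mean(np.abs(window), axis=1)
    return np.concatenate([rms, mav])

def synth_emg(channel_amplitude, n_windows, samples=200):
    """Zero-mean Gaussian noise scaled per channel -- a crude stand-in
    (assumption) for surface EMG during a sustained contraction."""
    a = np.asarray(channel_amplitude, dtype=float)
    return a[None, :, None] * rng.standard_normal((n_windows, a.size, samples))

# Two hypothetical movements with distinct channel activation patterns.
amp_flex = [1.0, 0.2, 0.2, 1.0]
amp_ext = [0.2, 1.0, 1.0, 0.2]
windows = np.concatenate([synth_emg(amp_flex, 30), synth_emg(amp_ext, 30)])
X = np.array([emg_features(w) for w in windows])  # (60, 8) feature matrix
y = np.array([0] * 30 + [1] * 30)

# Nearest-centroid classifier: one feature centroid per movement class.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y).mean()
```

A deep-learning counterpart would replace the hand-crafted RMS/MAV features and centroid rule with a network trained end-to-end on raw windows, which is the shift the review surveys.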