Untethered microrobots have attracted extensive attention due to their potential for biomedical applications and micromanipulation at small scales. Soft microrobots are of particular research interest because their high deformability enables not only multiple locomotion mechanisms but also minimal invasiveness to the environment. However, existing microrobots remain limited in their ability to locomote and cross obstacles in unstructured environments compared with conventional legged robots. Nature provides rich inspiration for developing miniature robots. Here, we propose a bionic quadruped soft thin-film microrobot with a nonmagnetic soft body and 4 magnetic flexible legs. The quadruped soft microrobot achieves multiple controllable locomotion modes under an external magnetic field. Experiments demonstrated the robot's excellent obstacle-crossing ability by walking on a stepped surface and moving along the bottom of a stomach model with gullies. In particular, by controlling the cone angle of the external conical magnetic field, gripping, transportation, and release of microbeads by the microrobot were demonstrated. In the future, the quadruped microrobot, with its excellent obstacle-crossing and gripping capabilities, will be relevant for biomedical applications and micromanipulation.
Chenyang Huang, Zhengyu Lai, Xinyu Wu, Tiantian Xu. "Multimodal Locomotion and Cargo Transportation of Magnetically Actuated Quadruped Soft Microrobots." Cyborg and Bionic Systems, vol. 2022, article 0004, 2022. doi:10.34133/cbsystems.0004. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10010670/pdf/
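The gripping mode above is driven by the cone angle of a rotating conical magnetic field. A minimal sketch of such a field vector follows; the magnitude B0 = 10 mT, the 30° cone angle, and the 1 Hz rotation rate are illustrative assumptions, not values from the paper:

```python
import numpy as np

def conical_field(t, B0=10e-3, theta=np.deg2rad(30), freq=1.0):
    """Magnetic flux density (T) of a field rotating on a cone.

    The field vector sweeps a cone of half-angle `theta` about the
    z axis at `freq` revolutions per second; shrinking `theta`
    tightens the cone, the kind of control used to close the legs
    around a cargo bead.
    """
    phi = 2 * np.pi * freq * t
    return B0 * np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])

# The magnitude stays at B0 for any t; only the direction precesses.
B = conical_field(0.25)
print(np.linalg.norm(B))
```

Sweeping `theta` toward zero while the field rotates would collapse the cone, which is the sense in which cone-angle control maps to a grip-and-release cycle.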
Human action representation is derived from the description of human shape and motion. The traditional unsupervised 3-dimensional (3D) human action representation learning method uses a recurrent neural network (RNN)-based autoencoder to reconstruct the input pose sequence and then takes the midlevel feature of the autoencoder as the representation. Although an RNN can implicitly learn a certain amount of motion information, the extracted representation mainly describes human shape and is insufficient to describe motion. Therefore, we first present a handcrafted motion feature called pose flow to guide the reconstruction of the autoencoder, whose midlevel feature is expected to describe motion information. Performance is limited, however, because we observe that actions can be distinctive in either motion direction or motion norm. For example, we can distinguish "sitting down" from "standing up" by motion direction, yet distinguish "running" from "jogging" by motion norm. In such cases, it is difficult to learn distinctive features from pose flow, where direction and norm are mixed. To this end, we present an explicit pose decoupled flow network (PDF-E) that learns from direction and norm in a multi-task learning framework, where 1 encoder is used to generate the representation and 2 decoders are used to generate direction and norm, respectively. Further, we use reconstruction of the input pose sequence as an additional constraint and present a generalized PDF network (PDF-G) to learn both motion and shape information, which achieves state-of-the-art performance on large-scale and challenging 3D action recognition datasets, including the NTU RGB+D 60 and NTU RGB+D 120 datasets.
Mengyuan Liu, Fanyang Meng, Yongsheng Liang. "Generalized Pose Decoupled Network for Unsupervised 3D Skeleton Sequence-Based Action Representation Learning." Cyborg and Bionic Systems, vol. 2022, article 0002, 2022. doi:10.34133/cbsystems.0002. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10076048/pdf/
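The decoupling that PDF-E learns can be made concrete with a small sketch: pose flow as the frame-to-frame joint displacement, split into a unit direction and a scalar norm. The array shapes and toy data below are illustrative assumptions; the paper's exact feature definition may differ in detail:

```python
import numpy as np

def pose_flow(poses):
    """Frame-to-frame joint displacement for a skeleton sequence.

    poses: array of shape (T, J, 3) -- T frames, J joints, 3D coords.
    Returns (direction, norm): unit displacement vectors (T-1, J, 3)
    and their magnitudes (T-1, J) -- the two decoupled targets.
    """
    flow = np.diff(poses, axis=0)                         # raw pose flow
    norm = np.linalg.norm(flow, axis=-1)                  # motion norm
    direction = flow / np.maximum(norm[..., None], 1e-8)  # motion direction
    return direction, norm

# Toy sequence: 2 joints moving in the same direction at different speeds,
# like "jogging" vs "running" -- identical direction, different norm.
poses = np.cumsum(np.ones((5, 2, 3)) * [[1.0], [2.0]], axis=0)
d, n = pose_flow(poses)
```

In the toy data the two joints produce identical direction targets but norms differing by a factor of 2, which is exactly the distinction that gets blurred when direction and norm stay mixed in raw pose flow.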
Liming Li, Vamiq M. Mustahsan, Guangyu He, F. Tavernier, Gurtej Singh, B. Boyce, F. Khan, I. Kao
Intraoperative confirmation of negative resection margins is an essential component of soft tissue sarcoma surgery. Frozen section examination of samples from the resection bed after excision of sarcomas is the gold standard for intraoperative assessment of margin status. However, histologic examination of these samples takes time, and the technique does not provide real-time diagnosis in the operating room (OR), which delays completion of the operation. This paper presents the study and development of a Raman spectroscopy sensing technology for real-time detection and classification of tumor tissue after resection with negative sarcoma margins. We acquired Raman spectra from samples of sarcoma and surrounding benign muscle, fat, and dermis during surgery and developed (i) a quantitative method (QM) and (ii) a machine learning method (MLM) to assess the spectral patterns and determine whether they could accurately identify these tissue types compared with findings in adjacent H&E-stained frozen sections. High classification accuracy (>85%) was achieved with both methods, indicating that these four tissue types can be identified using the analytical methodology. A hand-held Raman probe could be employed to further develop the methodology, obtaining spectra in the OR to provide real-time in vivo assessment of sarcoma resection margin status.
Liming Li, Vamiq M. Mustahsan, Guangyu He, F. Tavernier, Gurtej Singh, B. Boyce, F. Khan, I. Kao. "Classification of Soft Tissue Sarcoma Specimens with Raman Spectroscopy as Smart Sensing Technology." Cyborg and Bionic Systems, published 2021-12-06. doi:10.34133/2021/9816913
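As an illustration of spectrum-based tissue classification, the toy sketch below area-normalizes spectra and assigns each one to the nearest class centroid. This is a hypothetical stand-in, not the paper's QM or MLM; the 4-bin "spectra" and class labels are invented for the example:

```python
import numpy as np

def classify_spectrum(spectrum, centroids):
    """Assign a Raman spectrum to the nearest class centroid.

    spectrum: 1D intensity array; centroids: dict label -> 1D array.
    Spectra are area-normalized first so overall intensity (which
    varies with acquisition conditions) does not dominate the match.
    """
    def normalize(s):
        s = np.asarray(s, dtype=float)
        return s / s.sum()
    s = normalize(spectrum)
    return min(centroids,
               key=lambda k: np.linalg.norm(s - normalize(centroids[k])))

# Toy centroids with characteristic peaks at different wavenumber bins
centroids = {"sarcoma": np.array([1, 9, 1, 1]),
             "muscle":  np.array([1, 1, 9, 1])}
print(classify_spectrum([2, 10, 2, 2], centroids))  # -> sarcoma
```

A real pipeline would operate on hundreds of wavenumber bins and, for the MLM side, replace nearest-centroid matching with a trained classifier.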
Deep-learning-based gesture recognition using surface electromyography (sEMG) plays an increasingly important role in human-computer interaction. To achieve high accuracy in multistate muscle action recognition while keeping the trained model small enough for embedded chips with limited storage, this paper presents a feature model construction and optimization method based on a multichannel sEMG amplification unit. The feature model is built from multidimensional sequential sEMG images, combining a convolutional neural network (CNN) and a long short-term memory (LSTM) network to solve the problem of multistate sEMG signal recognition. The experimental results show that, under the same network structure, feature data processed with the fast Fourier transform (FFT) and root mean square (RMS) yield a good recognition rate: the recognition accuracy for complex gestures is 91.40%, with a model size of 1 MB. The model can thus control a prosthetic hand accurately while remaining small.
Dianchun Bai, Tie Liu, Xinghua Han, Hongyu Yi. "Application Research on Optimization Algorithm of sEMG Gesture Recognition Based on Light CNN+LSTM Model." Cyborg and Bionic Systems, published 2021-11-08. doi:10.34133/2021/9794610
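The FFT and RMS feature processing mentioned in the abstract can be sketched for a single sEMG window as follows; the sampling rate, window length, and number of retained frequency bins are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def semg_features(window, n_freq=32):
    """RMS + FFT magnitude features for one sEMG window (one channel).

    window: 1D array of raw sEMG samples.
    Returns a feature vector: the RMS value followed by the first
    n_freq one-sided FFT magnitudes (DC bin removed).
    """
    window = np.asarray(window, dtype=float)
    rms = np.sqrt(np.mean(window ** 2))                    # amplitude feature
    spectrum = np.abs(np.fft.rfft(window - window.mean())) # frequency features
    return np.concatenate([[rms], spectrum[1:n_freq + 1]])

# 200 ms window at 1 kHz of a synthetic 50 Hz burst
t = np.arange(0, 0.2, 1 / 1000)
feat = semg_features(np.sin(2 * np.pi * 50 * t))
```

Stacking such vectors across channels and time windows yields the sequential "sEMG image" that a CNN+LSTM model could consume.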
DNA nanotechnology takes the DNA molecule out of its biological context to build nanostructures that have entered the realm of robots, adding a dimension to cyborg and bionic systems. Exploiting the spring-like properties of the DNA molecule, the assembled nanorobots can be tuned by deliberate design to enable restricted, mechanical motion. DNA nanorobots can be programmed with a combination of unique features, such as tissue penetration, site targeting, stimuli responsiveness, and cargo loading, which makes them ideal candidates as biomedical robots for precision medicine. Even though DNA nanorobots are capable of detecting target molecules and determining cell fate via a variety of DNA-based interactions both in vitro and in vivo, major obstacles remain on the path to real-world applications. Control over a nanorobot's stability, cargo loading and release, analyte binding, and dynamic switching, both independently and simultaneously, represents the most prominent challenge that biomedical DNA nanorobots currently face. Meanwhile, scaling up DNA nanorobot production at low cost under CMC and GMP standards presents further challenges for clinical translation. Nevertheless, DNA nanorobots will undoubtedly become a powerful toolbox for improving human health once these remaining challenges are addressed with scalable and cost-efficient methods.
Yong Hu. "Self-Assembly of DNA Molecules: Towards DNA Nanorobots for Biomedical Applications." Cyborg and Bionic Systems, published 2021-10-19. doi:10.34133/2021/9807520
Hao Wang, Jiacheng Kan, Xin Zhang, C. Gu, Zhan Yang
Swimming micro-nanorobots have attracted researchers' interest for potential medical applications such as targeted therapy, biosensing, and drug delivery. To date, swimming micro-nanorobots have mainly been studied experimentally in pure water or H2O2 solution. This paper presents a micro-nanorobot that uses glucose, which is present in human body fluids, as its driving fuel. Based on the catalytic properties of the anode and cathode materials of the glucose fuel cell, platinum (Pt) and carbon nanotubes (CNTs) were selected as the anode and cathode materials, respectively, for the micro-nanorobot. The design innovatively combines template electrochemical deposition and chemical vapor deposition to fabricate the Pt/CNT micro-nanorobot structure. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) were employed to observe the morphology of the sample, and its elemental composition was analyzed by energy-dispersive X-ray spectroscopy (EDX). Through extensive experiments in a glucose solution, and according to Stokes' law of viscous drag and Newton's second law, we calculated the driving force of the fabricated micro-nanorobot. We conclude that the Pt/CNT micro-nanorobot structure meets the required characteristics of both biocompatibility and motion.
Hao Wang, Jiacheng Kan, Xin Zhang, C. Gu, Zhan Yang. "Pt/CNT Micro-Nanorobots Driven by Glucose Catalytic Decomposition." Cyborg and Bionic Systems, published 2021-08-06. doi:10.34133/2021/9876064
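The force estimate follows from Stokes' law together with Newton's second law: at constant (terminal) speed the net force vanishes, so the driving force equals the viscous drag F = 6πηrv. The sketch below works one such number with illustrative values (radius, speed, water viscosity), not the paper's measurements:

```python
import math

def stokes_drag(radius_m, velocity_m_s, viscosity_pa_s=1.0e-3):
    """Viscous drag on a sphere at low Reynolds number: F = 6*pi*eta*r*v."""
    return 6 * math.pi * viscosity_pa_s * radius_m * velocity_m_s

# At constant speed the net force is zero (Newton's second law), so the
# driving force equals the drag. Illustrative numbers: a 1 um-radius
# robot moving at 10 um/s in water (eta ~ 1e-3 Pa*s).
force_N = stokes_drag(1e-6, 10e-6)
print(f"{force_N:.3e} N")  # on the order of 1e-13 N
```

Drag at this scale is linear in both radius and speed, so doubling either doubles the required propulsion force.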
In the past few years, we have gained a better understanding of the information processing mechanisms in the human brain, which has led to advances in artificial intelligence and humanoid robots. However, among the various sensory systems, the somatosensory system presents the greatest challenge to study. Here, we provide a comprehensive review of the human somatosensory system and its corresponding applications in artificial systems. Because the human hand is unique in integrating receptor and actuator functions, we focus on the role of the somatosensory system in object recognition and action guidance. First, we summarize the low-threshold mechanoreceptors in the human skin and the somatotopic organization principles along the ascending pathway, which are fundamental to artificial skin. Second, we discuss the high-level brain areas that interact with each other during haptic object recognition. Based on this closed-loop route, we use prosthetic upper limbs as an example to highlight the importance of somatosensory information. Finally, we present prospective research directions for human haptic perception that could guide the development of artificial somatosensory systems.
Luyao Wang, Lihua Ma, Jiajia Yang, Jinglong Wu. "Human Somatosensory Processing and Artificial Somatosensation." Cyborg and Bionic Systems, published 2021-07-02. doi:10.34133/2021/9843259
The purpose of this study was to examine whether interactive video game (IVG) training is an effective way to improve postural control outcomes and decrease the risk of falls. A convenience sample of 12 prefrail older adults was recruited and divided into two groups: the intervention group performed IVG training for 40 minutes, twice per week, for a total of 16 sessions, while the control group received no intervention and continued their usual activity. Primary outcome measures were centre-of-pressure (COP) mean velocity, sway area, and sway path. Secondary outcomes were the Berg Balance Scale, Timed Up and Go (TUG), Falls Efficacy Scale International (FES-I), and Activities-Specific Balance Confidence (ABC) scale. Assessments were conducted preintervention (week 0) and postintervention (week 8). The intervention group showed significant improvement in mean velocity, sway area, Berg Balance Scale, and TUG (p < 0.01) compared with the control group. However, no significant improvement was observed for sway path (p = 0.35), FES-I (p = 0.383), or ABC (p = 0.283). This study showed that IVG training led to significant improvements in postural control but not in risk of falls.
Hammad Alhasan, P. Wheeler, D. Fong. "Application of Interactive Video Games as Rehabilitation Tools to Improve Postural Control and Risk of Falls in Prefrail Older Adults." Cyborg and Bionic Systems, published 2021-06-25. doi:10.34133/2021/9841342
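Two of the posturographic outcomes above, sway path and COP mean velocity, follow directly from the sampled COP trajectory. A minimal sketch is below; the sampling rate and units are illustrative assumptions, and sway area (typically a confidence-ellipse fit) is omitted for brevity:

```python
import numpy as np

def sway_metrics(cop_xy, fs=100):
    """Sway path and mean velocity from a centre-of-pressure trace.

    cop_xy: array of shape (N, 2) with mediolateral/anteroposterior
    COP coordinates in cm, sampled at fs Hz. Sway path is the total
    point-to-point excursion; mean velocity is path over duration.
    """
    steps = np.diff(cop_xy, axis=0)
    path = np.sum(np.linalg.norm(steps, axis=1))  # cm
    duration = (len(cop_xy) - 1) / fs             # s
    return path, path / duration                  # cm, cm/s

# A COP trace moving 0.1 cm per sample along one axis for 1 s
trace = np.column_stack([np.arange(101) * 0.1, np.zeros(101)])
path, mean_vel = sway_metrics(trace)
print(path, mean_vel)  # total path ~10 cm, mean velocity ~10 cm/s
```

Since mean velocity is sway path divided by trial duration, the study's split result (improved mean velocity, unchanged sway path) can only arise from the two measures being compared across different statistical contrasts, not from the same fixed-duration trace.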
Lower-limb robotic prostheses can assist amputees in daily activities by restoring the biomechanical functions of the missing limb(s). To set proper control strategies and develop the corresponding controller for a robotic prosthesis, the user's intent must be acquired in time, which remains a major challenge and has attracted intensive attention. This work surveys locomotion intent recognition for robotic prosthesis users based on noninvasive sensing methods from the recognition task perspective (locomotion mode recognition, gait event detection, and continuous gait phase estimation) and reviews the state-of-the-art intent recognition techniques within the scope of lower-limb prostheses. The current research status, including recognition approaches, progress, challenges, and future prospects in human intent recognition, is reviewed. For the recognition approach in particular, the paper analyzes recent studies and discusses the role of each element in locomotion intent recognition. This work summarizes the existing research results and open problems and contributes a general framework for intent recognition in lower-limb prostheses.
Dongfang Xu, Qining Wang. "Noninvasive Human-Prosthesis Interfaces for Locomotion Intent Recognition: A Review." Cyborg and Bionic Systems, published 2021-06-04. doi:10.34133/2021/9863761
Origami, the traditional Japanese art of paper folding, is an example of superior handwork produced by human hands. Achieving such extreme dexterity is one of the goals of robotic technology. In this work, we developed a new general-purpose robot system with sufficient capabilities for performing origami. We decomposed the complex folding motions into simple primitives and generated the overall motion as a combination of these primitives. In addition, to measure the paper deformation in real time, we built an estimator using a physical simulator and a depth camera. As a result, our experimental system achieved consecutive valley folds and a squash fold.
A. Namiki, Shuichi Yokosawa. "Origami Folding by Multifingered Hands with Motion Primitives." Cyborg and Bionic Systems, published 2021-05-30. doi:10.34133/2021/9851834