Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551113
Title: River Flow Path Control With Reinforcement Learning
Dongqi Liu, Yutaka Naito, Chen Zhang, S. Muramatsu, H. Yasuda, Kiyoshi Hayasaka, Y. Otake
In this study, a cyber-physical system (CPS) for river flow path control using reinforcement learning is proposed. Recently, river flooding caused by heavy rains has occurred frequently, resulting in serious economic losses and casualties. One cause of river flooding is meandering due to river bed growth and flow path change. As a means of avoiding meandering, river groynes can be used to regularize the flow. However, the mechanism of flow path growth and its optimal control remain unclear. Therefore, this study proposes a dynamic flow path control system that uses a data-driven approach, namely reinforcement learning, to address the problem as a whole. The proposed system controls meandering by adaptively deforming and moving the groynes, with a reward based on the health of the flow path. The effectiveness of the proposed flow path control system is verified through a simulation of a river model.
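The abstract does not specify the learning algorithm in detail; as a hypothetical illustration of the tabular reinforcement-learning machinery such a controller could build on, here is a minimal ε-greedy Q-learning loop. The toy one-dimensional "channel" environment, state/action sets, and reward are placeholders, not the authors' river model:

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=200,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Generic tabular Q-learning; env_step(s, a) -> (next_state, reward, done)."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = (rng.randrange(n_actions) if rng.random() < epsilon
                 else max(range(n_actions), key=lambda i: Q[s][i]))
            s2, r, done = env_step(s, a)
            # standard TD update toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Toy 1-D "channel": nudge the flow state toward a healthy target state (state 4).
def toy_env(s, a):
    s2 = max(0, min(4, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == 4 else -0.1), s2 == 4

Q = q_learning(toy_env, n_states=5, n_actions=2)
```

After training, the greedy policy at every non-terminal state prefers the action that moves toward the rewarded state, which is the behavior a reward on "flow path health" is meant to induce.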
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551171
Title: Information-Bottleneck-Based Behavior Representation Learning for Multi-Agent Reinforcement Learning
Yue Jin, Shuangqing Wei, Jian Yuan, Xudong Zhang
In multi-agent deep reinforcement learning, extracting sufficient yet compact information about other agents is critical to attaining efficient convergence and scalability. In canonical frameworks, such information is often distilled implicitly and uninterpretably, or explicitly with cost functions that fail to reflect the relationship between information compression and utility in the representation. In this paper, we present Information-Bottleneck-based Other agents’ behavior Representation learning for Multi-agent reinforcement learning (IBORM), which explicitly seeks a low-dimensional encoder that establishes a compact and informative representation of other agents’ behaviors. IBORM leverages the information bottleneck principle to compress observation information while retaining sufficient information about other agents’ behaviors for cooperative decision-making. Empirical results demonstrate that IBORM delivers the fastest convergence and the best-performing learned policies, compared with implicit behavior representation learning and explicit behavior representation learning that does not account for information compression and utility.
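The information bottleneck principle that IBORM builds on can be stated compactly. In standard notation (not necessarily the paper's exact symbols), an encoder $p(z \mid x)$ is sought that solves:

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

where $X$ is the raw observation, $Z$ the compressed representation, $Y$ the relevant target (here, other agents' behaviors), $I(\cdot;\cdot)$ mutual information, and $\beta > 0$ the trade-off between compression and retained utility.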
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551190
Title: A Classical Machine Learning Approach for EMG-Based Lower Limb Intention Detection for Human-Robot Interaction Systems
Hasti Khiabani, M. Ahmadi
Surface electromyography (sEMG)-based lower-limb intention-detection systems can intelligently augment human-robot interaction (HRI) systems by detecting a subject’s walking direction prior to or during walking. Ten Subject-Exclusive (Subj-Ex) and Generalized (Gen) classical machine learning (C-ML) models are employed to detect direction intentions and evaluate inter-subject robustness in one knee/foot-gesture scenario and three walking-related scenarios. In each scenario, sEMG signals are collected from eight muscles of nine subjects during at least nine distinct gestures/activities. Linear Discriminant Analysis (LDA) and Random Forest (RF) classifiers, applied to the Time-Domain (TD) feature set (of the four input sets), provided the best accuracy. The Subj-Ex approach achieves the highest prediction accuracy, with the Gen approach occasionally competitive. In the knee/foot-gesture scenario, LDA reaches an accuracy of 91.67%, indicating its applicability to robot-assisted walking, prosthetics, and orthotics. The overall prediction accuracy in the walking-related scenarios, though not as high as in the knee/foot-gesture recognition scenario, reaches up to 75%.
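Time-Domain feature sets for sEMG classification commonly include the Hudgins-style features; the paper's exact feature set is not listed here, so the sketch below shows four standard TD features over a single sEMG window, with synthetic sample values:

```python
def td_features(window, zc_thresh=0.01):
    """Four standard sEMG time-domain features over one window:
    mean absolute value, waveform length, zero crossings, slope sign changes."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n
    # waveform length: cumulative absolute first difference
    wl = sum(abs(window[i + 1] - window[i]) for i in range(n - 1))
    # zero crossings, with a small threshold to reject noise
    zc = sum(1 for i in range(n - 1)
             if window[i] * window[i + 1] < 0
             and abs(window[i] - window[i + 1]) >= zc_thresh)
    # slope sign changes: local extrema count
    ssc = sum(1 for i in range(1, n - 1)
              if (window[i] - window[i - 1]) * (window[i] - window[i + 1]) > 0)
    return [mav, wl, zc, ssc]

feats = td_features([0.0, 0.2, -0.1, 0.3, -0.2, 0.1])
```

A feature vector like this, computed per channel over the eight muscles, is the kind of input an LDA or RF classifier would consume.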
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551188
Title: On Future Development of Autonomous Systems: A Report of the Plenary Panel at IEEE ICAS’21
Yingxu Wang, I. Pitas, K. Plataniotis, C. Regazzoni, Brian M. Sadler, A. Roy-Chowdhury, Ming Hou, Arash Mohammadi, L. Marcenaro, Farokh Atashzar, S. alZahir
Autonomous Systems (AS) are perceived as the most advanced intelligent systems, evolved from those of reflexive, imperative, and adaptive intelligence. A plenary panel on “Future Development of Autonomous Systems” was organized at the inaugural IEEE ICAS’21. This paper reports the panel discussions on the state of the art and paradigms of AS, basic research on the theoretical foundations and mathematical means of AS, and challenges to the future development of AS. As an emerging and increasingly in-demand field, AS provide an unprecedented approach to contemporary intelligent industries, including deep machine learning, highly intelligent robotics, cognitive computers, general AI technologies, and industrial applications enabled by transdisciplinary advances in intelligence science, system science, brain science, cognitive science, robotics, computational intelligence, and intelligent mathematics.
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551141
Title: Real-Time Learning for THz Radar Mapping and UAV Control
Anna Guerra, Francesco Guidi, D. Dardari, P. Djurić
In this paper, we consider a joint detection, mapping, and navigation problem for an unmanned aerial vehicle (UAV) with real-time learning capabilities. We formulate this problem as a Markov decision process (MDP), where the UAV is equipped with a THz radar capable of electronically scanning the environment with high accuracy and inferring its probabilistic occupancy map. The navigation task amounts to maximizing the desired mapping accuracy and coverage and to deciding whether targets (e.g., people carrying radio devices) are present. Through numerical results, we analyze the robustness of the considered Q-learning algorithm and discuss practical applications.
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551148
Title: Thermal Face Image Generator
X. Cao, K. Lai, S. Yanushkevich, M. Smith
This work addresses two image-to-image translation tasks. The first is to convert a visible face image into a thermal face image (V2T); the second is to convert a thermal face image into another thermal face image with a given target temperature (T2T). We propose conditional generative adversarial networks to solve both tasks. We train our models on the Carl and SpeakingFaces datasets and use the Structural Similarity Index Measure (SSIM) to evaluate performance. The SSIM of the generated thermal images reaches 0.82 and 0.84 for the V2T and T2T tasks, respectively.
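SSIM, the metric reported above, compares local means, variances, and covariance of two images. A minimal single-window (global) SSIM for small grayscale patches is sketched below, using the standard C1/C2 stabilizers for an 8-bit dynamic range; production implementations (e.g., scikit-image) additionally average SSIM over sliding local windows:

```python
def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two equally sized, flattened grayscale patches."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # unbiased variance and covariance estimates
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    # stabilizers: C1 = (K1*L)^2, C2 = (K2*L)^2 with K1=0.01, K2=0.03
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

identical = ssim_global([10.0, 50.0, 90.0], [10.0, 50.0, 90.0])   # -> 1.0
reversed_ = ssim_global([10.0, 50.0, 90.0], [90.0, 50.0, 10.0])   # < 1.0
```

Scores near 1.0 (such as the paper's 0.82 and 0.84) indicate high structural similarity between generated and ground-truth thermal images.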
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551140
Title: Improving a User’s Haptic Perceptual Sensitivity by Optimizing Effective Manipulability of a Redundant User Interface
Teng Li, A. Torabi, Hongjun Xing, M. Tavakoli
Human perceptual sensitivity to various types of forces, e.g., stiffness and friction, is important for surgeons during robotic surgical tasks such as needle insertion and palpation. However, force feedback from a robot end-effector is usually a combination of desired and undesired force components, which can affect the perceptual sensitivity to the desired one. In the presence of undesired forces, improving the perceptual sensitivity to the desired force could benefit robotic surgical outcomes. In this paper, we investigate how users’ perceptual sensitivity to friction and stiffness can be improved by taking advantage of the kinematic redundancy of a user interface. Experimental results indicate that the perceptual sensitivity to both friction and stiffness can be significantly improved by maximizing the effective manipulability of the redundant user interface in its null space. These positive results offer a promising way to enhance surgeons’ haptic perception by exploiting robot redundancy.
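Manipulability measures of the kind optimized above trace back to Yoshikawa's measure w = sqrt(det(J Jᵀ)), which vanishes at kinematic singularities. A toy sketch for a planar 2-link arm follows; the link lengths, the "effective" variant, and the null-space optimization of the paper are not reproduced here:

```python
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """End-effector Jacobian (2x2) of a planar 2-link arm with joint angles q1, q2."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def manipulability(J):
    """Yoshikawa measure sqrt(det(J J^T)); for a square J this equals |det J|."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return abs(det)

w_stretched = manipulability(jacobian_2link(0.0, 0.0))        # fully extended: singular
w_elbow = manipulability(jacobian_2link(0.0, math.pi / 2))    # elbow bent: well conditioned
```

A redundant interface has extra joints, so configurations with the same end-effector pose can differ in manipulability; moving within the null space to the higher-manipulability configuration is the mechanism the paper exploits.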
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551172
Title: Towards Explainable Semantic Segmentation for Autonomous Driving Systems by Multi-Scale Variational Attention
Mohanad Abukmeil, A. Genovese, V. Piuri, F. Rundo, F. Scotti
Explainable autonomous driving systems (EADS) have recently emerged as a combination of explainable artificial intelligence (XAI) and vehicular automation (VA). An EADS explains the events, ambient environment, and engine operations of an autonomous driving vehicle, and delivers explainable results in an orderly manner. Explainable semantic segmentation (ESS) plays an essential role in building EADS, as it offers visual attention that helps drivers stay aware of ambient objects, whether they are roads, pedestrians, animals, or other objects. In this paper, we propose the first ESS model for EADS based on the variational autoencoder (VAE); it uses multiscale second-order derivatives between the latent space and the encoder layers to capture the curvature of the neurons’ responses. Our model, termed Mgrad2VAE, is benchmarked on the SYNTHIA and A2D2 datasets, where it outperforms recent models in terms of image segmentation metrics.
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551115
Title: An Autonomous Semantic Learning Methodology for Fake News Recognition
Yingxu Wang, James Y. Xu
A persistent challenge to AI theories and technologies is fake news recognition, which demands not only syntactic analysis of language expressions but also comprehension of their semantics. This work presents an autonomous system for fake news recognition based on a novel approach to machine semantic learning. A training-free machine learning algorithm, Differential Sentence Semantic Analysis (DSSA), is designed and implemented for fake news detection. A large set of 876 experiments randomly selected from DataCup’19 demonstrates an accuracy of 70.4%, outperforming traditional data-driven neural network technologies, which are normally projected at an accuracy of about 55.0%. The DSSA methodology paves the way towards autonomous, training-free, and real-time trustworthy technologies for machine knowledge learning and semantic composition.
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551144
Title: Trustworthy Adaptation with Few-Shot Learning for Hand Gesture Recognition
E. Rahimian, Soheil Zabihi, A. Asif, S. F. Atashzar, Arash Mohammadi
This work is motivated by the potential of Deep Neural Network (DNN)-based solutions to improve myoelectric control for trustworthy Human-Machine Interfacing (HMI). In this context, we propose the Trustworthy Few-Shot Hand Gesture Recognition (TFS-HGR) framework, a novel DNN-based architecture for performing Hand Gesture Recognition (HGR) from multi-channel surface electromyography (sEMG) signals. The main objective of the TFS-HGR framework is to employ a Few-Shot Learning (FSL) formulation, with a focus on transferring information and knowledge between source and target domains (despite their inherent differences), to address the limited availability of training data. The NinaPro DB5 dataset is used for evaluation. The proposed TFS-HGR achieves 83.17% accuracy for new repetitions with few-shot observations, i.e., 5-way 10-shot classification. Moreover, with an accuracy of 75.29%, TFS-HGR also generalizes to new gestures with few-shot observations, i.e., 5-way 10-shot classification.
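TFS-HGR itself is a DNN architecture, but the N-way K-shot protocol it is evaluated under can be illustrated with the simplest episodic classifier: a nearest-class-centroid rule over embedded support examples. In the sketch below the "embedding" is the identity and the gesture data is synthetic, so this shows only the evaluation protocol, not the paper's model:

```python
def nearest_centroid_predict(support, query):
    """support: {label: list of feature vectors} (the K shots per class);
    returns the label whose class centroid is closest to the query vector."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    # one centroid per class, averaged component-wise over the K support shots
    centroids = {
        label: [sum(col) / len(vecs) for col in zip(*vecs)]
        for label, vecs in support.items()
    }
    return min(centroids, key=lambda label: dist2(centroids[label], query))

# 2-way 2-shot toy episode: two gesture classes in a 2-D feature space.
support = {"rest": [[0.0, 0.1], [0.1, 0.0]],
           "fist": [[1.0, 0.9], [0.9, 1.0]]}
label = nearest_centroid_predict(support, [0.8, 0.95])   # -> "fist"
```

In an actual few-shot HGR evaluation, the feature vectors would be the network's learned embeddings of sEMG windows, and accuracy is averaged over many such episodes.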