Design, Modeling and Control of an Enhanced Soft Pneumatic Network Actuator
Pub Date: 2021-03-02 | DOI: 10.1142/S0219843621500043
Guizhou Cao, B. Chu, Benyan Huo, Yanhong Liu
Inspired by nature, soft-bodied pneumatic network actuators (PNAs) made of compliant materials have been successfully applied in industry and daily life because of their large-amplitude motion and long life span. However, compliant materials also limit the output force, complicate dynamic modeling and impede the corresponding control. In this paper, we investigate the design, modeling and control of an enhanced PNA. First, an enhanced structure is proposed to improve the output force of PNAs while keeping fabrication simple, weight low and the material compliant. Second, a dynamic model of the enhanced PNA is constructed based on the Euler–Lagrange (EL) method. Finally, an adaptive robust controller is designed for PNAs in the presence of system uncertainties without prior knowledge of their bounds. Experimental results show that the output force of the enhanced PNA is four times greater than that of the actuator without the enhanced structure, which agrees with the theoretical estimate. Moreover, the proposed controller is compared with previous works in humanoid finger experiments to illustrate its effectiveness.
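The abstract does not give the controller equations. As a purely illustrative sketch of the kind of adaptive robust law described, the snippet below simulates a one-degree-of-freedom Euler-Lagrange model of a bending actuator (m*qdd + c*qd + k*q = tau + d) with online estimation of the unknown parameters and an adapted robust gain, so no disturbance bound is assumed. All numerical values, the regressor structure and the gains are assumptions, not the authors' design.

```python
import numpy as np

# One-DOF Euler-Lagrange model of a bending actuator: m*qdd + c*qd + k*q = tau + d(t).
# Every number below is an illustrative assumption, not a parameter from the paper.
m_true, c_true, k_true = 0.012, 0.08, 0.45
dt, steps = 1e-3, 3000
lam, Kd, eps = 20.0, 0.5, 0.02                 # sliding gain, damping gain, boundary layer
Gamma = np.diag([1e-4, 1e-3, 1e-2])            # adaptation gains for the [m, c, k] estimates
g_eta, sigma = 0.5, 0.01                       # robust-gain adaptation rate and leakage
w = np.pi                                      # reference frequency (rad/s)

q, qd = 0.0, 0.0
theta_hat = np.zeros(3)                        # parameter estimates (start from zero)
eta_hat = 0.0                                  # adapted robust gain (no disturbance bound assumed)

for i in range(steps):
    t = i * dt
    qr = 0.3 * (1.0 - np.cos(w * t))           # desired bending angle and its derivatives
    qrd = 0.3 * w * np.sin(w * t)
    qrdd = 0.3 * w * w * np.cos(w * t)

    e, ed = q - qr, qd - qrd
    s = ed + lam * e                           # sliding-type tracking error
    a_r = qrdd - lam * ed                      # "reference acceleration"
    Y = np.array([a_r, qd, q])                 # linear-in-parameters regressor

    # Adaptive robust control law: model compensation + feedback + smoothed robust term.
    tau = Y @ theta_hat - Kd * s - eta_hat * np.tanh(s / eps)

    # Parameter and robust-gain adaptation (the leakage term keeps eta_hat bounded).
    theta_hat -= dt * (Gamma @ Y) * s
    eta_hat += dt * (g_eta * abs(s) - sigma * eta_hat)

    # Plant integration with a bounded, unknown disturbance d(t).
    d = 0.02 * np.sin(3.0 * t)
    qdd = (tau + d - c_true * qd - k_true * q) / m_true
    qd += dt * qdd
    q += dt * qd

print(f"final bending-angle error: {q - 0.3 * (1.0 - np.cos(w * steps * dt)):.4f} rad")
```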
{"title":"Design, Modeling and Control of an Enhanced Soft Pneumatic Network Actuator","authors":"Guizhou Cao, B. Chu, Benyan Huo, Yanhong Liu","doi":"10.1142/S0219843621500043","DOIUrl":"https://doi.org/10.1142/S0219843621500043","url":null,"abstract":"Inspired by nature, soft-bodied pneumatic network actuators (PNAs) composed of compliant materials have been successfully applied in the fields of industry and daily life because of large-amplitude motion and long life span. However, compliant materials simultaneously limit the output force, challenge the dynamic modeling and impede corresponding control. In this paper, we investigate the design, modeling and control of an enhanced PNA. First, an enhanced structure is proposed to improve the output force of PNAs with features of simplification of fabrication, lightweight and compliant material retentivity. Second, a dynamic model of the enhanced PNA is constructed based on the Euler–Lagrange (EL) method. Finally, an adaptive robust controller is addressed for PNAs in presence of system uncertainties without knowledge of its bounds in prior. Experiment results show that the output force of the enhanced PNA is four times greater than the actuator without enhanced structures, which affords to theoretical estimation. Moreover, the proposed controller is utilized and compared with previous works in humanoid finger experiments to illustrate the effectiveness.","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134186410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-IMF Sample Entropy Features with Machine Learning for Surface Texture Recognition Based on Robot Tactile Perception
Pub Date: 2021-02-24 | DOI: 10.1142/S0219843621500055
Shiliang Shao, Ting Wang, Yun Su, Chen Yao, Chunhe Song, Zhaojie Ju
Discrimination of surface textures using tactile sensors has attracted increasing attention, and intelligent robots able to recognize and discriminate the surface textures of grasped objects are crucial. In this paper, a novel method for surface texture classification based on tactile signals is proposed. First, the tactile signals of each channel (X, Y, Z, and S) are decomposed using empirical mode decomposition (EMD) to obtain the intrinsic mode functions (IMFs). Second, the sample entropy of each IMF is calculated, yielding the multi-IMF sample entropy (MISE) features. Finally, on two public datasets, a variety of machine learning algorithms are used to recognize different textures. The results show that the SVM classifier with the proposed MISE features achieves the highest classification accuracy. The MISE features combined with the SVM classifier thus provide a novel approach to surface texture recognition based on tactile perception.
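As a hedged sketch of the MISE pipeline described above (EMD per channel, sample entropy per IMF, SVM classification), the code below implements sample entropy with NumPy and relies on the PyEMD package for the decomposition. The channel layout, the number of IMFs kept and the SVM settings are assumptions, and the dataset handling is shown only as commented, hypothetical usage.

```python
import numpy as np
from PyEMD import EMD                      # assumed EMD implementation (pip install EMD-signal)

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r, n = r_factor * np.std(x), len(x)

    def count_pairs(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_pairs(m), count_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def mise_features(channel_signal, max_imfs=4):
    """Decompose one tactile channel with EMD and take the sample entropy of each IMF."""
    imfs = EMD().emd(np.asarray(channel_signal, dtype=float))[:max_imfs]
    feats = [sample_entropy(imf) for imf in imfs]
    feats += [0.0] * (max_imfs - len(feats))       # pad if fewer IMFs were produced
    return np.array(feats)

# Hypothetical usage: samples has shape (n_samples, 4, n_points) for the X, Y, Z and S channels.
# from sklearn.svm import SVC
# from sklearn.pipeline import make_pipeline
# from sklearn.preprocessing import StandardScaler
# X_feat = np.stack([np.concatenate([mise_features(ch) for ch in s]) for s in samples])
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X_feat, y_train)
```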
{"title":"Multi-IMF Sample Entropy Features with Machine Learning for Surface Texture Recognition Based on Robot Tactile Perception","authors":"Shiliang Shao, Ting Wang, Yun Su, Chen Yao, Chunhe Song, Zhaojie Ju","doi":"10.1142/S0219843621500055","DOIUrl":"https://doi.org/10.1142/S0219843621500055","url":null,"abstract":"Discrimination of surface textures using tactile sensors has attracted increasing attention. Intelligent robotics with the ability to recognize and discriminate the surface textures of grasped objects are crucial. In this paper, a novel method for surface texture classification based on tactile signals is proposed. For the proposed method, first, the tactile signals of each channel (X, Y, Z, and S) are decomposed based on empirical mode decomposition (EMD). Then, the intrinsic mode functions (IMFs) are obtained. Second, based on the multiple IMFs, the sample entropy is calculated for each IMF. Therefore, the multi-IMF sample entropy (MISE) features are obtained. Last but not least, based on the two public datasets, a variety of machine learning algorithms are used to recognize different textures. The results show that the SVM classification method, with the proposed MISE features, achieves the highest classification accuracy. Undeniably, the MISE features with the SVM method, proposed in this paper, provide a novel idea for the recognition of surface texture based on tactile perception.","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"734 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115130486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Humanoid Robots and Autistic Children: A Review on Technological Tools to Assess Social Attention and Engagement
Pub Date: 2020-12-11 | DOI: 10.1142/s0219843620300019
F. Alnajjar, M. Cappuccio, Omar Mubin, R. Arshad, S. Shahid
Recent studies suggest that robot-based interventions are potentially effective in the diagnosis and therapy of autism spectrum disorder (ASD), demonstrating that robots can improve engagement and attention in autistic children. While methodological approaches vary significantly across these studies and are not yet unified, researchers often develop similar solutions based on similar conceptual and practical premises. We systematically review the latest robot-intervention techniques in ASD research (18 research papers), comparing multiple dimensions of technological and experimental implementation. In particular, we focus on sensor-based assessment systems for automated and unbiased quantitative assessments of children’s engagement and attention fluctuations during interaction with robots. We examine the related technologies, experimental and methodological setups, and the empirical investigations they support. We aim to assess the strengths and limitations of such approaches in a diagnostic context and to evaluate their potential in increasing our knowledge of autism and in supporting the development of social skills and attentional dispositions in children with ASD. Based on the results of this overview, we propose a set of social cues and interaction techniques that appear most beneficial in robot-assisted autism intervention.
{"title":"Humanoid Robots and Autistic Children: A Review on Technological Tools to Assess Social Attention and Engagement","authors":"F. Alnajjar, M. Cappuccio, Omar Mubin, R. Arshad, S. Shahid","doi":"10.1142/s0219843620300019","DOIUrl":"https://doi.org/10.1142/s0219843620300019","url":null,"abstract":"Recent studies suggest that robot-based interventions are potentially effective in diagnosis and therapy of autism spectrum disorder (ASD), demonstrating that robots can improve the engagement abilities and attention in autistic children. While methodological approaches vary significantly in these studies and are not unified yet, researchers often develop similar solutions based on similar conceptual and practical premises. We systematically review the latest robot-intervention techniques in ASD research (18 research papers), comparing multiple dimensions of technological and experimental implementation. In particular, we focus on sensor-based assessment systems for automated and unbiased quantitative assessments of children’s engagement and attention fluctuations during interaction with robots. We examine related technologies, experimental and methodological setups, and the empirical investigations they support. We aim to assess the strengths and limitations of such approaches in a diagnostic context and to evaluate their potential in increasing our knowledge of autism and in supporting the development of social skills and attentional dispositions in ASD children. Using our acquired results from the overview, we propose a set of social cues and interaction techniques that can be thought to be most beneficial in robot-related autism intervention.","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122074826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech Envelope Dynamics for Noise-Robust Auditory Scene Analysis in Robotics
Pub Date: 2020-12-11 | DOI: 10.1142/s0219843620500231
F. Rea, Austin Kothig, Lukas Grasse, Matthew S. Tata
Humans make extensive use of auditory cues to interact with other humans, especially in challenging real-world acoustic environments. Multiple distinct acoustic events usually mix together in a complex auditory scene, and the ability to separate and localize mixed sounds in such scenes remains a demanding skill for binaural robots, which must disambiguate and interpret the environment with only two sensors. At the same time, robots that interact with humans should be able to gain insights about the speakers in the environment, such as how many speakers are present and where they are located. For this reason, the speech signal is distinctly important among the auditory stimuli commonly found in human-centered acoustic environments. In this paper, we propose a Bayesian method of selectively processing acoustic data that exploits the characteristic amplitude envelope dynamics of human speech to infer the location of speakers in a complex auditory scene. We demonstrate the effectiveness of this speech-specific temporal dynamics approach and compare it with more traditional methods based on amplitude detection alone.
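The paper's exact Bayesian formulation is not spelled out in the abstract. The sketch below illustrates the general idea under stated assumptions: the 2–8 Hz amplitude-envelope modulation energy serves as a speech-likeness weight, frame-wise GCC-PHAT gives a time difference of arrival between the two microphones, and the evidence is accumulated over discretized azimuths. Microphone spacing, frame length and the likelihood width are illustrative values, not those of the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 16000                  # sample rate (Hz)
MIC_DIST = 0.15             # assumed microphone spacing (m)
C = 343.0                   # speed of sound (m/s)
AZIMUTHS = np.linspace(-90.0, 90.0, 61)          # candidate directions (deg)

def speech_likeness(frame):
    """Relative energy of the 2-8 Hz amplitude-envelope modulation: a crude speech indicator."""
    env = np.abs(hilbert(frame))
    b, a = butter(2, [2.0 / (FS / 2), 8.0 / (FS / 2)], btype="band")
    mod = filtfilt(b, a, env)
    return np.sum(mod ** 2) / (np.sum(env ** 2) + 1e-12)

def gcc_phat_tdoa(left, right, max_tau):
    """Time difference of arrival via GCC-PHAT, limited to physically possible lags."""
    n = 2 * len(left)
    X = np.fft.rfft(left, n) * np.conj(np.fft.rfft(right, n))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n)
    max_shift = int(max_tau * FS)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / FS

def localize(left, right, frame_len=4096):
    """Accumulate a log-posterior over azimuth, weighting frames by speech-likeness."""
    log_post = np.zeros_like(AZIMUTHS)
    max_tau = MIC_DIST / C
    expected = MIC_DIST * np.sin(np.deg2rad(AZIMUTHS)) / C   # TDOA expected per azimuth
    for s in range(0, len(left) - frame_len, frame_len):
        L, R = left[s:s + frame_len], right[s:s + frame_len]
        w = speech_likeness(L + R)
        tau = gcc_phat_tdoa(L, R, max_tau)
        log_post += w * (-((tau - expected) ** 2) / (2 * (0.1 * max_tau) ** 2))
    post = np.exp(log_post - log_post.max())
    return AZIMUTHS[np.argmax(post)], post / post.sum()
```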
{"title":"Speech Envelope Dynamics for Noise-Robust Auditory Scene Analysis in Robotics","authors":"F. Rea, Austin Kothig, Lukas Grasse, Matthew S. Tata","doi":"10.1142/s0219843620500231","DOIUrl":"https://doi.org/10.1142/s0219843620500231","url":null,"abstract":"Humans make extensive use of auditory cues to interact with other humans, especially in challenging real-world acoustic environments. Multiple distinct acoustic events usually mix together in a complex auditory scene. The ability to separate and localize mixed sound in complex auditory scenes remains a demanding skill for binaural robots. In fact, binaural robots are required to disambiguate and interpret the environmental scene with only two sensors. At the same time, robots that interact with humans should be able to gain insights about the speakers in the environment, such as how many speakers are present and where they are located. For this reason, the speech signal is distinctly important among auditory stimuli commonly found in human-centered acoustic environments. In this paper, we propose a Bayesian method of selectively processing acoustic data that exploits the characteristic amplitude envelope dynamics of human speech to infer the location of speakers in the complex auditory scene. The goal was to demonstrate the effectiveness of this speech-specific temporal dynamics approach. Further, we measure how effective this method is in comparison with more traditional methods based on amplitude detection only.","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121893344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improve Inter-day Hand Gesture Recognition Via Convolutional Neural Network-based Feature Fusion
Pub Date: 2020-12-11 | DOI: 10.1142/s0219843620500255
Yinfeng Fang, Xuguang Zhang, Dalin Zhou, Honghai Liu
Learning inter-day representations of electromyographic (EMG) signals recorded across multiple days remains a challenging and not yet fully solved problem. This study aims to improve inter-day hand motion classification accuracy via convolutional neural network (CNN)-based feature fusion. An EMG database (ISRMyo-I) was recorded from six subjects over 10 days using a low-density electrode setting. The study investigated the CNN's capability for feature learning and found that the output of the first fully connected layer (CNNFeats) is a useful supplement to the widely used Hudgins time-domain features combined with fourth-order autoregressive coefficients (TDAR). Adding the automatically learned CNNFeats to the handcrafted TDAR feature set improved the accuracy of both linear discriminant analysis (LDA) and support vector machine (SVM) classifiers by approximately 3%. Similarly, taking TDAR as an additional input to the CNN improved accuracy by approximately 1% compared with the basic CNN. The results also demonstrate that the CNN approach outperformed conventional approaches when data from multiple subjects were available for training, whereas traditional approaches were more adept at representing motion patterns for a single subject. A preliminary conclusion is that substantial "common knowledge/features" can be learned by CNNs from raw EMG signals across multiple days and subjects, and a pre-trained CNN model is therefore expected to contribute to higher accuracy as well as a reduced learning burden.
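As a sketch of the handcrafted side of the fusion, the code below computes Hudgins time-domain features plus fourth-order autoregressive coefficients (TDAR) per channel and concatenates them with CNN penultimate-layer activations. The CNN itself, the threshold values and the classifier settings are assumptions, and the usage is shown only as commented, hypothetical calls.

```python
import numpy as np

def hudgins_td(x, eps=1e-3):
    """Hudgins time-domain features: MAV, waveform length, zero crossings, slope sign changes.
    The amplitude threshold eps is an illustrative assumption."""
    mav = np.mean(np.abs(x))
    wl = np.sum(np.abs(np.diff(x)))
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) > eps))
    dx = np.diff(x)
    ssc = np.sum((dx[:-1] * dx[1:] < 0) & ((np.abs(dx[:-1]) > eps) | (np.abs(dx[1:]) > eps)))
    return np.array([mav, wl, zc, ssc], dtype=float)

def ar_coeffs(x, order=4):
    """Fourth-order autoregressive coefficients estimated by least squares."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def tdar_features(window):
    """TDAR feature vector for a multi-channel sEMG window of shape (n_channels, n_samples)."""
    return np.concatenate([np.r_[hudgins_td(ch), ar_coeffs(ch)] for ch in window])

def fuse(tdar, cnn_feats):
    """Concatenate handcrafted TDAR features with CNN penultimate-layer activations
    (in practice both would be standardized per feature before classification)."""
    return np.concatenate([tdar, cnn_feats])

# Hypothetical usage with an already-trained CNN exposing its penultimate activations:
# fused = np.stack([fuse(tdar_features(w), cnn_penultimate(w)) for w in windows])
# from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# LinearDiscriminantAnalysis().fit(fused, labels)
```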
{"title":"Improve Inter-day Hand Gesture Recognition Via Convolutional Neural Network-based Feature Fusion","authors":"Yinfeng Fang, Xuguang Zhang, Dalin Zhou, Honghai Liu","doi":"10.1142/s0219843620500255","DOIUrl":"https://doi.org/10.1142/s0219843620500255","url":null,"abstract":"The learning of inter-day representation of electromyographic (EMG) signals across multiple days remains a challenging topic and not fully accommodated yet. This study aims to improve the inter-day hand motion classification accuracy via convolutional neural network (CNN)-based data feature fusion. An EMG database (ISRMyo-I) was recorded from six subjects on 10 days via a low density electrode setting. This study investigated CNNs’ capability of feature learning, and found that the output of the first fully connected layer (CNNFeats) was a decent supplement feature set to the most prevalent Hudgins’ time domain features in combination with fourth-order autoregressive coefficients (TDAR). Through adding the automatically learned CNNFeats to the handcrafted TDAR feature set, both linear discriminant analysis (LDA) and support vector machine (SVM) classifiers received [Formula: see text]3% accuracy improvement. Similarly, taking TDAR as additional input to the CNN improved the accuracy by [Formula: see text]1% in the comparison with the basic CNN. Our results also demonstrated that the CNN approach outperformed conventional approaches when multiple subjects’ data were available for training, while traditional approaches were more adept at presenting motion patterns for single subject. A preliminary conclusion is drawn that substantial “common knowledge/features” can be learned by CNNs from the raw EMG signals across multiple days and multiple subjects, and thus it is believed that a pre-trained CNN model would contribute to higher accuracy as well as the reduction of learning burden.","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129540220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electrotactile Feedback-Based Muscle Fatigue Alleviation for Hand Manipulation
Pub Date: 2020-11-25 | DOI: 10.1142/s0219843620500243
Kairu Li, Yu Zhou, Dalin Zhou, Jia Zeng, Yinfeng Fang, Junyou Yang, Honghai Liu
Tactile feedback helps to improve hand prosthesis performance, alleviate phantom pain and reduce muscle fatigue. During manipulation, muscle fatigue not only causes discomfort to prosthesis users but also disturbs surface electromyographic (sEMG)-based motion recognition, which significantly degrades the functional performance of the prosthesis. Efforts have been made to explore signal processing algorithms that are less influenced by muscle fatigue; however, few studies address how to alleviate muscle fatigue directly. This study therefore proposes a novel method to avoid excessive muscle fatigue based on electrotactile feedback. A portable electrotactile stimulator is developed with adjustable parameters, multiple channels and wireless communication. It is integrated into a virtual hand grasping platform driven by sEMG signals to investigate the impact of tactile feedback on muscle fatigue. Experimental results show a higher success rate of grasping with electrotactile feedback than without feedback. Moreover, compared with grasping in the no-feedback condition, there is an observable decrease in sEMG intensity when grasping a heavy object with electrotactile feedback, despite comparable performance on light and medium objects in both conditions. This indicates that tactile feedback helps to alleviate muscle fatigue caused by excessive muscle contraction, especially when large grasping force is needed.
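A minimal sketch of the kind of sEMG intensity comparison reported above, assuming a windowed RMS measure of contraction intensity; the sampling rate, window length and trial organization are illustrative assumptions, not the paper's analysis pipeline.

```python
import numpy as np

def semg_intensity(emg, fs=1000, win_s=0.2):
    """Windowed RMS of one sEMG channel, a common proxy for contraction intensity."""
    win = int(win_s * fs)
    n = len(emg) // win
    segs = np.reshape(np.asarray(emg[:n * win], dtype=float), (n, win))
    return np.sqrt(np.mean(segs ** 2, axis=1))

def compare_conditions(trials_feedback, trials_no_feedback, fs=1000):
    """Mean grasp-phase intensity per condition; a lower value under electrotactile feedback
    would be consistent with reduced muscle contraction for the same task."""
    mean_fb = np.mean([semg_intensity(t, fs).mean() for t in trials_feedback])
    mean_nf = np.mean([semg_intensity(t, fs).mean() for t in trials_no_feedback])
    return mean_fb, mean_nf
```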
{"title":"Electrotactile Feedback-Based Muscle Fatigue Alleviation for Hand Manipulation","authors":"Kairu Li, Yu Zhou, Dalin Zhou, Jia Zeng, Yinfeng Fang, Junyou Yang, Honghai Liu","doi":"10.1142/s0219843620500243","DOIUrl":"https://doi.org/10.1142/s0219843620500243","url":null,"abstract":"Tactile feedback is beneficial to improve the hand prosthesis performance, alleviate phantom pain, reduce muscle fatigue, etc. During the manipulation process, muscle fatigue not only causes discomfort to prosthesis users but also disturbs the surface electromyographic (sEMG)-based motion recognition, which significantly deteriorates the prosthesis functional performance. Efforts have been made to explore appropriate signal processing algorithms which could be less influenced by muscle fatigue. However, few studies concern how to alleviate muscle fatigue directly. Thus, this study proposes a novel method to avoid excessive muscle fatigue based on electrotactile feedback. A potable electrotactile stimulator is developed with adjustable parameters, multiple channels and wireless communication. It is implemented in a virtual hand grasping platform driven by sEMG signals to investigate the impact of tactile feedback on muscle fatigue. Experimental results show a higher success rate of grasping with electrotactile feedback than that with no feedback. Moreover, compared with grasp in the no feedback condition, there is an observable decrease of sEMG intensity when grasping a heavy object with electrotactile feedback, despite a comparable performance on the light and medium objects in both feedback conditions. It indicates that tactile feedback helps to alleviate muscle fatigue caused by excessive muscle contraction, especially when large strength is needed.","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114837588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Singular Configurations and Underactuated Balance Control for Biped Robot with Point Contact
Pub Date: 2020-11-06 | DOI: 10.1142/s021984362050022x
Jian Tian, Cheng Wei, Yang Zhao
This paper presents a hierarchical controller for a biped robot with point contact, based on its underactuated dynamics and the optimal distribution of contact forces, and extracts the corresponding singular configurations. Unlike multi-legged and large-sole robots, a biped robot with point contact cannot maintain a stable standing state, let alone adjust its attitude and resist external impacts. The support domain of a point-foot bipedal robot degenerates into a line segment between the two footholds, introducing an underactuated characteristic that challenges traditional algorithms based on polygonal support domains and full-variable optimization. To fully exploit the dynamic connection between the support line and balance control, an accurate model is established as feedforward terms using a virtual leg and a floating reference frame. The dynamics model is decomposed into an underactuated module and a force distribution module for hierarchical control: the former determines the control forces of the base and the singularities of the robot configuration, while the latter distributes forces to each leg according to its capability by solving a constrained quadratic program. The results verify the improved stability of the point-contact biped in attitude adjustment and under external impacts compared with model predictive control, with the improvement guided by the robot's singular configurations.
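The force distribution module is described as a constrained quadratic program. The sketch below shows one plausible planar formulation, assuming cvxpy: a desired base wrench is split between two point feet under unilateral-contact and friction-cone constraints. The grasp map, friction coefficient and all numbers are illustrative, not the paper's formulation.

```python
import numpy as np
import cvxpy as cp

def distribute_forces(p_left, p_right, com, w_des, mu=0.6, w_reg=1e-3):
    """Distribute a desired sagittal-plane base wrench w_des = [Fx, Fz, My] between two
    point feet at p_left / p_right, subject to unilateral and friction constraints."""
    def grasp_cols(p):
        rx, rz = p[0] - com[0], p[1] - com[1]
        # Columns map a foot force [fx, fz] to the base wrench [Fx, Fz, My] (My = rz*fx - rx*fz).
        return np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [rz, -rx]])
    G = np.hstack([grasp_cols(p_left), grasp_cols(p_right)])   # 3 x 4 grasp map

    f = cp.Variable(4)                                          # [fxL, fzL, fxR, fzR]
    objective = cp.Minimize(cp.sum_squares(G @ f - w_des) + w_reg * cp.sum_squares(f))
    constraints = [f[1] >= 0, f[3] >= 0,                        # feet can only push
                   cp.abs(f[0]) <= mu * f[1],                   # friction cones
                   cp.abs(f[2]) <= mu * f[3]]
    cp.Problem(objective, constraints).solve()
    return f.value

# Illustrative call: desired wrench compensating gravity plus a small pitch torque.
forces = distribute_forces(p_left=(-0.1, 0.0), p_right=(0.1, 0.0), com=(0.0, 0.8),
                           w_des=np.array([0.0, 40.0 * 9.81, 5.0]))
print(forces)
```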
{"title":"Singular Configurations and Underactuated Balance Control for Biped Robot with Point Contact","authors":"Jian Tian, Cheng Wei, Yang Zhao","doi":"10.1142/s021984362050022x","DOIUrl":"https://doi.org/10.1142/s021984362050022x","url":null,"abstract":"This paper presents a hierarchical controller and extracts the singular configurations based on underactuated system and optimal distribution of forces. Unlike multi-legged and large-sole robots, biped robot with point contact cannot maintain a stable standing state, much less to adjust attitude and resist external impact. The support domain of point-foot bipedal robot degenerates into a line segment consisting of two footholds, introducing the underdactuated characteristic that challenges traditional algorithms based on polygonal domains and full variable optimization. To fully exploit the dynamic connection between support line and balance control, the accurate model is established as feedforward terms using virtual leg and floating reference system. The dynamics model is decomposed into an underactuated module and a force distribution module for hierarchical control, in which the former determines the control forces of base and the singularity corresponding to robot configuration, and the latter distributes forces on each leg according to its capability by solving a quadratic programming with constraints. The results verify the advanced stability of attitude adjustment and impact from external force of biped robot with point contact comparing to model predictive control, which is improved based on robot’s singular configuration.","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132670323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TELESAR VI: Telexistence Surrogate Anthropomorphic Robot VI
Pub Date: 2020-10-01 | DOI: 10.1142/s021984362050019x
S. Tachi, Y. Inoue, F. Kato
Telexistence refers to the general technology that allows humans to experience the real-time sensation of being in another place, interacting with a remote environment, which may be real, virtual, or a combination of both. It also refers to an advanced type of teleoperation system that allows an operator behind the controls to perform remote tasks dexterously with the feeling of being in a surrogate robot working in a remote environment. Telexistence in a real environment through a virtual environment is also possible. The concept was originally proposed by the first author in 1980, and its feasibility has been demonstrated through the construction of alter-ego robot systems called Telexistence Surrogate Anthropomorphic Robot (TELESAR) I–V. TELESAR VI is a newly developed telexistence platform for the ACCEL Embodied Media Project. It was designed and implemented with a mechanically unconstrained full-body master cockpit and a 67 degree-of-freedom (DOF) anthropomorphic avatar robot. The avatar robot can operate in a sitting position since the main area of operation is intended to be manipulation and gesture. The system provides a full-body experience of our "extended body schema," which allows users to maintain an up-to-date representation in space of the positions of their different body parts, including their head, torso, arms, hands, and legs. All ten fingers of the avatar robot are equipped with force, vibration, and temperature sensors and can faithfully transmit these elements of haptic information. Thus, the combined use of the robot and audiovisual information actualizes the remote sense of existence, as if the users physically existed there, with the avatar robot serving as their new body. With this experience, users can perform tasks dexterously and feel the robot's body as their own, which provides the simplest and most fundamental experience of remote existence.
{"title":"TELESAR VI: Telexistence Surrogate Anthropomorphic Robot VI","authors":"S. Tachi, Y. Inoue, F. Kato","doi":"10.1142/s021984362050019x","DOIUrl":"https://doi.org/10.1142/s021984362050019x","url":null,"abstract":"Telexistence refers to the general technology that allows humans to experience the real-time sensation of being in another place, interacting with a remote environment, which may be real, virtual, or a combination of both. It also refers to an advanced type of teleoperation system that allows an operator behind the controls to perform remote tasks dexterously with the feeling of being in a surrogate robot working in a remote environment. Telexistence in a real environment through a virtual environment is also possible. The concept was originally proposed by the ̄rst author in 1980, and its feasibility has been demonstrated through the construction of alter-ego robot systems called Telexistence Surrogate Anthropomorphic Robot (TELESAR) I–V. TELESAR VI is a newly developed telexistence platform for the ACCEL Embodied Media Project. It was designed and implemented with a mechanically unconstrained full-body master cockpit and a 67 degree of freedom (DOF) anthropomorphic avatar robot. The avatar robot can operate in a sitting position since the main area of operation is intended to be manipulation and gestural. The system provides a full-body experience of our extended body schema,\" which allows users to maintain an up-to-date representation in space of the positions of their di®erent body parts, including their head, torso, arms, hands, and legs. All ten ̄ngers of the avatar robot are equipped with force, vibration, and temperature sensors and can faithfully transmit these elements of haptic information. Thus, the combined use of the robot and audiovisual information actualizes the remote sense of existence, as if the users physically existed there, with the avatar robot serving as their new body. With this experience, users can perform tasks dexterously and feel the robot's body as their own, which provides the most simple and fundamental experience of a remote existence.","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133198072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RESTCub: A Language Independent Middleware for Cognitive Robot
Pub Date: 2020-10-01 | DOI: 10.1142/s0219843620500206
Muhammad Ali Dildar, Muhammad Asif, Asma Kanwal, M. Ahmad, S. A. Gilani
Over the last few decades, robotics research has emphasized the modeling and development of cognitive machines. A cognitive machine combines multiple cognitive capabilities that must be programmed to make it artificially intelligent. Numerous cognitive modules interact to mimic human behavior in machines, resulting in such a heavily coupled system that a minor change in logic or hardware may affect a large number of its modules. To address this problem, several middlewares exist to ease the development of cognitive machines. Although these layers decouple logic building from the communication infrastructure of modules, they are language-dependent and have their limitations. A cognitive module developed for one research project cannot easily be reused in another, resulting in repeated re-invention of the wheel. This paper proposes a RESTful technology-based framework that provides language-independent access to low-level control of the iCub's sensory-motor system. Moreover, the model is flexible enough to provide hybrid communication between cognitive modules running on different platforms and operating systems. Furthermore, a cognitive client is developed to test the proposed model. Experimental analysis over different scenarios shows the effectiveness of the proposed framework.
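The actual RESTCub endpoints are not specified in the abstract. The sketch below, assuming Flask and a stubbed motor interface in place of the iCub's real low-level drivers, only illustrates the general pattern of exposing joint state through language-independent HTTP/JSON routes; the paths and payload fields are hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stub standing in for the robot's low-level motor interface (e.g. a YARP-based driver);
# a real middleware would forward these calls to the iCub's sensory-motor system.
joint_state = {"head": [0.0] * 6, "right_arm": [0.0] * 16}

@app.route("/parts/<part>/joints", methods=["GET"])
def get_joints(part):
    """Language-independent read access: any HTTP client can query joint positions."""
    if part not in joint_state:
        return jsonify({"error": "unknown part"}), 404
    return jsonify({"part": part, "positions": joint_state[part]})

@app.route("/parts/<part>/joints", methods=["POST"])
def set_joints(part):
    """Accept a JSON command such as {"positions": [...]} and forward it to the driver stub."""
    if part not in joint_state:
        return jsonify({"error": "unknown part"}), 404
    cmd = request.get_json(force=True)
    positions = cmd.get("positions")
    if not isinstance(positions, list) or len(positions) != len(joint_state[part]):
        return jsonify({"error": "bad command"}), 400
    joint_state[part] = [float(v) for v in positions]      # stubbed actuation
    return jsonify({"status": "ok", "part": part})

if __name__ == "__main__":
    app.run(port=8080)
```

A client in any language then needs only HTTP and JSON, for example: curl -X POST http://localhost:8080/parts/head/joints -H "Content-Type: application/json" -d '{"positions": [0, 0, 0, 0, 0, 0]}'.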
{"title":"RESTCub: A Language Independent Middleware for Cognitive Robot","authors":"Muhammad Ali Dildar, Muhammad Asif, Asma Kanwal, M. Ahmad, S. A. Gilani","doi":"10.1142/s0219843620500206","DOIUrl":"https://doi.org/10.1142/s0219843620500206","url":null,"abstract":"Since the last few decades, research in the area of robotics technology has been emphasizing in the modeling and development of cognitive machines. A cognitive machine can have multiple cognitive capabilities to be programmed to make it artificially intelligent. Numerous cognitive modules interact to mimic human behavior in machines and result in such a heavily coupled system that a minor change in logic or hardware may affect a large number of its modules. To address such a problem, several middlewares exist to ease the development of cognitive machines. Although these layers decouple the process of logic building and communication infrastructure of modules, they are language-dependent and have their limitations. A cognitive module developed for one research work cannot be a part of another research work resulting in the re-invention of the wheel. This paper proposes a RESTful technology-based framework that provides language-independent access to low-level control of the iCub’s sensory-motor system. Moreover, the model is flexible enough to provide hybrid communications between cognitive modules running on different platforms and operating systems. Furthermore, a cognitive client is developed to test the proposed model. The experimental analysis performed by creating different scenarios shows the effectiveness of the proposed framework.","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129609222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Nonlinear Optimal Control Approach for a Lower-Limb Robotic Exoskeleton
Pub Date: 2020-07-30 | DOI: 10.1142/s0219843620500188
G. Rigatos, M. Abbaszadeh, J. Pomares, P. Wira
The use of robotic limb exoskeletons is growing fast, either for rehabilitation purposes or to enhance human ability to lift heavy objects or to walk long distances without fat...
{"title":"A Nonlinear Optimal Control Approach for a Lower-Limb Robotic Exoskeleton","authors":"G. Rigatos, M. Abbaszadeh, J. Pomares, P. Wira","doi":"10.1142/s0219843620500188","DOIUrl":"https://doi.org/10.1142/s0219843620500188","url":null,"abstract":"The use of robotic limb exoskeletons is growing fast either for rehabilitation purposes or in an aim to enhance human ability for lifting heavy objects or for walking for long distances without fat...","PeriodicalId":312776,"journal":{"name":"Int. J. Humanoid Robotics","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122828831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}