Machine Learning Based Position Prediction of a Target Tracked by Multi-Aperture Positioning System
Luis Garcia, U. Bielke, C. Neumann, Rainer Börret
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0305 | Int. J. Autom. Technol., pp. 305-313
This paper proposes a machine learning-based position prediction approach to determine the position of a light-emitting diode (LED) target using a new measuring system called the multi-aperture positioning system (MAPS). The measurement system is based on a photogrammetric approach using an aperture mask and a single camera sensor. To achieve high accuracy in the position calculation, several computationally expensive algorithms are used. The accuracy of the system is equal to or better than that of existing photogrammetric devices. We investigate whether a neural network (NN) can replace the algorithms currently used in the system software to increase the measurement frequency while maintaining similar accuracy. Simulated images are used to train the NN, while real images are used to measure its performance. Previously, various algorithms were used to calculate the position of the target from the captured images. Our approach is to train an NN on thousands of labeled images to predict the position of the target from these images. We also investigate whether systematic measurement errors can be avoided: not all factors affecting the measurement precision are known, some cannot always be determined accurately, and others change over time. When an NN is used, all information contained in the images is learned by the model, incorporating all influences present at the time of training. Results show that the trained NN can achieve performance similar to that of the previously used Gaussian algorithm in less time, since no filters or other pre-processing of the images are required. This factor directly affects the measurement frequency of the MAPS. The light spot center was detected with sub-pixel accuracy and, in contrast to some of the previously used algorithms, without systematic errors. The simulation of the sensor images needs to be improved to investigate the full potential of the NN.
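The abstract leaves the network architecture unspecified, so the following is only a minimal sketch of the idea under assumed parameters: a small convolutional regressor, trained on simulated images with known spot centers, that maps a raw 64×64 sensor crop directly to a sub-pixel (x, y) center with no filtering or other pre-processing. The layer sizes and crop size are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class SpotCenterNet(nn.Module):
    """Minimal CNN regressor: raw sensor crop in, sub-pixel (x, y) centre out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A 64x64 input is pooled twice -> 16x16 feature maps with 32 channels
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))

    def forward(self, img):                    # img: (batch, 1, 64, 64)
        return self.head(self.features(img))  # (batch, 2) centre in pixels

# Training pairs would come from the simulated images, whose true centres
# are known; real images are then used only for evaluation.
model = SpotCenterNet()
pred = model(torch.randn(4, 1, 64, 64))  # dummy batch of sensor crops
```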
{"title":"Machine Learning Based Position Prediction of a Target Tracked by Multi-Aperture Positioning System","authors":"Luis Garcia, U. Bielke, C. Neumann, Rainer Börret","doi":"10.20965/ijat.2023.p0305","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0305","url":null,"abstract":"This paper proposes a machine learning-based position prediction approach to determine the position of a light-emitting diode (LED) target using a new measuring system called the multi-aperture positioning system (MAPS). The measurement system is based on a photogrammetric approach using an aperture mask and a single camera sensor. To achieve high accuracy in position calculation, several complex algorithms with high computational complexity are used. The accuracy of the system is equal to or better than that of existing photogrammetric devices. We investigate whether a neural network (NN) can replace the algorithms currently used in the system software to increase the measurement frequency with similar accuracy. Simulated images are used to train the NN, while real images are used to measure performance. Previously, various algorithms were used to calculate the position of the target from the captured images. Our approach is to train an NN, using thousands of labeled images, to predict the position of the target from these images. We investigate whether systematic measurement errors can be avoided; not all factors affecting the measurement precision are yet known, can always be accurately determined, or change over time. When NNs are used, all information contained in the images is learned by the model, considering all influences present at the time of training. Results show that the trained NN can achieve similar performance to the previously used Gaussian algorithm in less time since no filters or other pre-processing of images are required. This factor directly affects the measurement frequency of the MAPS. The light spot center was detected with sub-pixel accuracy without systematic errors in contrast to some of the previously used algorithms. The simulation of the sensor images needs to be improved to investigate the full potential of the NN.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"2 1","pages":"305-313"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88232909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proposal of Simulation-Based Surgical Navigation and Development of Laparoscopic Surgical Simulator that Reflects Motion of Surgical Instruments in Real-World
Sayaka Shibuya, Noriyuki Shido, Ryosuke Shirai, Kazuya Sase, Koki Ebina, Xiaoshuai Chen, T. Tsujita, S. Komizunai, T. Senoo, A. Konno
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0262 | Int. J. Autom. Technol., pp. 262-276
This study proposes a simulation-based surgical navigation concept and describes the development of a laparoscopic surgical simulator that reflects the motion of surgical instruments in the real world. In the proposed simulation-based surgical navigation, the movements of the surgical instruments are captured by a motion capture system, and the movements of the real instruments are reflected in those of the virtual instruments in the simulation in real time. Contact between the virtual surgical instruments and the organ model is detected based on a signed distance field (SDF) constructed around the organ model. The deformations of organs caused by contact are calculated using a dynamic finite element method (FEM). The accuracy of the deformation calculation was verified using a cubic elastic object made of urethane resin; the average error in the deformation verification experiments was within 1 mm. Simulations using hepato-biliary-pancreatic finite element (FE) models were performed, and the computational cost of the simulation was validated. The time for one simulation loop with a hepato-biliary-pancreatic FE model of 3,225 elements and 1,663 nodes was 50 ms. The developed simulator can be applied to a simulation-based navigation system to update the states of organs in real time.
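As a rough illustration of the contact-detection step (not the authors' implementation), the sketch below queries a precomputed signed distance field sampled on a regular grid around the organ model: the field is negative inside the organ, so a non-positive value at an instrument tip signals contact. The nearest-neighbor lookup and the toy sphere geometry are simplifying assumptions.

```python
import numpy as np

def sdf_contact(points, sdf_grid, origin, spacing, tol=0.0):
    """Nearest-neighbour SDF lookup; d <= tol means the point touches or
    penetrates the surface (trilinear interpolation is the usual refinement)."""
    idx = np.round((points - origin) / spacing).astype(int)
    idx = np.clip(idx, 0, np.array(sdf_grid.shape) - 1)
    d = sdf_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return d <= tol, d

# Toy SDF: a sphere of radius 0.03 m centred in a 0.1 m cube, 64^3 samples
axis = np.linspace(0.0, 0.1, 64)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
sdf = np.sqrt((x - 0.05)**2 + (y - 0.05)**2 + (z - 0.05)**2) - 0.03

tips = np.array([[0.05, 0.05, 0.055],   # inside the sphere -> contact
                 [0.00, 0.00, 0.000]])  # far corner -> free space
hit, dist = sdf_contact(tips, sdf, origin=np.zeros(3), spacing=0.1 / 63)
print(hit)  # [ True False]
```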
{"title":"Proposal of Simulation-Based Surgical Navigation and Development of Laparoscopic Surgical Simulator that Reflects Motion of Surgical Instruments in Real-World","authors":"Sayaka Shibuya, Noriyuki Shido, Ryosuke Shirai, Kazuya Sase, Koki Ebina, Xiaoshuai Chen, T. Tsujita, S. Komizunai, T. Senoo, A. Konno","doi":"10.20965/ijat.2023.p0262","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0262","url":null,"abstract":"This study proposes simulation-based surgical navigation concept and describes the development of a laparoscopic surgical simulator that reflects the motion of surgical instruments in the real world. In the proposed simulation-based surgical navigation, movements of the surgical instruments are captured by a motion capture system, and the movements of the real surgical instruments are reflected in the movements of the virtual instruments in the simulation in real time. Contact of the virtual surgical instruments with organ model is detected based on the signed distance field (SDF) made around the organ model. The deformations of organs caused by contacts are calculated using dynamic finite element method (FEM). Using a cubic elastic object made of urethane resin, the accuracy of the calculation of the deformation was verified. The average error in the deformation verification experiments was within 1 mm. Simulations using hepato-biliary-pancreatic finite element (FE) models were performed, and computational costs of the simulation were validated. The time for one loop simulation with a hepato-biliary-pancreatic FE model of 3,225 elements and 1,663 nodes was 50 ms. The developed simulator can be applied to a simulation-based navigation system to update the states of organs in real time.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"24 1","pages":"262-276"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81602912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating Whole-Body Walking Motion from Inertial Measurement Units at Wrist and Heels Using Deep Learning
Yuji Kumano, S. Kanoga, Masataka Yamamoto, H. Takemura, M. Tada
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0217 | Int. J. Autom. Technol., pp. 217-225
A recurrent-neural-network-based deep-learning model was developed to estimate the three-axis joint angles of an entire body with 17 bones during walking from three inertial measurement units (IMUs): one on the left wrist and one on each heel. The model takes as input the acceleration and angular velocity of the previous 49 frames and the current frame. The architecture comprises two hidden layers (two long short-term memory layers) and a dense layer. The performance of the model was evaluated using the public National Institute of Advanced Industrial Science and Technology (AIST) Gait Database 2019. The root mean squared error of each joint angle was less than 12.28°. A comparison with the same model using IMUs at the pelvis and shanks revealed that the proposed model offers a good balance between measurement accuracy and ease of use for whole-body motion capture. Although the model was more accurate than previous models that estimate general whole-body motion from six IMUs, it was less accurate than a previous model that estimates only lower-limb motion during walking from three IMUs attached to the pelvis and shanks. Because the proposed model uses IMUs attached to the left wrist and heels, whole-body motion can be easily captured using a smartwatch and smart shoes.
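From the figures stated in the abstract (three IMUs giving 18 channels per frame, 50-frame windows of the previous 49 frames plus the current one, two LSTM layers followed by a dense layer, and 17 bones with three angles each), a plausible reconstruction of the model is sketched below; the hidden-layer width is an assumption.

```python
import torch
import torch.nn as nn

class JointAngleLSTM(nn.Module):
    def __init__(self, n_features=18, hidden=128, n_outputs=51):
        super().__init__()
        # 3 IMUs x (3-axis acceleration + 3-axis angular velocity) = 18 channels
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)  # 17 bones x 3 joint angles

    def forward(self, x):              # x: (batch, 50, 18) IMU window
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # angles for the current (last) frame

model = JointAngleLSTM()
window = torch.randn(8, 50, 18)  # dummy batch: 49 past frames + current frame
angles = model(window)           # (8, 51)
```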
{"title":"Estimating Whole-Body Walking Motion from Inertial Measurement Units at Wrist and Heels Using Deep Learning","authors":"Yuji Kumano, S. Kanoga, Masataka Yamamoto, H. Takemura, M. Tada","doi":"10.20965/ijat.2023.p0217","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0217","url":null,"abstract":"A recurrent-neural-network-based deep-learning model was developed to estimate the three-axis joint angles of an entire body with 17 bones during walking from three inertial measurement units (IMUs) — one each on the left wrist and heels. In this model, the acceleration and angular velocity of the previous 49 frames and current frame were considered as inputs. The architecture comprises two hidden layers (two long short-term memory layers) and a dense layer. The performance of the model was evaluated using the National Institute of Advanced Industrial Science and Technology (AIST) Gait Database 2019 public dataset. Consequently, the root mean squared error of each joint angle was less than 12.28°. A comparison of the estimation results of the same model with IMUs at the pelvis and shanks revealed that the proposed model is advantageous in terms of balanced measurement accuracy and ease of use in realizing whole-body motion capture. Although the accuracy of the model was better than those of previous models in estimating the general whole-body motion from six IMUs, it was worse than that of a previous model in estimating only the lower-limb motion from three IMUs attached to the pelvis and shanks during walking. In the proposed model, IMUs are attached to the left wrist and heels, and whole-body motion can be easily captured using a smartwatch and smart shoes.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"81 1","pages":"217-225"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80075811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Twin of Experience for Human-Robot Collaboration Through Virtual Reality
T. Inamura
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0284 | Int. J. Autom. Technol., pp. 284-291
The keyword “human digital twin” has received considerable attention in recent years, and information technology has been developed in healthcare and sports training systems to guide human behavior toward a better state. Viewed instead from the optimization of the production and maintenance processes of industrial products, which is the origin of the term “digital twin,” intelligent robot systems can be regarded as a mainstream application of the digital twin. In other words, assistive robots that support humans in their daily lives and improve their behavior require the integration of the human digital twin with the conventional object digital twin. However, integrating these two digital twins is not easy from the viewpoint of system integration. In addition, encouraging humans to change their behavior requires providing users with subjective and immersive experiences rather than simply displaying numerical information. This study reviews the current status and limitations of these digital twin technologies and proposes the concept of a virtual reality (VR) digital twin that integrates digital twins and VR toward assistive robotic systems. This will expand the experience of both humans and robots and open the way to robots that can better support our daily lives.
{"title":"Digital Twin of Experience for Human-Robot Collaboration Through Virtual Reality","authors":"T. Inamura","doi":"10.20965/ijat.2023.p0284","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0284","url":null,"abstract":"The keyword “human digital twin” has received considerable attention in recent years, and information technology has been developed in healthcare and sports training systems to guide human behavior to a better state. In contrast, from optimizing the production and maintenance processes of industrial products, which is the origin of the term “digital twin,” intelligent robot systems can be interpreted as a mainstream of digital twin. In other words, assistive robots that support humans in their daily lives and improve their life behavior require the integration of human digital twin and conventional object digital twin. However, integrating these two digital twins is not easy from the viewpoint of system integration. In addition, it is necessary to encourage humans to change their behavior to provide users with subjective and immersive experiences rather than simply displaying numerical information. This study reviews the current status and limitations of these digital twin technologies and proposes the concept of a virtual reality (VR) digital twin that integrates digital twins and VR toward assistive robotic systems. This will expand the experience of both humans and robots and open the way to the realization of robots that can better support our daily lives.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"143 1","pages":"284-291"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78197529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DhaibaWorks: A Software Platform for Human-Centered Cyber-Physical Systems
Y. Endo, Tsubasa Maruyama, M. Tada
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0292 | Int. J. Autom. Technol., pp. 292-304
In this paper, as a practical approach to building a human-centered cyber-physical system (CPS), we propose a software platform that integrates hardware and software components to realize a human digital twin (HDT), covering model construction, data acquisition, analysis, and intervention with respect to the physical load and physical capabilities of humans. Furthermore, as a case study of this platform in industrial applications, we introduce an example of a human-centered CPS in which humans and robots work together to achieve better human workability and production-line productivity.
{"title":"DhaibaWorks: A Software Platform for Human-Centered Cyber-Physical Systems","authors":"Y. Endo, Tsubasa Maruyama, M. Tada","doi":"10.20965/ijat.2023.p0292","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0292","url":null,"abstract":"In this paper, as a practical approach to building a human-centered cyber-physical system (CPS), we propose a software platform that integrates hardware and software materials to realize a human digital twin (HDT) including model construction, data acquisition, analysis, and intervention in terms of the physical load and physical capabilities of humans. Furthermore, as a case study of this platform in industrial applications, we introduce an example of a human-centered CPS in which humans and robots work together to realize better human workability and production line productivity within the system.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"36 1","pages":"292-304"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76258610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inflatable Humanoid Cybernetic Avatar for Physical Human-Robot Interaction
Ryuma Niiyama, M. Ikeda, Young ah Seong
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0277 | Int. J. Autom. Technol., pp. 277-283
In a digital twin, a humanoid robot can be the real-world counterpart of a simulated agent. In addition, a human, a virtual avatar, and an avatar robot might constitute digital triplets. We propose an inflatable cybernetic avatar (CA) with a humanoid upper body that can represent gestures using an inflatable structure. This inflatable CA is much lighter, safer, and cheaper than conventional humanoid robots and can be folded when deflated. These properties are ideal for physical human–robot interaction (pHRI) and allow real-time collection of human behavior through interaction. In the experiment, basic movements such as nodding and raising the arms were measured using motion capture systems. This paper also demonstrates the proposed inflatable CA in a hybrid event. In addition, we conducted an experiment to measure touch interactions using tactile sensors attached to the fabric of the inflatable part. A psychologically secure inflatable humanoid CA is a promising platform for physical interaction experiments.
{"title":"Inflatable Humanoid Cybernetic Avatar for Physical Human-Robot Interaction","authors":"Ryuma Niiyama, M. Ikeda, Young ah Seong","doi":"10.20965/ijat.2023.p0277","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0277","url":null,"abstract":"In a digital twin, a humanoid robot can be the counterpart of a simulated agent in the real world. In addition, a human, virtual avatar, and avatar robot might constitute digital triplets. We propose an inflatable cybernetic avatar (CA) with a humanoid upper body using an inflatable structure that can represent gestures. This inflatable CA is much lighter, safer, and cheaper than conventional humanoid robots and can be folded when deflated. These properties are ideal for physical human–robot interaction (pHRI) and allow real-time collection of human behavior through interaction. In the experiment, basic movements such as nodding and raising arms were measured using motion capture systems. This paper demonstrates the proposed inflatable CA in a hybrid event. We also conducted an experiment to measure the touch interactions using tactile sensors attached to the fabric of the inflatable part. A psychologically secure inflatable humanoid CA is a promising platform for physical interaction experiments.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"53 1","pages":"277-283"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79351434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of Perspective and Visuo-Tactile Feedback in Virtual Reality-Based Posture Learning
Ryo Hanashima, Takumi Tokuda, M. Mochimaru, J. Ohyama
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0248 | Int. J. Autom. Technol., pp. 248-261
Posture learning is required in rehabilitation and in sports such as yoga and martial arts. Virtual reality (VR) systems are being used to learn posture by superimposing the postures of the learner and instructor in cyberspace using avatars. In this study, we examined whether the presented perspective of the avatar (first-person vs. third-person) and the type of visuo-tactile feedback (tactile feedback on correct posture + visual feedback vs. tactile feedback on incorrect posture + visual feedback vs. visual feedback only) are effective for posture learning. The results of an experiment (N = 24) suggested that the third-person perspective may lead to more accurate learning of the head position than the first-person perspective. Visuo-tactile feedback was found to improve subjective ratings of the ease of learning, and giving tactile feedback when the body position is correct was more effective than giving it when the body position is incorrect. The sense of agency was maintained at a high level under all perspective and visuo-tactile feedback conditions and may have improved the learning accuracy of posture. The findings of this study are expected to contribute to the design of effective perspective and tactile presentation in VR-based motor learning.
{"title":"Effect of Perspective and Visuo-Tactile Feedback in Virtual Reality-Based Posture Learning","authors":"Ryo Hanashima, Takumi Tokuda, M. Mochimaru, J. Ohyama","doi":"10.20965/ijat.2023.p0248","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0248","url":null,"abstract":"Posture learning is required in rehabilitation and in sports such as yoga and martial arts. Virtual reality (VR) systems are being used to learn posture by superimposing the postures of the learner and instructor in cyberspace using avatars. In this study, we examined whether the presented perspective of the avatar (first-person vs. third-person perspective) and visuo-tactile feedback (tactile correct feedback + visual feedback vs. tactile incorrect feedback + visual feedback vs. visual only feedback) are effective for the posture learning. The results of an experiment (N = 24) suggested that use of the third-person perspective may result in accurate learning of the head position as compared with first-person perspective. Visuo-tactile feedback was found to improve the subjective rating on the ease of learning, while the presentation method in which tactile feedback is given when body position is correct was found to be more effective than tactile feedback given when body position is incorrect. The sense of agency was maintained at a high level under all conditions for perspective and visuo-tactile feedback and may have improved the learning accuracy of posture. The findings of this study are expected to contribute to the design of effective perspective and tactile presentation in VR-based motor learning.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"144 1","pages":"248-261"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75844114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion Measurement and Analysis for Functional Independence Measure
Shino Matsuura, Kazuhiko Hirata, Hiroaki Kimura, Yoshitaka Iwamoto, Makoto Takahashi, Y. Endo, M. Tada, Tsubasa Maruyama, Y. Kurita
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0237 | Int. J. Autom. Technol., pp. 237-247
An appropriate assessment of physical functionality is necessary after rehabilitation to determine the level of assistance required by the patient and the efficacy of rehabilitation. The effectiveness of rehabilitation can be determined by computing a functional independence measure (FIM) score. The FIM scoring process evaluates the amount of assistance associated with activities of daily living; however, it is influenced by evaluator subjectivity and can vary for the same patient assessed by different evaluators. Furthermore, it is time-consuming and laborious because of the large number of component items. Therefore, a new evaluation system that is easily implementable and based on objective criteria is needed. Several machine learning techniques have been suggested for objectively evaluating the progress of rehabilitation, and their efficacy has been proven. However, the FIM includes complex movement items, necessitating the evaluation of factors from multiple angles. In this study, a method for estimating FIM values using machine learning was investigated to evaluate the effectiveness of rehabilitation objectively. A simple exercise measurement experiment was conducted, and a musculoskeletal model was used to analyze the data and obtain movement and other mechanical indices, which were subsequently used as machine learning features. Based on the FIM values, an estimation experiment was conducted with three groups: independent, modified independent, and assisted. Random forest and logistic regression were used in conjunction with a support vector machine for FIM estimation. The highest estimation accuracy was approximately 0.9; however, the accuracy varied with the method and item, and the lowest was approximately 0.3. Statistical analysis showed clear differences in the indicators, with significant differences between the groups; these differences were considered to increase the accuracy of FIM estimation. Additionally, the accuracy of some items was improved by changing the features used: the best results were obtained when only the joint angle was used for two items, joint torque and muscle strength were used for seven items, and all indicators were used for two items. This suggests that a comprehensive evaluation, including joint torque and muscle strength, is effective for estimating the FIM score.
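The abstract names random forest, logistic regression, and a support vector machine as the estimators for the three assistance groups. A minimal scikit-learn sketch of such a comparison might look like the following; the feature matrix here is a random placeholder standing in for the joint-angle, joint-torque, and muscle-strength indices from the musculoskeletal analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((90, 12))    # placeholder mechanical indices per trial
y = rng.integers(0, 3, 90)  # 0 independent, 1 modified independent, 2 assisted

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("logistic regression", LogisticRegression(max_iter=1000)),
                  ("support vector machine", SVC())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```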
{"title":"Motion Measurement and Analysis for Functional Independence Measure","authors":"Shino Matsuura, Kazuhiko Hirata, Hiroaki Kimura, Yoshitaka Iwamoto, Makoto Takahashi, Y. Endo, M. Tada, Tsubasa Maruyama, Y. Kurita","doi":"10.20965/ijat.2023.p0237","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0237","url":null,"abstract":"An appropriate physical functionality status assessment is necessary after rehabilitation to determine the level of assistance required by the patient and the efficacy of rehabilitation. The effectiveness of rehabilitation can be determined by computing a functional independence measure (FIM) score. The FIM score measurement process evaluates the amount of assistance associated with activities of daily living; however, it is influenced by evaluator subjectivity and can vary for the same patient assessed by different evaluators. Furthermore, it is time-consuming and laborious because of the large number of component items. Therefore, a new evaluation system that is easily implementable and based on objective criteria is needed. Several machine learning techniques have been suggested for evaluating the progress of rehabilitation in an objective manner, and their efficacy has been proven. However, the FIM score includes complex movement items, necessitating the evaluation of factors from multiple angles. In this study, a method for estimating FIM values using machine learning was investigated to evaluate the effectiveness of rehabilitation objectively. A simple exercise measurement experiment was conducted, and a musculoskeletal model was used to analyze the data to obtain movement and other mechanical indices, and these were subsequently used as features of machine learning. Based on the FIM values, an estimation experiment was conducted in three groups: independent, modified independent, and assisted groups. The statistical approaches of random forest and logistic regression were used in conjunction with a support vector machine for FIM estimation. The highest accuracy was estimated to be approximately 0.9. However, the accuracy varied with each method and item; the lowest accuracy was approximately 0.3. Statistical analysis showed clear differences in the indicators, with significant differences between the groups. These differences were considered to increase the accuracy of FIM estimation. Additionally, the accuracy of some items was improved by changing the feature values used. The best results were obtained when only the joint angle was used for two items, joint torque and muscle strength were used for seven items, and all indicators were used for two items. This suggests that a comprehensive evaluation, including that of joint torque and muscle strength, is effective for estimating FIM score.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"48 1","pages":"237-247"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86564758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Human Digital Twin Technology
M. Tada, T. Inamura
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0205 | Int. J. Autom. Technol., p. 205
With the advancement of information technologies such as the Internet of Things and artificial intelligence, cyber-physical systems are being introduced into society. At the core of these systems is the digital twin, a computer model (twin) of a physical entity built in cyberspace for simulation-based prediction. Currently, digital twins mainly target artificial objects, such as aircraft engines and factories. If the concept can be extended to humans, it could lead to human-machine cooperative systems and health promotion services, thereby helping to solve social issues stemming from the aging of society. However, humans are the weakest link in such systems, and many technical problems, such as the measurement, modeling, and prediction of human behavior, remain to be solved before the human digital twin becomes a reality. This special issue contains nine papers on developing essential technologies for the human digital twin and on constructing human-machine systems for specific applications. The topics covered include learning algorithms, motion measurement and analysis techniques, human perception, system development, and platform software for system development. These clearly show that cross-disciplinary efforts are essential to the realization of the human digital twin. We thank the authors of the papers submitted for this special issue. We are confident that the information provided by the authors is suggestive and informative for specialists and non-specialists alike. We also sincerely appreciate the efforts of the reviewers; their contributions helped to make this special issue possible. We hope that this special issue will catalyze sharing across the boundaries of research fields along the path to realizing a human digital twin.
{"title":"Editorial: Human Digital Twin Technology","authors":"M. Tada, T. Inamura","doi":"10.20965/ijat.2023.p0205","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0205","url":null,"abstract":"With the advancement of information technologies such as the Internet of Things and artificial intelligence, cyber-physical systems are being introduced into society. At the core of these systems is the digital twin, a computer model (twin) of a physical entity built in cyberspace for simulation-based prediction.\u0000 Currently, the digital twin mainly targets artificial objects, such as aircraft engines and factories. However, if it can be extended to humans, it could lead to the realization of human-machine cooperative systems and health promotion services, thereby solving social issues stemming from the aging of society. However, humans are the weakest link in the system, and many technical problems remain to be solved, such as realizing the measurement, modeling, and prediction of human behavior, if the human digital twin is to become reality.\u0000 This special issue contains 9 papers on developing essential technologies for the human digital twin and constructing human-machine systems for specific applications. The topics covered include learning algorithms, motion measurement and analysis techniques, human perception, system development, and platform software for system development. These clearly show that cross-disciplinary efforts are essential to the realization of the human digital twin.\u0000 We thank the authors of the papers submitted for this special issue. We are confident that the information provided by the authors is suggestive and informative for both specialists and non-specialists alike. We also sincerely appreciate the efforts of the reviewers. Their contributions helped to make this special issue possible. We hope that this special issue will catalyze sharing across the boundaries of research fields along the path to realizing a human digital twin.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"2014 1","pages":"205"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82738642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion Analysis of the Torso and the Elbow During Picking and Placing Heavy Object on Shelf
Kaito Hara, Satoki Tsuichihara, Yasutake Takahashi
Pub Date: 2023-05-05 | DOI: 10.20965/ijat.2023.p0226 | Int. J. Autom. Technol., pp. 226-236
In recent years, labor shortages and workforce aging have become problematic. One solution to this problem is to introduce humanoid robots to perform human tasks. This research analyzes the posture and the preparatory posture of the torso and elbow in human motion to improve the motion planning of humanoid robots that utilize these redundant parts. We analyze the frontal tilt angle of the torso, the timing of the backward bending of the torso, and the lateral tilt angle of the elbow when humans apply force. In the experiments, we focused on picking heavy objects from and placing them on a shelf, which requires maintaining balance while exerting force, and examined how the movements changed with the weight of the objects. An optical motion capture system and a six-axis load cell were used to measure the motion. The subjects were asked to approach the shelf from a distance of 1 m and move a heavy object from the upper to the middle height of the shelf. The results showed that the heavier the object, the earlier the preparatory backward bending of the torso occurred before grasping, and the larger the forward tilt angle of the torso. Significant differences were observed in the time between backward bending and picking up the object, the distance from the shoulder to the wrist joint, and the tilt angle of the torso. We identified the posture and the time required to bend the torso for holding a heavier object in front of the shelf, considering the dynamic stability of the zero-moment point.
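As an illustration of one of the measured quantities, the snippet below computes a frontal (forward) torso tilt angle from two motion-capture landmarks; the landmark choice and the coordinate convention (x forward, z up) are assumptions, not the authors' exact definition.

```python
import numpy as np

def frontal_torso_tilt(mid_shoulder, mid_pelvis):
    """Angle (deg) between the pelvis-to-shoulder trunk vector and vertical,
    projected onto the sagittal plane; positive means a forward lean."""
    trunk = mid_shoulder - mid_pelvis
    return np.degrees(np.arctan2(trunk[0], trunk[2]))  # x forward, z up

shoulder = np.array([0.08, 0.0, 1.45])  # metres; slight forward lean
pelvis = np.array([0.00, 0.0, 1.00])
print(frontal_torso_tilt(shoulder, pelvis))  # ~10.1 deg forward
```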
{"title":"Motion Analysis of the Torso and the Elbow During Picking and Placing Heavy Object on Shelf","authors":"Kaito Hara, Satoki Tsuichihara, Yasutake Takahashi","doi":"10.20965/ijat.2023.p0226","DOIUrl":"https://doi.org/10.20965/ijat.2023.p0226","url":null,"abstract":"In recent years, labor shortages and workforce aging have become problematic. One solution to this problem is introducing humanoid robots to perform human tasks. This research is aimed at analyzing the posture and preparation posture of the torso and elbow in human motions for improving the motion planning of humanoid robots utilizing the redundant part. We analyze the frontal tilt angle of the torso, time of backward bending the torso, and lateral tilt angle of the elbow when humans apply force. In the experiments, we focused on picking and placing heavy objects on the shelf, which requires maintaining balance and exerting force, and confirmed the differences in movements by changing the weight of the objects. An optical motion capture system and a six-axis load cell were used for measuring the motion. The subjects were asked to approach the shelf from a distance of 1 m and move a heavy object from the upper to the middle height of the shelf. As a result, the heavier the object, the earlier the preparation posture of a backward bending of the torso before grasping. The heavier the object, the larger the tilt angle of the torso forward. Significant differences were observed between the time of backward bending and picking the object, distances from the shoulder to wrist joint, and tilt angle of the torso. We identified the posture and time required to bend the torso for holding a heavier object in front of the shelf, considering the dynamic stability of the Zero-Moment Point.","PeriodicalId":13583,"journal":{"name":"Int. J. Autom. Technol.","volume":"26 1","pages":"226-236"},"PeriodicalIF":0.0,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82809517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}