Visual place recognition from 3D laser LiDAR scans is one of the most active research areas in robotics. In particular, the learning and recognition of scene descriptors, such as the scan context descriptor, which maps a 3D point cloud to a 2D image, is a promising research direction. Although the scan context descriptor offers sufficiently high recognition performance, it is still costly image data that cannot be handled by low-capacity non-deep models. In this paper, we explore the task of compressing the scan context descriptor model while maintaining its recognition performance. To this end, the proposed approach slightly modifies an off-the-shelf convolutional neural network (CNN) classifier by replacing its softmax layer with a support vector machine (SVM). Experiments on the publicly available NCLT dataset validate the effectiveness of the proposed approach.
Minying Ye and Kanji Tanaka, "Improved Visual Robot Place Recognition of Scan-Context Descriptors by Combining with CNN and SVM," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1622.
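The abstract does not detail how the SVM head that replaces the softmax layer is trained. As a rough, hypothetical sketch of that final stage only, the following trains a linear SVM on fixed descriptor feature vectors with a Pegasos-style subgradient method; all data, function names, and hyperparameters here are invented for illustration:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient training of a linear SVM (hinge loss).
    X: list of feature vectors, y: labels in {-1, +1}."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    t = 0
    for _ in range(epochs):
        idx = list(range(len(X)))
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # regularization shrink applied every step
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:                   # hinge loss active: move toward the example
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def svm_predict(w, b, x):
    """Sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

In the paper's setting the feature vectors would be CNN activations of scan context images rather than the toy 2D points used here.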
In the trash-collection challenge of the Nakanoshima Robot Challenge, an autonomous robot must collect trash (bottles, cans, and bento boxes) scattered in a defined area within a time limit. One approach is to use machine learning to recognize the objects, move to the target location, and grasp them. An autonomous robot can reach the target position and posture by rotating on the spot at the starting point, moving in a straight line, and rotating on the spot at the destination, but each rotation requires stopping and restarting. To achieve faster movement, we implemented a smooth motion approach that avoids these intermediate stops by following a spline curve. When the training data previously generated by the authors in their laboratory were used for object recognition, the robot could not correctly recognize objects in the competition environment, where strong sunlight shines through glass, because of the resulting variations in brightness. To solve this problem, we added newly generated training data to YOLO, a deep-learning-based image-recognition algorithm, and retrained it to achieve object recognition under various conditions.
Naoya Mukai, Masato Suzuki, Tomokazu Takahashi, Yasushi Mae, Yasuhiko Arai, and S. Aoyagi, "Application of Object Grasping Using Dual-Arm Autonomous Mobile Robot—Path Planning by Spline Curve and Object Recognition by YOLO—," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1524.
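The abstract does not name the spline family used for the smooth motion. As a minimal sketch under that assumption, the following samples a smooth 2D path through waypoints with a Catmull-Rom spline, one common interpolating choice; waypoints and sampling density are invented:

```python
def catmull_rom_point(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom spline segment between p1 and p2 at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    def blend(a, b, c, d):
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t2
                      + (-a + 3 * b - 3 * c + d) * t3)
    return (blend(p0[0], p1[0], p2[0], p3[0]),
            blend(p0[1], p1[1], p2[1], p3[1]))

def smooth_path(waypoints, samples_per_segment=10):
    """Sample a smooth path that interpolates the given 2D waypoints.
    Endpoints are duplicated to supply boundary control points."""
    pts = [waypoints[0]] + list(waypoints) + [waypoints[-1]]
    path = []
    for i in range(len(pts) - 3):
        for s in range(samples_per_segment):
            path.append(catmull_rom_point(pts[i], pts[i + 1], pts[i + 2],
                                          pts[i + 3], s / samples_per_segment))
    path.append(waypoints[-1])
    return path
```

Because the spline interpolates every waypoint, the robot can follow it continuously instead of stopping to rotate at each corner.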
A self-localization method that can seamlessly switch between the positions and attitudes estimated by normal distributions transform (NDT) scan matching and by a real-time kinematic global navigation satellite system (RTK-GNSS) is developed. One issue in such a method is sharing global coordinates between the different estimators. Therefore, the three-dimensional environmental maps used for NDT scan matching are created in the planar Cartesian coordinate system used by the GNSS, so that the location, shape, and size of the actual terrain and geographic features are represented accurately. Consequently, seamless switching between the two methods enables a mobile robot to stably obtain accurate position and attitude estimates. An autonomous driving experiment using this self-localization method was conducted in the Tsukuba Challenge 2022, and the mobile robot completed a designated course of more than 2 km in an urban area.
T. Hasegawa, Haruki Miyoshi, and Shin’ichi Yuta, "Experimental Study of Seamless Switch Between GNSS- and LiDAR-Based Self-Localization," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1514.
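A minimal sketch of the switching idea, assuming both estimators report poses in the shared planar Cartesian frame; the reliability criteria used here (an RTK fix flag and an NDT fitness threshold) are invented for illustration, not taken from the paper:

```python
def select_pose(gnss_pose, gnss_fix_ok, ndt_pose, ndt_fitness,
                fitness_threshold=1.0):
    """Choose between an RTK-GNSS pose and an NDT scan-matching pose.
    Both poses must already be expressed in the same planar Cartesian
    frame, which is why the 3D maps are built in the GNSS plane
    coordinate system. Returns (pose, source)."""
    if gnss_fix_ok:
        return gnss_pose, "gnss"
    if ndt_fitness is not None and ndt_fitness < fitness_threshold:
        return ndt_pose, "ndt"
    # no reliable estimate; caller should stop or fall back to dead reckoning
    return None, "none"
```

Because the two estimates live in one frame, switching sources does not introduce a coordinate jump.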
In cooperative transport systems, multiple robots work together to transport objects that are difficult for a single robot to carry. Such multi-robot systems have been actively researched in recent years. However, when an object is grasped, misalignment occurs between the ideal and actual grasp positions. In an automatic transport system, such a grasping error causes an error in the trajectory of the object, significantly reducing transport efficiency. In this paper, a control system that enables robust cooperative transport using a model error compensator is proposed for a leader–follower system in which the transported object is the virtual leader and the followers are ideally arranged around it. The compensator adds robustness to the operation of a conventional cooperative transport system by preserving the ideal robot formation. The effectiveness of the proposed method was evaluated through cooperative transport experiments using two ideal formations for passing through a narrow entrance. The system could not pass through the entrance with the conventional method, whereas the system with the compensator passed through smoothly.
N. Matsunaga, Kazuhi Murata, and Hiroshi Okajima, "Robust Cooperative Transport System with Model Error Compensator Using Multiple Robots with Suction Cups," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1583.
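The paper's compensator design is not specified in the abstract. As a generic illustration of the model-error-compensation idea only, here is a first-order discrete-time sketch with invented dynamics and gain: a nominal model runs in parallel with the real plant, and the gap between nominal and measured outputs is fed back to correct the input.

```python
class ModelErrorCompensator:
    """Minimal sketch of a model error compensator (hypothetical gains):
    the correction drives the perturbed plant toward the behavior of
    the nominal model x[k+1] = a*x[k] + b*u[k]."""

    def __init__(self, a_nominal, b_nominal, gain):
        self.a, self.b, self.k = a_nominal, b_nominal, gain
        self.y_nominal = 0.0

    def correct(self, u, y_measured):
        # advance the nominal model with the uncorrected input
        self.y_nominal = self.a * self.y_nominal + self.b * u
        # add feedback proportional to the model-tracking error
        return u + self.k * (self.y_nominal - y_measured)
```

In the paper this role is played per robot so that the formation around the virtual leader is maintained despite grasp misalignment.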
Neng Chen, S. Suga, Masato Suzuki, Tomokazu Takahashi, Yasushi Mae, Yasuhiko Arai, S. Aoyagi
Many teams participating in robotics competitions achieve localization on a 2D map using adaptive Monte Carlo localization, an open-source robot operating system (ROS) package. However, outdoor environments often include non-level terrain such as slopes, and in indoor multilevel structures, the data representing different levels overlap on a 2D map. These factors can cause localization failures. To resolve this problem, we developed software that combines HDL localization, an open-source ROS package, with our own program, and used it to achieve localization on a 3D map. Furthermore, we observed during a competition event that a slope was erroneously recognized as a forward obstacle. To resolve this, we propose a method that corrects such erroneous obstacle recognition using a 2D laser range finder and the 3D map, and we confirm its validity in an experiment on a slope on a university campus.
Neng Chen, S. Suga, Masato Suzuki, Tomokazu Takahashi, Yasushi Mae, Yasuhiko Arai, and S. Aoyagi, "Proposal for Navigation System Using Three-Dimensional Maps—Self-Localization Using a 3D Map and Slope Detection Using a 2D Laser Range Finder and 3D Map," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1604.
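A simplified sketch of the slope/obstacle disambiguation idea, assuming the 3D map can supply a terrain-height function along the direction of travel; the geometry, tolerance, and function names are invented for illustration:

```python
import math

def classify_scan_point(robot_x, robot_z, sensor_height, angle, r,
                        terrain_height, tol=0.1):
    """Classify one 2D-LRF return as 'slope' or 'obstacle'.
    terrain_height(x) is the mapped ground height; a return whose
    endpoint lies on the mapped terrain surface is reclassified as
    traversable ground rather than an obstacle."""
    # beam endpoint in the vertical plane of travel
    px = robot_x + r * math.cos(angle)
    pz = robot_z + sensor_height + r * math.sin(angle)
    if abs(pz - terrain_height(px)) < tol:
        return "slope"      # the beam hit the mapped ground surface
    return "obstacle"
```

A return that matches the 3D map's terrain profile is thus not treated as a forward obstacle, which is the failure the paper reports for a plain 2D scan.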
Kazuma Yagi, Yitao Ho, A. Nagata, Takayuki Kiga, Masato Suzuki, Tomokazu Takahashi, Kazuyo Tsuzuki, S. Aoyagi, Yasuhiko Arai, Yasushi Mae
This paper proposes a method for recognizing the opened/closed states of automatic sliding glass doors so that a robot can move autonomously from outdoors to indoors and vice versa. The proposed method uses an RGB-D camera to extract the region of the automatic sliding glass doors and applies image recognition to determine whether the door is open or closed. The RGB-D camera also measures the distance between the opened or moving door frames, thereby facilitating the outdoor-to-indoor transition and vice versa. Several automatic sliding glass doors are investigated experimentally under different conditions to demonstrate the effectiveness of the proposed method.
Kazuma Yagi, Yitao Ho, A. Nagata, Takayuki Kiga, Masato Suzuki, Tomokazu Takahashi, Kazuyo Tsuzuki, S. Aoyagi, Yasuhiko Arai, and Yasushi Mae, "Detection and Measurement of Opening and Closing Automatic Sliding Glass Doors," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1503.
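A toy sketch of measuring the opening width from one row of the depth image; the depth threshold and pixel-to-metre calibration are invented, and the paper's actual measurement procedure is not detailed in the abstract:

```python
def door_opening_width(depth_row, indoor_depth_min, pixel_to_m):
    """Estimate the width of an automatic door opening from one row
    of a depth image. Pixels whose depth exceeds indoor_depth_min are
    assumed to see through the opening into the building; the widest
    contiguous run of such pixels is taken as the gap."""
    best = run = 0
    for d in depth_row:
        if d > indoor_depth_min:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best * pixel_to_m
```

Sampling this width over successive frames would distinguish a door that is open, closing, or closed.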
Ryota Jitsukawa, Hiroshi Kobayashi, Kenta Matsumoto, T. Hashimoto
Musculoskeletal disorders are common occupational diseases and have become a major social problem. Mechanization has been promoted as a solution. However, several tasks, such as fruit harvesting in orchards, still require manual labor, making the introduction of machinery difficult in many cases. Recently, from the viewpoint of worker protection and ergonomics, various wearable robots for work support have attracted attention. In Europe and the US, many industrial arm-lifting assistive devices that support overhead work performed while holding tools have been developed. However, most devices currently on the market are expensive relative to their assistive capabilities. Against this background, we developed three types of arm-lifting assistive devices with different concepts (an exoskeleton device using a gas spring, an exoskeleton device using McKibben-type artificial muscles, and an assist suit using rubber) to achieve inexpensive, high-power assistance. Furthermore, we comparatively verified the assistive effectiveness of each device.
Ryota Jitsukawa, Hiroshi Kobayashi, Kenta Matsumoto, and T. Hashimoto, "Development and Evaluation of Arm Lifting Assist Devices," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1675.
Semantic segmentation, which assigns a pixel-wise class label to each pixel of an input image, is expected to significantly improve the movement performance of autonomous robots. However, it is difficult to train a good classifier for a target application, and public large-scale datasets are often unsuitable; indeed, a classifier trained on Cityscapes is not sufficiently accurate for the Tsukuba Challenge. To generate an appropriate dataset for the target environment, we first constructed a semi-automatic labeling method using a colored point cloud obtained with a 3D scanner. Although it achieved some accuracy, it was not practical. Hence, we propose a novel method that creates images with shadows by rendering them in 3D space, to improve classification accuracy on real shadowed images, for which existing methods do not produce appropriate results. Experimental results on datasets captured around the Tsukuba City Hall demonstrate that the proposed method was superior when appropriate constraints were applied to shadow generation; the mIoU improved from 0.358 to 0.491 when test images were obtained at different locations.
Marin Wada, Yuriko Ueda, Junya Morioka, Miho Adachi, and Ryusuke Miyamoto, "Dataset Creation for Semantic Segmentation Using Colored Point Clouds Considering Shadows on Traversable Area," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1406.
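The mIoU figures quoted above follow the standard definition, mean intersection-over-union across classes; a minimal implementation over flat per-pixel label lists (toy labels for illustration):

```python
def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes.
    pred and gt are flat sequences of per-pixel class labels;
    classes absent from both prediction and ground truth are skipped."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0
```

An improvement from 0.358 to 0.491 on this metric means the per-class overlap between predicted and true regions grew substantially on average.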
Tomohiro Umetani, Seo Takeda, Ryusei Yamamoto, Yuki Shirakata
This paper describes the simple system integration of a mobile robot for the Nakanoshima Robot Challenge 2022. To improve the robot's operability at the start of a run, we addressed the problem of initial localization during experimental runs and the placement of virtual obstacles on the map used by the robot. This method reduces the time and manual operation required to estimate the initial position and orientation of a mobile robot in experiments. The robot is implemented using open-source products such as the robot operating system (ROS) and the i-Cart mini platform. Experimental runs in the Extra Challenge of the Nakanoshima Robot Challenge 2022 demonstrate the feasibility of the method.
Tomohiro Umetani, Seo Takeda, Ryusei Yamamoto, and Yuki Shirakata, "Simplified System Integration of Robust Mobile Robot for Initial Pose Estimation for the Nakanoshima Robot Challenge," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1532.
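The paper's own initialization procedure is not detailed in the abstract. One generic way to seed the initial pose automatically, sketched here under invented assumptions, is to take the position from the latest GNSS fix and the heading from the direction of travel between two fixes in a planar Cartesian frame:

```python
import math

def initial_pose_from_gnss(p_prev, p_now):
    """Estimate an initial 2D pose (x, y, yaw) from two successive GNSS
    fixes in a planar Cartesian frame: position from the latest fix,
    heading from the direction of travel. Hypothetical sketch only;
    not the paper's method."""
    x, y = p_now
    yaw = math.atan2(p_now[1] - p_prev[1], p_now[0] - p_prev[0])
    return x, y, yaw
```

Such an estimate could seed a particle-filter localizer in place of the manual initial-pose input the abstract says the method aims to reduce.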
In fruit cultivation, viticulture requires the longest working hours in extended-arm postures, much of which is performed standing, so that fatigue accumulates in the arms, shoulders, and legs: a tough working environment. In this study, we propose a power assist system that supports its users' extended-arm work while they move through vineyards. The proposed system consists mainly of a mobile robot, a power assist device for work, and a control system. The mobile robot has a rough-terrain tracked vehicle on each of its left and right sides, so that the user can sit between the two vehicles and be assisted by the power assist device installed on the robot. The device, driven by a single linear actuator through a linkage mechanism, retains the user's hand attitude angle while assisting flexion and extension of the shoulder, elbow, and carpometacarpal joints. We then verify by simulation the effects of the arrangement and lengths of the links on the carpometacarpal joint trajectories and on the hand attitude angles. Finally, to check the effectiveness of the proposed device, we conducted evaluation experiments on assumed grape-harvesting work and gibberellin treatment. As a result, we demonstrated its assistive effect from measured muscle activity, as well as its applicability to other kinds of work by altering its linkage structure and hand support part.
Hiroyuki Inoue and Hiroshi Shimura, "Development of Mobility Type Upper Limb Power Assist System —Mechanism and Design of Power Assist Device—," Journal of Robotics and Mechatronics, 2023-12-20, doi:10.20965/jrm.2023.p1629.