A robot hand-arm that can perform various tasks in cooperation with the unaffected arm could ease the daily lives of patients with a single upper-limb dysfunction. Because the patient's other arm functions normally, a smooth interaction between robot and patient is desirable. If the robot can move in response to the user's intentions and cooperate with the unaffected arm, even without detailed operation, it can effectively assist with daily tasks. This study aims to propose and develop a cybernic robot hand-arm with the following features: 1) input of user intention via bioelectrical signals from the paralyzed arm, the unaffected arm's motion, and voice; 2) autonomous control of support movements; 3) a control system that integrates voluntary and autonomous control by combining 1) and 2), thus allowing smooth work support in cooperation with the unaffected arm, reflecting intention as a part of the body; and 4) a learning function to provide work support across various tasks in daily use. We confirmed the feasibility and usefulness of the proposed system through a pilot study involving three patients. The system learned to support new tasks by working with the user through an operating function that does not require the involvement of the unaffected arm. The system divides the support actions into movement phases and learns the phase-shift conditions from the sensor information about the user's intention. After learning, the system autonomously performs learned support actions through voluntary phase shifts based on input about the user's intention via bioelectrical signals, the unaffected arm's motion, and voice, enabling smooth collaborative movement with the unaffected arm. Experiments with patients demonstrated that the system could learn and provide smooth work support in cooperation with the unaffected arm to successfully complete tasks they find difficult. Additionally, questionnaire responses subjectively confirmed that cooperative work according to the user's intention was achieved and that work time was within a feasible range for daily life. Furthermore, it was observed that participants who used bioelectrical signals from their paralyzed arm perceived the system as part of their body. We thus confirmed the feasibility and usefulness of various cooperative task supports using the proposed method.
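The abstract describes the core mechanism only at a high level: support actions are divided into movement phases, and the phase-shift conditions are learned from intention-related sensor inputs. As a minimal sketch of that idea, the code below implements a tiny phase controller whose transitions are triggered by intention cues; the phase names, sensor fields, and thresholds are hypothetical illustrations, not values from the paper.

```python
# Minimal sketch of a phase-based support controller (illustrative only).
# Phase names, sensor fields, and thresholds are hypothetical, not from the paper.
from dataclasses import dataclass, field

@dataclass
class IntentionInput:
    bioelectric_level: float  # normalized bioelectrical amplitude from the paralyzed arm
    arm_speed: float          # measured speed of the unaffected arm
    voice_command: str        # recognized voice keyword, "" if none

@dataclass
class PhaseController:
    # Ordered movement phases of one learned support action (hypothetical example: holding a jar).
    phases: tuple = ("approach", "grasp", "hold", "release")
    # Learned phase-shift conditions: for each phase, a predicate on the intention input.
    shift_rules: dict = field(default_factory=lambda: {
        "approach": lambda x: x.arm_speed < 0.05,         # unaffected arm has reached the object
        "grasp":    lambda x: x.bioelectric_level > 0.6,  # user signals "close hand" via the paralyzed arm
        "hold":     lambda x: x.voice_command == "open",  # voice triggers the release phase
    })
    current: int = 0

    def update(self, intention: IntentionInput) -> str:
        """Advance to the next phase when the learned shift condition is met."""
        phase = self.phases[self.current]
        rule = self.shift_rules.get(phase)
        if rule is not None and rule(intention) and self.current < len(self.phases) - 1:
            self.current += 1
        return self.phases[self.current]

controller = PhaseController()
print(controller.update(IntentionInput(0.1, 0.02, "")))  # -> "grasp"
```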
{"title":"Cybernic robot hand-arm that realizes cooperative work as a new hand-arm for people with a single upper-limb dysfunction.","authors":"Hiroaki Toyama, Hiroaki Kawamoto, Yoshiyuki Sankai","doi":"10.3389/frobt.2024.1455582","DOIUrl":"10.3389/frobt.2024.1455582","url":null,"abstract":"<p><p>A robot hand-arm that can perform various tasks with the unaffected arm could ease the daily lives of patients with a single upper-limb dysfunction. A smooth interaction between robot and patient is desirable since their other arm functions normally. If the robot can move in response to the user's intentions and cooperate with the unaffected arm, even without detailed operation, it can effectively assist with daily tasks. This study aims to propose and develop a cybernic robot hand-arm with the following features: 1) input of user intention via bioelectrical signals from the paralyzed arm, the unaffected arm's motion, and voice; 2) autonomous control of support movements; 3) a control system that integrates voluntary and autonomous control by combining 1) and 2) to thus allow smooth work support in cooperation with the unaffected arm, reflecting intention as a part of the body; and 4) a learning function to provide work support across various tasks in daily use. We confirmed the feasibility and usefulness of the proposed system through a pilot study involving three patients. The system learned to support new tasks by working with the user through an operating function that does not require the involvement of the unaffected arm. The system divides the support actions into movement phases and learns the phase-shift conditions from the sensor information about the user's intention. After learning, the system autonomously performs learned support actions through voluntary phase shifts based on input about the user's intention via bioelectrical signals, the unaffected arm's motion, and by voice, enabling smooth collaborative movement with the unaffected arm. Experiments with patients demonstrated that the system could learn and provide smooth work support in cooperation with the unaffected arm to successfully complete tasks they find difficult. Additionally, the questionnaire subjectively confirmed that cooperative work according to the user's intention was achieved and that work time was within a feasible range for daily life. Furthermore, it was observed that participants who used bioelectrical signals from their paralyzed arm perceived the system as part of their body. We thus confirmed the feasibility and usefulness of various cooperative task supports using the proposed method.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1455582"},"PeriodicalIF":2.9,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11535860/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-21. eCollection Date: 2024-01-01. DOI: 10.3389/frobt.2024.1453194
Dalia Braverman-Jaiven, Luigi Manfredi
Inflammatory bowel disease (IBD) causes chronic inflammation of the colon and digestive tract, and it can be classified into Crohn's disease (CD) and ulcerative colitis (UC). IBD is more prevalent in Europe and North America; however, since the beginning of the 21st century it has been increasing in South America, Asia, and Africa, leading to its consideration as a worldwide problem. Optical colonoscopy is one of the crucial tests in diagnosing and assessing the progression and prognosis of IBD, as it allows real-time optical visualization of the colonic wall and ileum and allows for the collection of tissue samples. The accuracy of colonoscopy procedures depends on the expertise and ability of the endoscopists. Therefore, algorithms based on Deep Learning (DL) and Convolutional Neural Networks (CNNs) for colonoscopy images and videos are growing in popularity, especially for the detection and classification of colorectal polyps. The performance of these systems depends on the quality and quantity of the data used for training. Several datasets are publicly available for endoscopy images and videos, but most of them specialize solely in polyps. The use of DL algorithms to detect IBD is still in its infancy; most studies focus on assessing the severity of UC. As artificial intelligence (AI) grows in popularity, there is growing interest in the use of these algorithms for diagnosing and classifying IBD and managing its progression. To tackle this, more annotated colonoscopy images and videos will be required for the training of new and more reliable AI algorithms. This article discusses the current challenges in the early detection of IBD, focusing on the available AI algorithms and databases, and on the challenges ahead to improve the detection rate.
{"title":"Advancements in the use of AI in the diagnosis and management of inflammatory bowel disease.","authors":"Dalia Braverman-Jaiven, Luigi Manfredi","doi":"10.3389/frobt.2024.1453194","DOIUrl":"10.3389/frobt.2024.1453194","url":null,"abstract":"<p><p>Inflammatory bowel disease (IBD) causes chronic inflammation of the colon and digestive tract, and it can be classified as Crohn's disease (CD) and Ulcerative colitis (UC). IBD is more prevalent in Europe and North America, however, since the beginning of the 21st century it has been increasing in South America, Asia, and Africa, leading to its consideration as a worldwide problem. Optical colonoscopy is one of the crucial tests in diagnosing and assessing the progression and prognosis of IBD, as it allows a real-time optical visualization of the colonic wall and ileum and allows for the collection of tissue samples. The accuracy of colonoscopy procedures depends on the expertise and ability of the endoscopists. Therefore, algorithms based on Deep Learning (DL) and Convolutional Neural Networks (CNN) for colonoscopy images and videos are growing in popularity, especially for the detection and classification of colorectal polyps. The performance of this system is dependent on the quality and quantity of the data used for training. There are several datasets publicly available for endoscopy images and videos, but most of them are solely specialized in polyps. The use of DL algorithms to detect IBD is still in its inception, most studies are based on assessing the severity of UC. As artificial intelligence (AI) grows in popularity there is a growing interest in the use of these algorithms for diagnosing and classifying the IBDs and managing their progression. To tackle this, more annotated colonoscopy images and videos will be required for the training of new and more reliable AI algorithms. This article discusses the current challenges in the early detection of IBD, focusing on the available AI algorithms, and databases, and the challenges ahead to improve the detection rate.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1453194"},"PeriodicalIF":2.9,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11532194/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142577084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-18. eCollection Date: 2024-01-01. DOI: 10.3389/frobt.2024.1450266
Ranjai Baidya, Heon Jeong
The use of autonomous Unmanned Aerial Vehicles (UAVs) has been increasing, and the autonomy of these systems and their capability to deal with uncertainties are crucial. Autonomous landing is pivotal to the success of an autonomous UAV mission. This paper presents an autonomous landing system for quadrotor UAVs able to perform a smooth landing even in undesirable conditions, such as obstruction by obstacles in and around the designated landing area, or the inability to identify, or absence of, a visual marker establishing that area. We have integrated version 5 of You Only Look Once (YOLOv5), DeepSORT, the Euclidean distance transform, and a Proportional-Integral-Derivative (PID) controller to strengthen the robustness of the overall system. While the YOLOv5 model is trained to identify the visual marker of the landing area and some common obstacles like people, cars, and trees, the DeepSORT algorithm keeps track of the identified objects. Similarly, using the detections of the identified objects and the Euclidean distance transform, an open space without any obstacles to land in can be identified if necessary. Finally, the PID controller generates appropriate movement values for the UAV using the visual cues of the target landing area and the obstacles. To validate the overall system without risking the safety of the people involved, initial tests and a software-based simulation are performed before executing the tests in real life. A full-blown hardware system with an autonomous landing system is then built and tested in real life. The designed system is tested in various scenarios to verify its effectiveness. The code is available at this repository: https://github.com/rnjbdya/Vision-based-UAV-autonomous-landing.
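The abstract names the building blocks (YOLOv5 detection, DeepSORT tracking, the Euclidean distance transform for finding open space, and a PID controller) without implementation detail; the authors' code is in the linked repository. Independently of that repository, the sketch below illustrates two of the generic steps: picking the point farthest from any detected obstacle via the Euclidean distance transform, and a simple PID correction toward it. The image size, gains, and obstacle mask are arbitrary assumptions.

```python
# Illustrative sketch (not from the authors' repository): choose an obstacle-free
# landing point with the Euclidean distance transform and steer toward it with PID.
import numpy as np
from scipy.ndimage import distance_transform_edt

def pick_landing_point(obstacle_mask: np.ndarray) -> tuple[int, int]:
    """obstacle_mask: boolean image, True where an obstacle (person, car, tree) was detected.
    Returns the pixel farthest from any obstacle, i.e. the centre of the largest open area."""
    free_space = ~obstacle_mask
    dist = distance_transform_edt(free_space)  # distance of each free pixel to the nearest obstacle
    return np.unravel_index(np.argmax(dist), dist.shape)

class PID:
    def __init__(self, kp=0.4, ki=0.0, kd=0.1, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float) -> float:
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: centre the UAV over the selected landing point (gains and image size are arbitrary).
mask = np.zeros((240, 320), dtype=bool)
mask[100:140, 150:200] = True              # a detected obstacle
target_row, target_col = pick_landing_point(mask)
pid_x = PID()
velocity_x = pid_x.step(target_col - 160)  # horizontal pixel offset from the image centre
```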
{"title":"Simulation and real-life implementation of UAV autonomous landing system based on object recognition and tracking for safe landing in uncertain environments.","authors":"Ranjai Baidya, Heon Jeong","doi":"10.3389/frobt.2024.1450266","DOIUrl":"https://doi.org/10.3389/frobt.2024.1450266","url":null,"abstract":"<p><p>The use of autonomous Unmanned Aerial Vehicles (UAVs) has been increasing, and the autonomy of these systems and their capabilities in dealing with uncertainties is crucial. Autonomous landing is pivotal for the success of an autonomous mission of UAVs. This paper presents an autonomous landing system for quadrotor UAVs with the ability to perform smooth landing even in undesirable conditions like obstruction by obstacles in and around the designated landing area and inability to identify or the absence of a visual marker establishing the designated landing area. We have integrated algorithms like version 5 of You Only Look Once (YOLOv5), DeepSORT, Euclidean distance transform, and Proportional-Integral-Derivative (PID) controller to strengthen the robustness of the overall system. While the YOLOv5 model is trained to identify the visual marker of the landing area and some common obstacles like people, cars, and trees, the DeepSORT algorithm keeps track of the identified objects. Similarly, using the detection of the identified objects and Euclidean distance transform, an open space without any obstacles to land could be identified if necessary. Finally, the PID controller generates appropriate movement values for the UAV using the visual cues of the target landing area and the obstacles. To warrant the validity of the overall system without risking the safety of the involved people, initial tests are performed, and a software-based simulation is performed before executing the tests in real life. A full-blown hardware system with an autonomous landing system is then built and tested in real life. The designed system is tested in various scenarios to verify the effectiveness of the system. The code is available at this repository: https://github.com/rnjbdya/Vision-based-UAV-autonomous-landing.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1450266"},"PeriodicalIF":2.9,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11551718/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-18. eCollection Date: 2024-01-01. DOI: 10.3389/frobt.2024.1454923
Zara Mirmalek, Nicole A Raineault
Conducting sea-going ocean science no longer needs to be limited to the number of berths on a ship, given that telecommunications, computing, and networking technologies onboard ships have become familiar mechanisms for expanding scientists' reach from onshore. The oceanographic community routinely works with remotely operated vehicles (ROVs) and pilots to access real-time video and data from the deep sea while onboard a ship. The extension of using an ROV and its host vessel's live-streaming capabilities has been popularized for almost three decades as a telepresence technology. Telepresence-enabled vessels with ROVs have been employed for science, education, and outreach, giving a greater number of communities viewing access to ocean science. However, the slower development of technologies and social processes enabling sustained real-time involvement between scientists on-ship and onshore undermines the potential for broader access, which limits the possibility of increasing inclusivity and discoveries through a diversity of knowledge and capabilities. This article reviews ocean scientists' use of telepresence for ROV-based deep-sea research and funded studies of telepresence capabilities. The authors summarize these studies' findings and the conditions that lead to defining the use of telepresence-enabled vessels for "remote science at sea." The authors define remote science at sea as a type of ocean expedition, an additional capability, not a replacement for all practices by which scientists conduct ocean research. Remote science for ocean research is an at-sea expedition directed by a distributed science team working together from at least two locations (on-ship and onshore) to complete its science objectives, for which primary data are acquired by robotic technologies, with connectivity supported by a high-bandwidth satellite and the telepresence-enabled ship's technologies so that the science team can be actively engaged before, during, and after dives across worksites. The growth of productive ocean expeditions with remote science is met with social, technical, and logistical challenges that impede the ability of remote scientists to succeed. In this article, the authors review telepresence-enabled ocean science, define and situate the adjoined model of remote science at sea, and discuss some infrastructural, technological, and social considerations for conducting and further developing remote science at sea.
{"title":"Remote science at sea with remotely operated vehicles.","authors":"Zara Mirmalek, Nicole A Raineault","doi":"10.3389/frobt.2024.1454923","DOIUrl":"10.3389/frobt.2024.1454923","url":null,"abstract":"<p><p>Conducting sea-going ocean science no longer needs to be limited to the number of berths on a ship given that telecommunications, computing, and networking technologies onboard ships have become familiar mechanisms for expanding scientists' reach from onshore. The oceanographic community routinely works with remotely operated vehicles (ROVs) and pilots to access real-time video and data from the deep sea, while onboard a ship. The extension of using an ROV and its host vessel's live-streaming capabilities has been popularized for almost 3 decades as a telepresence technology. Telepresence-enabled vessels with ROVs have been employed for science, education, and outreach, giving a greater number of communities viewing access to ocean science. However, the slower development of technologies and social processes enabling sustained real-time involvement between scientists on-ship and onshore undermines the potential for broader access, which limits the possibility of increasing inclusivity and discoveries through a diversity of knowledge and capabilities. This article reviews ocean scientists' use of telepresence for ROV-based deep-sea research and funded studies of telepresence capabilities. The authors summarize these studies findings and conditions that lead to defining the use of telepresence-enabled vessels for \"remote science at sea.\" Authors define remote science at sea as a type of ocean expedition, an additional capability, not a replacement for all practices by which scientists conduct ocean research. Remote science for ocean research is an expedition at-sea directed by a distributed science team working together from at least two locations (on-ship and onshore) to complete their science objectives for which primary data is acquired by robotic technologies, with connectivity supported by a high-bandwidth satellite and the telepresence-enabled ship's technologies to support the science team actively engaged before, during, and after dives across worksites. The growth of productive ocean expeditions with remote science is met with social, technical, and logistical challenges that impede the ability of remote scientists to succeed. In this article, authors review telepresence-enabled ocean science, define and situate the adjoined model of remote science at sea, and some infrastructural, technological and social considerations for conducting and further developing remote science at sea.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1454923"},"PeriodicalIF":2.9,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527704/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142570039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-17. eCollection Date: 2024-01-01. DOI: 10.3389/frobt.2024.1346714
Shiva Hanifi, Elisa Maiettini, Maria Lombardi, Lorenzo Natale
This research report introduces a learning system designed to detect the object that humans are gazing at, using solely visual feedback. By incorporating face detection, human attention prediction, and online object detection, the system enables the robot to perceive and interpret human gaze accurately, thereby facilitating the establishment of joint attention with human partners. Additionally, a novel dataset collected with the humanoid robot iCub is introduced, comprising more than 22,000 images from ten participants gazing at different annotated objects. This dataset serves as a benchmark for human gaze estimation in table-top human-robot interaction (HRI) contexts. In this work, we use it to assess the proposed pipeline's performance and examine each component's effectiveness. Furthermore, the developed system is deployed on the iCub and showcases its functionality. The results demonstrate the potential of the proposed approach as a first step to enhancing social awareness and responsiveness in social robotics. This advancement can enhance assistance and support in collaborative scenarios, promoting more efficient human-robot collaborations.
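The report describes combining human attention prediction with online object detection to find the gazed-at object, but the combination rule is not given here. Below is a minimal sketch under the assumption that the attention model outputs a gaze heatmap and the detector outputs labelled bounding boxes: score each box by the mean heatmap value inside it and return the best-scoring label. All names and data are illustrative, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): combine a predicted gaze
# heatmap with detected object boxes to decide which object the human is looking at.
import numpy as np

def gazed_object(heatmap: np.ndarray, boxes: list, labels: list) -> str:
    """heatmap: HxW gaze-attention map (higher = more likely gaze target).
    boxes: object detections as (x_min, y_min, x_max, y_max) in pixel coordinates."""
    scores = []
    for (x0, y0, x1, y1) in boxes:
        region = heatmap[y0:y1, x0:x1]
        scores.append(region.mean() if region.size else 0.0)
    return labels[int(np.argmax(scores))]

# Hypothetical example with two detected objects on the table.
heatmap = np.zeros((480, 640))
heatmap[200:260, 400:480] = 1.0                      # attention concentrated on the right
boxes = [(100, 180, 200, 280), (390, 190, 490, 270)]
labels = ["mug", "toy car"]
print(gazed_object(heatmap, boxes, labels))          # -> "toy car"
```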
{"title":"A pipeline for estimating human attention toward objects with on-board cameras on the iCub humanoid robot.","authors":"Shiva Hanifi, Elisa Maiettini, Maria Lombardi, Lorenzo Natale","doi":"10.3389/frobt.2024.1346714","DOIUrl":"10.3389/frobt.2024.1346714","url":null,"abstract":"<p><p>This research report introduces a learning system designed to detect the object that humans are gazing at, using solely visual feedback. By incorporating face detection, human attention prediction, and online object detection, the system enables the robot to perceive and interpret human gaze accurately, thereby facilitating the establishment of joint attention with human partners. Additionally, a novel dataset collected with the humanoid robot iCub is introduced, comprising more than 22,000 images from ten participants gazing at different annotated objects. This dataset serves as a benchmark for human gaze estimation in table-top human-robot interaction (HRI) contexts. In this work, we use it to assess the proposed pipeline's performance and examine each component's effectiveness. Furthermore, the developed system is deployed on the iCub and showcases its functionality. The results demonstrate the potential of the proposed approach as a first step to enhancing social awareness and responsiveness in social robotics. This advancement can enhance assistance and support in collaborative scenarios, promoting more efficient human-robot collaborations.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1346714"},"PeriodicalIF":2.9,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11524796/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-17. eCollection Date: 2024-01-01. DOI: 10.3389/frobt.2024.1441312
Siavash Mahmoudi, Amirreza Davar, Pouya Sohrabipour, Ramesh Bahadur Bist, Yang Tao, Dongyi Wang
Imitation learning (IL), a burgeoning frontier in machine learning, holds immense promise across diverse domains. In recent years, its integration into robotics has sparked significant interest, offering substantial advancements in autonomous control processes. This paper presents an exhaustive survey focusing on the implementation of imitation learning techniques in agricultural robotics. The survey rigorously examines varied research endeavors utilizing imitation learning to address pivotal agricultural challenges. Methodologically, this survey comprehensively investigates multifaceted aspects of imitation learning applications in agricultural robotics. The survey encompasses the identification of agricultural tasks that can potentially be addressed through imitation learning, detailed analysis of specific models and frameworks, and a thorough assessment of performance metrics employed in the surveyed studies. Additionally, it includes a comparative analysis between imitation learning techniques and conventional control methodologies in the realm of robotics. The findings derived from this survey unveil profound insights into the applications of imitation learning in agricultural robotics. These methods are highlighted for their potential to significantly improve task execution in dynamic and high-dimensional action spaces prevalent in agricultural settings, such as precision farming. Despite promising advancements, the survey discusses considerable challenges in data quality, environmental variability, and computational constraints that IL must overcome. The survey also addresses the ethical and social implications of implementing such technologies, emphasizing the need for robust policy frameworks to manage the societal impacts of automation. These findings hold substantial implications, showcasing the potential of imitation learning to revolutionize processes in agricultural robotics. This research significantly contributes to envisioning innovative applications and tools within the agricultural robotics domain, promising heightened productivity and efficiency in robotic agricultural systems. It underscores the potential for remarkable enhancements in various agricultural processes, signaling a transformative trajectory for the sector, particularly in the realm of robotics and autonomous systems.
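Since the survey spans many imitation-learning formulations, a concrete reference point may help: the snippet below sketches behavioural cloning, the simplest form of imitation learning, in which a policy is fit by supervised regression from observations to recorded expert actions. The feature dimensions and data are invented for illustration and are not drawn from any surveyed study.

```python
# Minimal behavioural-cloning sketch (the simplest imitation-learning setting):
# learn a policy that maps observations to recorded expert actions.
# Data and dimensions are illustrative, not from any surveyed study.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
observations = rng.normal(size=(500, 8))                         # e.g. crop-row camera/lidar features
expert_actions = observations @ rng.normal(size=(8, 2)) * 0.1    # e.g. steering and speed commands

policy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
policy.fit(observations, expert_actions)                          # supervised regression onto the expert's actions

new_observation = rng.normal(size=(1, 8))
predicted_action = policy.predict(new_observation)                # the policy imitates the demonstrator
```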
{"title":"Leveraging imitation learning in agricultural robotics: a comprehensive survey and comparative analysis.","authors":"Siavash Mahmoudi, Amirreza Davar, Pouya Sohrabipour, Ramesh Bahadur Bist, Yang Tao, Dongyi Wang","doi":"10.3389/frobt.2024.1441312","DOIUrl":"10.3389/frobt.2024.1441312","url":null,"abstract":"<p><p>Imitation learning (IL), a burgeoning frontier in machine learning, holds immense promise across diverse domains. In recent years, its integration into robotics has sparked significant interest, offering substantial advancements in autonomous control processes. This paper presents an exhaustive insight focusing on the implementation of imitation learning techniques in agricultural robotics. The survey rigorously examines varied research endeavors utilizing imitation learning to address pivotal agricultural challenges. Methodologically, this survey comprehensively investigates multifaceted aspects of imitation learning applications in agricultural robotics. The survey encompasses the identification of agricultural tasks that can potentially be addressed through imitation learning, detailed analysis of specific models and frameworks, and a thorough assessment of performance metrics employed in the surveyed studies. Additionally, it includes a comparative analysis between imitation learning techniques and conventional control methodologies in the realm of robotics. The findings derived from this survey unveil profound insights into the applications of imitation learning in agricultural robotics. These methods are highlighted for their potential to significantly improve task execution in dynamic and high-dimensional action spaces prevalent in agricultural settings, such as precision farming. Despite promising advancements, the survey discusses considerable challenges in data quality, environmental variability, and computational constraints that IL must overcome. The survey also addresses the ethical and social implications of implementing such technologies, emphasizing the need for robust policy frameworks to manage the societal impacts of automation. These findings hold substantial implications, showcasing the potential of imitation learning to revolutionize processes in agricultural robotics. This research significantly contributes to envisioning innovative applications and tools within the agricultural robotics domain, promising heightened productivity and efficiency in robotic agricultural systems. It underscores the potential for remarkable enhancements in various agricultural processes, signaling a transformative trajectory for the sector, particularly in the realm of robotics and autonomous systems.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1441312"},"PeriodicalIF":2.9,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11524802/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142559144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soft robots have been increasingly utilized as sophisticated tools in physical rehabilitation, particularly for assisting patients with neuromotor impairments. However, many soft robots for rehabilitation applications are characterized by limitations such as slow response times, restricted range of motion, and low output force. There are also limited studies on the precise position and force control of wearable soft actuators. Furthermore, few studies articulate how bellows-structured actuator designs quantitatively contribute to the robots' capability. This study introduces a paradigm of upper-limb soft actuator design. This paradigm comprises two actuators: the Lobster-Inspired Silicone Pneumatic Robot (LISPER) for the elbow and the Scallop-Shaped Pneumatic Robot (SCASPER) for the shoulder. LISPER is characterized by higher bandwidth, increased output force/torque, and high linearity. SCASPER is characterized by high output force/torque and simplified fabrication processes. Comprehensive analytical models that describe the relationship between pressure, bending angles, and output force for both actuators are presented so that the geometric configuration of the actuators can be set to modify the range of motion and output forces. A preliminary test on a dummy arm is conducted to test the capability of the actuators.
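The analytical pressure-angle-force models mentioned in the abstract are not reproduced here, so the following is only an illustrative calibration sketch: fitting an empirical pressure-to-bending-angle curve from hypothetical bench measurements and inverting it to command a desired angle. The data, polynomial form, and pressure range are assumptions, not the paper's model.

```python
# Illustrative calibration sketch (hypothetical data, not the paper's analytical model):
# empirically fit the pressure-to-bending-angle relationship of a bellows-type actuator.
import numpy as np

pressure_kpa   = np.array([0, 20, 40, 60, 80, 100])       # hypothetical input pressures
bend_angle_deg = np.array([0, 11, 24, 39, 55, 72])        # hypothetical measured bending angles

coeffs = np.polyfit(pressure_kpa, bend_angle_deg, deg=2)  # quadratic fit of angle(p)
angle_model = np.poly1d(coeffs)

def pressure_for_angle(target_deg: float) -> float:
    """Invert the fitted model numerically to command a desired bending angle."""
    candidates = np.linspace(0, 100, 1001)
    return float(candidates[np.argmin(np.abs(angle_model(candidates) - target_deg))])

print(round(pressure_for_angle(45.0), 1))                 # pressure estimate for a 45-degree bend
```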
{"title":"Novel bio-inspired soft actuators for upper-limb exoskeletons: design, fabrication and feasibility study.","authors":"Haiyun Zhang, Gabrielle Naquila, Junghyun Bae, Zonghuan Wu, Ashwin Hingwe, Ashish Deshpande","doi":"10.3389/frobt.2024.1451231","DOIUrl":"10.3389/frobt.2024.1451231","url":null,"abstract":"<p><p>Soft robots have been increasingly utilized as sophisticated tools in physical rehabilitation, particularly for assisting patients with neuromotor impairments. However, many soft robotics for rehabilitation applications are characterized by limitations such as slow response times, restricted range of motion, and low output force. There are also limited studies on the precise position and force control of wearable soft actuators. Furthermore, not many studies articulate how bellow-structured actuator designs quantitatively contribute to the robots' capability. This study introduces a paradigm of upper limb soft actuator design. This paradigm comprises two actuators: the Lobster-Inspired Silicone Pneumatic Robot (LISPER) for the elbow and the Scallop-Shaped Pneumatic Robot (SCASPER) for the shoulder. LISPER is characterized by higher bandwidth, increased output force/torque, and high linearity. SCASPER is characterized by high output force/torque and simplified fabrication processes. Comprehensive analytical models that describe the relationship between pressure, bending angles, and output force for both actuators were presented so the geometric configuration of the actuators can be set to modify the range of motion and output forces. The preliminary test on a dummy arm is conducted to test the capability of the actuators.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1451231"},"PeriodicalIF":2.9,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11521781/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-15. eCollection Date: 2024-01-01. DOI: 10.3389/frobt.2024.1505171
Floris Ernst, Jonas Osburg, Ludger Tüshaus
[This corrects the article DOI: 10.3389/frobt.2024.1405169.].
{"title":"Corrigendum: SonoBox: development of a robotic ultrasound tomograph for the ultrasound diagnosis of paediatric forearm fractures.","authors":"Floris Ernst, Jonas Osburg, Ludger Tüshaus","doi":"10.3389/frobt.2024.1505171","DOIUrl":"https://doi.org/10.3389/frobt.2024.1505171","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.3389/frobt.2024.1405169.].</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1505171"},"PeriodicalIF":2.9,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11518681/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-15. eCollection Date: 2024-01-01. DOI: 10.3389/frobt.2024.1414853
Maya Dimitrova, Neda Chehlarova, Anastas Madzharov, Aleksandar Krastev, Ivan Chavdarov
A mini-review of the literature supporting the view of the psychophysical origins of some user acceptance effects of cyber-physical systems (CPSs) is presented and discussed in this paper. Psychophysics implies the existence of a lawful functional dependence between some aspect or dimension of the stimulation from the environment entering the human senses and the psychological effect produced by this stimulation, as reflected in subjective responses. Several psychophysical models are discussed in this mini-review, aiming to support the view that the observed effects of reactance to a robot or the uncanny valley phenomenon are essentially the same subjective effects at different intensities. Justification is provided that human responses to technologically and socially ambiguous stimuli obey some regularity, which can be considered a lawful dependence in a psychophysical sense. The main conclusion is based on the evidence that psychophysics can provide useful, helpful, and parsimonious design recommendations for scenarios with CPSs for social applications.
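The abstract does not name a specific model; a classic example of the kind of lawful psychophysical dependence it invokes is Stevens' power law, in which perceived magnitude grows as a power function of stimulus intensity, with an exponent that depends on the sensory modality (below 1 for compressive dimensions such as brightness, above 1 for expansive ones such as electric shock).

```latex
% Stevens' power law: a lawful dependence between stimulus intensity and perceived magnitude.
% \psi is the perceived magnitude, I the physical stimulus intensity, k a scaling constant,
% and a a modality-dependent exponent.
\psi(I) = k \, I^{a}
```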
{"title":"Psychophysics of user acceptance of social cyber-physical systems.","authors":"Maya Dimitrova, Neda Chehlarova, Anastas Madzharov, Aleksandar Krastev, Ivan Chavdarov","doi":"10.3389/frobt.2024.1414853","DOIUrl":"https://doi.org/10.3389/frobt.2024.1414853","url":null,"abstract":"<p><p>A mini-review of the literature, supporting the view on the psychophysical origins of some user acceptance effects of cyber-physical systems (CPSs), is presented and discussed in this paper. Psychophysics implies the existence of a lawful functional dependence between some aspect/dimension of the stimulation from the environment, entering the senses of the human, and the psychological effect that is being produced by this stimulation, as reflected in the subjective responses. Several psychophysical models are discussed in this mini-review, aiming to support the view that the observed effects of reactance to a robot or the uncanny valley phenomenon are essentially the same subjective effects of different intensity. Justification is provided that human responses to technologically and socially ambiguous stimuli obey some regularity, which can be considered a lawful dependence in a psychophysical sense. The main conclusion is based on the evidence that psychophysics can provide useful and helpful, as well as parsimonious, design recommendations for scenarios with CPSs for social applications.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1414853"},"PeriodicalIF":2.9,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11519208/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effective weed management is a significant challenge in agronomic crops, which necessitates innovative solutions to reduce negative environmental impacts and minimize crop damage. Traditional methods often rely on indiscriminate herbicide application, which lacks precision and sustainability. To address this critical need, this study demonstrated an AI-enabled robotic system, the Weeding robot, designed for targeted weed management. Palmer amaranth (Amaranthus palmeri S. Watson) was selected as it is the most troublesome weed in Nebraska. We developed the full stack (vision, hardware, software, robotic platform, and AI model) for precision spraying using YOLOv7, a state-of-the-art object detection deep learning technique. The Weeding robot achieved an average precision of 60.4% and recall of 62% in real-time weed identification and spot spraying with the developed gantry-based sprayer system. The Weeding robot successfully identified Palmer amaranth across diverse growth stages in controlled outdoor conditions. This study demonstrates the potential of AI-enabled robotic systems for targeted weed management, offering a more precise and sustainable alternative to traditional herbicide application methods.
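For readers unfamiliar with the reported metrics, the sketch below shows how precision and recall follow from detection counts and how a confident YOLO-style detection could gate a spot-spray nozzle. The counts, confidence threshold, and nozzle interface are hypothetical illustrations, not values or code from the study.

```python
# Illustrative sketch: how precision/recall are computed from detection counts, and how a
# YOLO-style detection could gate a spray nozzle. Counts, the confidence threshold, and the
# nozzle interface are hypothetical, not from the study.
def precision_recall(true_pos: int, false_pos: int, false_neg: int) -> tuple:
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

print(precision_recall(62, 41, 38))   # hypothetical counts giving roughly the reported ~0.60 / 0.62

def spray_decision(detections: list, conf_threshold: float = 0.5) -> list:
    """Keep only confident Palmer amaranth detections; each would trigger the nozzle above it."""
    return [d for d in detections if d["label"] == "palmer_amaranth" and d["conf"] >= conf_threshold]

for det in spray_decision([{"label": "palmer_amaranth", "conf": 0.83, "x": 0.42}]):
    print(f"open nozzle nearest to lateral position {det['x']:.2f}")  # placeholder for the sprayer command
```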
{"title":"Targeted weed management of Palmer amaranth using robotics and deep learning (YOLOv7).","authors":"Amlan Balabantaray, Shaswati Behera, CheeTown Liew, Nipuna Chamara, Mandeep Singh, Amit J Jhala, Santosh Pitla","doi":"10.3389/frobt.2024.1441371","DOIUrl":"10.3389/frobt.2024.1441371","url":null,"abstract":"<p><p>Effective weed management is a significant challenge in agronomic crops which necessitates innovative solutions to reduce negative environmental impacts and minimize crop damage. Traditional methods often rely on indiscriminate herbicide application, which lacks precision and sustainability. To address this critical need, this study demonstrated an AI-enabled robotic system, Weeding robot, designed for targeted weed management. Palmer amaranth (<i>Amaranthus palmeri S. Watson</i>) was selected as it is the most troublesome weed in Nebraska. We developed the full stack (vision, hardware, software, robotic platform, and AI model) for precision spraying using YOLOv7, a state-of-the-art object detection deep learning technique. The Weeding robot achieved an average of 60.4% precision and 62% recall in real-time weed identification and spot spraying with the developed gantry-based sprayer system. The Weeding robot successfully identified Palmer amaranth across diverse growth stages in controlled outdoor conditions. This study demonstrates the potential of AI-enabled robotic systems for targeted weed management, offering a more precise and sustainable alternative to traditional herbicide application methods.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1441371"},"PeriodicalIF":2.9,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11513266/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142524080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}