Some human-machine systems are designed so that machines (robots) gather and deliver data to remotely located operators (humans) through an interface to aid them in classification. The performance of a human as a (binary) classifier-in-the-loop is characterized by the probabilities of correctly classifying objects (or points of interest) as true targets or false targets. These two probabilities depend on the time spent collecting information at a point of interest (POI), known as the dwell time. The information gain associated with collecting information at a POI is therefore a function of the dwell time, discounted by the revisit time, i.e., the duration between consecutive visits to the same POI, to ensure that the vehicle covers all POIs in a timely manner. The objective of the routing problem for classification is to route the vehicles optimally (a discrete problem) and to determine the optimal dwell time at each POI (a continuous optimization problem) so as to maximize the total discounted information gain while visiting every POI at least once. Because this coupling of discrete and continuous decisions makes the problem hard to solve, we adopt a simplifying assumption that the information gain is discounted exponentially by the revisit time; this assumption decouples the routing problem from the problem of determining the optimal dwell time at each POI in the single-vehicle case. For the multi-vehicle problem, which involves task partitioning among vehicles in addition to routing and dwell-time computation, we provide a fast heuristic that obtains high-quality feasible solutions.
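As a toy illustration of what the exponential-discount assumption buys: for a single vehicle on a cyclic tour, every POI's revisit time equals the tour period (total travel plus total dwell), so the discount factor multiplies the entire sum and the dwell-time optimization separates from the routing choice. The saturating gain curve and the rate constants below are illustrative assumptions, not taken from the paper.

```python
import math

def info_gain(dwell, g_max=1.0, k=0.5):
    # Hypothetical saturating gain curve; the paper only requires the gain
    # to be an increasing function of dwell time.
    return g_max * (1.0 - math.exp(-k * dwell))

def discounted_tour_gain(travel_times, dwells, alpha=0.1):
    """Total exponentially discounted gain over one cyclic single-vehicle tour.

    On a cycle, the revisit time of every POI equals the tour period:
    total travel time plus total dwell time. The factor exp(-alpha * revisit)
    then multiplies the whole sum, which is the decoupling the abstract notes.
    """
    revisit = sum(travel_times) + sum(dwells)
    return sum(info_gain(d) * math.exp(-alpha * revisit) for d in dwells)
```

Because the discount factor is common to all POIs, lengthening the route shrinks every term at once, which lets the routing stage minimize tour length independently of the per-POI dwell choice.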
UAV Routing for Enhancing the Performance of a Classifier-in-the-loop
Deepak Prakash Kumar, Pranav Rajbhandari, Loy McGuire, Swaroop Darbha, Donald Sofge
Journal of Intelligent & Robotic Systems. Pub Date: 2024-09-14. DOI: 10.1007/s10846-024-02169-1
Visual Simultaneous Localization and Mapping (VSLAM) technology can provide reliable localization and mapping capabilities for critical tasks. Existing VSLAM systems can extract accurate feature points in static environments for matching and pose estimation, and then build an environmental map. In dynamic environments, however, the extracted feature points become inaccurate as objects move, which not only leads to tracking failure but also seriously degrades the accuracy of the environmental map. To alleviate these challenges, we propose a dynamic-target-aware optical flow tracking method based on YOLOv8. First, we use YOLOv8 to identify moving targets in the environment and propose a method to eliminate dynamic points within the dynamic contour region. Second, we use an optical-flow mask to identify dynamic feature points outside the detected object boxes. Third, we comprehensively eliminate the dynamic feature points. Finally, we combine the geometric and semantic information of static map points to construct a semantic map of the environment. Using ATE (Absolute Trajectory Error) and RPE (Relative Pose Error) as evaluation metrics, we compared the original method with ours on the TUM dataset. Our method improves accuracy significantly, by 96.92% on the walking_xyz sequence. The experimental results show that the proposed method can significantly improve the overall performance of VSLAM systems in highly dynamic environments.
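The two-stage rejection the abstract describes — drop points inside detected dynamic boxes, then drop points elsewhere whose optical flow is too large — can be sketched as a simple filter. The flow threshold and data layout here are assumptions for illustration; the paper's actual criteria operate on contour regions rather than plain rectangles.

```python
def in_box(pt, box):
    """Axis-aligned containment test; box is (x1, y1, x2, y2)."""
    x, y = pt
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def filter_dynamic_points(keypoints, flow, dyn_boxes, flow_thresh=2.0):
    """Keep only keypoints that are (a) outside every detected dynamic box
    and (b) whose optical-flow magnitude stays below a threshold.

    `flow` maps each keypoint to its (dx, dy) displacement between frames;
    the threshold value is an assumption, not taken from the paper.
    """
    static = []
    for pt in keypoints:
        if any(in_box(pt, b) for b in dyn_boxes):
            continue  # stage 1: inside a YOLO-detected dynamic region
        dx, dy = flow[pt]
        if (dx * dx + dy * dy) ** 0.5 >= flow_thresh:
            continue  # stage 2: moving point missed by the detector
        static.append(pt)
    return static
```

Only the surviving points would be handed to the matching and pose-estimation stages of the VSLAM pipeline.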
DFT-VSLAM: A Dynamic Optical Flow Tracking VSLAM Method
Dupeng Cai, Shijiang Li, Wenlu Qi, Kunkun Ding, Junlin Lu, Guangfeng Liu, Zhuhua Hu
Journal of Intelligent & Robotic Systems. Pub Date: 2024-09-14. DOI: 10.1007/s10846-024-02171-7
Pub Date: 2024-09-13. DOI: 10.1007/s10846-024-02160-w
Aleksandra Loskutova, Daniel Roozbahani, Marjan Alizadeh, Heikki Handroos
Robots are increasingly present in everyday life, replacing human involvement in various domains. In situations involving danger or life-threatening conditions, it is safer to deploy robots instead of humans, yet there are still numerous applications where human intervention remains indispensable. A robot-control strategy can be developed based on intelligent adaptive programmed algorithms or, more intuitively, by harnessing the physiological signals of the robot operator, such as body movements, brain EEG, and muscle EMG. This study focuses on creating a control platform for a 3-finger gripper utilizing electromyography (EMG) signals derived from the operator's forearm muscles. The developed platform consists of a Robotiq three-finger gripper, a Delsys Trigno wireless EMG system, and an NI CompactRIO data acquisition platform. The control process was developed in NI LabVIEW software, which extracts, processes, and analyzes the EMG signals and transforms them into control signals that operate the robotic gripper in real time. The system transmits commands to the robotic gripper once the EMG signals from the operator's forearm muscles surpass a user-defined threshold. To evaluate the system's performance, a comprehensive set of repeated tests was conducted on the forearm muscles of three different operators across four distinct case scenarios. Despite a structural design weakness of the gripper in performing pinching, the results demonstrated an impressive average success rate of 95% for tasks involving opening and closing the gripper to grasp. This success rate was consistent across scenarios that included alterations to the gripper's scissor configuration.
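The threshold-based activation the abstract describes reduces to a small pipeline: rectify and smooth the raw EMG into an amplitude envelope, then fire a gripper command once the envelope crosses the user-set threshold. Window length, threshold, and command names below are illustrative assumptions; the actual platform implements this in LabVIEW, not Python.

```python
def emg_envelope(samples, window=4):
    """Rectify and moving-average raw EMG samples into an amplitude envelope."""
    rect = [abs(s) for s in samples]
    out = []
    for i in range(len(rect)):
        win = rect[max(0, i - window + 1): i + 1]
        out.append(sum(win) / len(win))
    return out

def gripper_command(envelope_value, threshold):
    """Map the envelope to a command the way the abstract describes:
    activate the gripper only once the signal passes a user-set threshold."""
    return "close" if envelope_value > threshold else "open"
```

A hysteresis band around the threshold would be a natural refinement to avoid chattering near the boundary, though the abstract does not say whether the platform uses one.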
Design and Development of a Robust Control Platform for a 3-Finger Robotic Gripper Using EMG-Derived Hand Muscle Signals in NI LabVIEW. Journal of Intelligent & Robotic Systems.
Pub Date: 2024-09-07. DOI: 10.1007/s10846-024-02154-8
Junxiao Wang, Peng Wu, Xiaoming Zhang, Renjie Xu, Tao Wang
An object’s six-degree-of-freedom (6DoF) pose is important in various fields. Existing pose-estimation methods usually detect two-dimensional (2D)–three-dimensional (3D) feature point pairs and directly estimate the pose through Perspective-n-Point (PnP) algorithms. However, this approach ignores the spatial association between pixels, making high-precision results difficult to obtain. To apply deep-learning-based pose estimation in real-world scenarios, we aim to design a method that is robust in more complex scenes. We therefore introduce a method for 3D object pose estimation from color images based on farthest point sampling (FPS) and the object’s 3D bounding box. The method detects the 2D projections of 3D feature points with a convolutional neural network, matches them with the 3D model of the object, and then uses the PnP algorithm to recover the object pose from the feature point pairs. Owing to the global nature of the bounding box, the approach remains effective even in partially occluded or complex environments. In addition, we propose a heatmap suppression method based on weighted coordinates to further improve the prediction accuracy of the feature points, and thus the accuracy of the pose solved by the PnP algorithm. Compared with other algorithms, this method has higher accuracy and better robustness: it achieves an ADD(-S) score of 93.8% on the Linemod dataset and 47.7% on the Occlusion Linemod dataset. These results show that our method is more effective than existing methods for pose estimation of large objects.
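The 2D-3D correspondences that a PnP solver consumes are exactly the pinhole projections of known 3D model points, such as the eight bounding-box corners the method uses. A minimal sketch of that forward model, with illustrative camera intrinsics (not from the paper) and the object assumed to be expressed in the camera frame:

```python
def project(point3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates.
    The intrinsics (fx, fy, cx, cy) are illustrative, not from the paper."""
    X, Y, Z = point3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

def project_bbox_corners(center, half_extents):
    """2D projections of the eight 3D bounding-box corners -- the kind of
    2D-3D correspondence pairs a PnP solver inverts to recover the pose."""
    ox, oy, oz = center
    hx, hy, hz = half_extents
    corners = [(ox + sx * hx, oy + sy * hy, oz + sz * hz)
               for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    return [project(c) for c in corners]
```

In practice one would hand these pixel-point pairs, plus the camera matrix, to an off-the-shelf PnP routine (e.g., OpenCV's solver) rather than invert the projection by hand.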
Six-Degree-of-Freedom Pose Estimation Method for Multi-Source Feature Points Based on Fully Convolutional Neural Network. Journal of Intelligent & Robotic Systems.
Pub Date: 2024-09-07. DOI: 10.1007/s10846-024-02165-5
Hui Bi, Jian Zhang, Xiaowei Wang, Shuangyin Liu, Zhijia Zhao, Tao Zou
In this study, we propose an adaptive neural network (NN) control approach with finite-time prescribed performance for a 2-DOF helicopter system subject to input saturation. First, the NN is utilized to estimate the system’s uncertainty. A novel performance function with finite-time attributes is then formulated to ensure that the tracking error converges to a narrow margin within a predefined time span. Furthermore, adaptive parameters are integrated to address the system’s inherent input saturation. The boundedness of the closed-loop system is demonstrated through Lyapunov stability analysis. Finally, the effectiveness of the proposed control strategy is validated through simulations and experiments.
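For context, one common form of a finite-time prescribed performance function (a representative construction from the prescribed-performance literature; the paper's exact function may differ) keeps the tracking error $e(t)$ inside a shrinking funnel that reaches its terminal width at a user-chosen time $T$:

$$
\rho(t) =
\begin{cases}
(\rho_0 - \rho_\infty)\left(\dfrac{T - t}{T}\right)^{h} + \rho_\infty, & 0 \le t < T,\\
\rho_\infty, & t \ge T,
\end{cases}
\qquad -\rho(t) < e(t) < \rho(t),
$$

where $\rho_0 > \rho_\infty > 0$ are the initial and terminal error bounds and $h > 1$ makes $\rho$ continuously differentiable at $t = T$. Unlike the classical exponential envelope $\rho(t) = (\rho_0 - \rho_\infty)e^{-\lambda t} + \rho_\infty$, this funnel reaches its terminal width in finite time rather than asymptotically.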
Neural Network-based Adaptive Finite-time Control for 2-DOF Helicopter Systems with Prescribed Performance and Input Saturation. Journal of Intelligent & Robotic Systems.
Pub Date: 2024-09-05. DOI: 10.1007/s10846-024-02166-4
Leihui Li, Xingyu Yang, Riwei Wang, Xuping Zhang
Hand-eye calibration, a fundamental task in vision-based robotic systems, is particularly important for collaborative robots, especially in robotic applications at small and medium-sized enterprises (SMEs). Most approaches to hand-eye calibration rely on external markers or human assistance. We propose a novel methodology that addresses the hand-eye calibration problem using the robot base as the reference, eliminating the need for external calibration objects or human intervention. Using point clouds of the robot base, a transformation matrix from the camera coordinate frame to the robot base is established as “I=AXB.” To this end, we exploit learning-based 3D detection and registration algorithms to estimate the location and orientation of the robot base. The robustness and accuracy of the method are quantified by ground-truth-based evaluation, and the accuracy is compared with other 3D-vision-based calibration methods. To assess the feasibility of our methodology, we carried out experiments with a low-cost structured-light scanner across varying joint configurations and experiment groups. The proposed hand-eye calibration method achieved a translation deviation of 0.930 mm and a rotation deviation of 0.265 degrees. Additionally, 3D reconstruction experiments showed a rotation error of 0.994 degrees and a position error of 1.697 mm. Moreover, our method can complete in about 1 second, the fastest among the compared 3D hand-eye calibration methods. We also conducted indoor 3D reconstruction and robotic grasping experiments based on our hand-eye calibration method. Related code is released at https://github.com/leihui6/LRBO.
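Read literally, the relation I = AXB pins down the unknown as X = A⁻¹B⁻¹, a closed-form composition of rigid transforms. A minimal planar (3×3 homogeneous) stand-in for the paper's 4×4 spatial case, using only the rigid-transform inverse [R t]⁻¹ = [Rᵀ −Rᵀt]; the `se2` parameterization is an illustrative simplification, not the paper's formulation:

```python
import math

def se2(theta, x, y):
    """Planar rigid transform as a 3x3 homogeneous matrix (nested lists)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv_rigid(T):
    # Inverse of a planar rigid transform: [R t]^-1 = [R^T, -R^T t].
    (r00, r01, tx), (r10, r11, ty), _ = T
    return [[r00, r10, -(r00 * tx + r10 * ty)],
            [r01, r11, -(r01 * tx + r11 * ty)],
            [0.0, 0.0, 1.0]]

def solve_X(A, B):
    """If I = A X B, then X = A^-1 B^-1."""
    return matmul(inv_rigid(A), inv_rigid(B))
```

With noisy point-cloud estimates of the robot base, A and B would come from registration rather than be exact, and X would be refined over multiple joint configurations, which is what the paper's ground-truth-based evaluation measures.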
Automatic Robot Hand-Eye Calibration Enabled by Learning-Based 3D Vision. Journal of Intelligent & Robotic Systems.
Pub Date: 2024-09-05. DOI: 10.1007/s10846-024-02161-9
Magdalena Kossek, Margareta Stefanovic
Privacy-preserving communication in cooperative control is essential for the effective operation of systems in which sensitive information must be protected, including smart grids, traffic management systems, autonomous vehicle networks, healthcare systems, financial networks, and social networks. This paper categorizes and discusses the recent privacy-preserving cooperative control literature and describes the advantages and disadvantages of differential-privacy-based and encryption-based protocols. The objective of this work is to examine existing research on preserving privacy in cooperative control: to identify the approaches, techniques, and methodologies proposed for addressing privacy concerns in multi-agent systems; to explore the current challenges, limitations, and gaps in the literature; and to consolidate findings across studies into an overview of privacy-preserving cooperative control in multi-agent systems. The goal is to assist the development of novel privacy-preserving mechanisms for cooperative control.
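Of the two protocol families the survey contrasts, the differential-privacy approach has a particularly compact core idea: each agent perturbs its state with Laplace noise before sharing it with neighbors, trading some consensus accuracy for a quantifiable privacy guarantee. The sketch below is a generic illustration under assumed parameter names (`epsilon`, `sensitivity`); the surveyed protocols differ substantially in how noise is injected and attenuated over time.

```python
import random

def laplace_noise(scale, rng=random):
    # A Laplace(0, scale) sample as the difference of two i.i.d.
    # exponential samples with rate 1/scale.
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def private_state(true_state, epsilon, sensitivity=1.0, rng=random):
    """Differentially private value an agent broadcasts to its neighbors:
    the true state plus Laplace noise of scale sensitivity/epsilon.
    Smaller epsilon means stronger privacy and noisier shared values."""
    return true_state + laplace_noise(sensitivity / epsilon, rng)
```

The recurring trade-off the survey highlights is visible here: encryption-based protocols avoid this accuracy loss entirely but pay in computation and communication overhead.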
Survey of Recent Results in Privacy-Preserving Mechanisms for Multi-Agent Systems. Journal of Intelligent & Robotic Systems.
Pub Date: 2024-09-03. DOI: 10.1007/s10846-024-02142-y
Jayden Dongwoo Lee, Sukjae Im, Lamsu Kim, Hyungjoo Ahn, Hyochoong Bang
This paper presents a data-driven fault detection and isolation (FDI) method for a multirotor system using the Koopman operator and a Luenberger observer. The Koopman operator is an infinite-dimensional linear operator that can transform a nonlinear dynamical system into a linear one; this transformation allows linear fault-detection methods to be applied to the nonlinear system. First, a Koopman-operator-based linear model of the multirotor is derived, accounting for a non-diagonal inertia tensor, center-of-gravity variations, aerodynamic effects, and actuator dynamics. Various candidate lifting functions are evaluated for prediction performance and compared by root mean square error to identify the most suitable one. Subsequently, a Koopman-operator-based Luenberger observer built on the lifted linear model generates residuals for identifying faulty actuators. Simulation and experimental results demonstrate the effectiveness of the proposed observer in detecting actuator faults such as bias and loss of effectiveness, without requiring an explicitly defined fault dataset.
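The lifting-plus-residual idea can be shown on the textbook scalar system x_{k+1} = μx_k, where the observables [x, x²] evolve exactly linearly under diag(μ, μ²) — a rare finite-dimensional Koopman representation. This toy system, the lifting choice, and the residual threshold are all illustrative assumptions, far simpler than the paper's multirotor model and observer.

```python
def lift(x):
    # Hypothetical lifting (observable) functions; the paper evaluates
    # several candidates and picks the best by RMSE.
    return [x, x * x]

def koopman_step(z, mu=0.9):
    # For x_{k+1} = mu * x_k, the lifted state [x, x^2] evolves linearly:
    # z_{k+1} = diag(mu, mu^2) z_k  -- an exact finite Koopman representation.
    return [mu * z[0], mu * mu * z[1]]

def residual(x_prev, x_meas, threshold=1e-3):
    """Residual generation in the spirit of the paper's observer: compare
    the measured lifted state with the Koopman model's one-step prediction;
    a large mismatch flags a fault (e.g., actuator bias)."""
    pred = koopman_step(lift(x_prev))
    meas = lift(x_meas)
    r = max(abs(p - m) for p, m in zip(pred, meas))
    return r, r > threshold
```

The paper's Luenberger observer additionally feeds the residual back to correct the lifted-state estimate, which this open-loop one-step comparison omits for brevity.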
Data-Driven Fault Detection and Isolation for Multirotor System Using Koopman Operator. Journal of Intelligent & Robotic Systems.
Pub Date: 2024-08-30. DOI: 10.1007/s10846-024-02158-4
Vinícius Vigolo, Lucas A. O. Rodrigues, Antonio Carlos Valdiero, Daniel A. L. da Cruz, Rogerio S. Gonçalves
The use of robotic devices for gait neurological rehabilitation is growing, however, the available options are scarce, expensive, and with high complexity of construction and control. In this way, this paper presents the HOPE-G, a novel gait rehabilitation robot consisting of an active bodyweight support system and a dual belt treadmill servo-pneumatic module. This paper focuses on the development of the dual belt treadmill servo-pneumatic module, which has tipper movement to remove the physical barrier of the patient during the swing phase of the human gait rehabilitation. The mathematical models of the servo-pneumatic system and the treadmill module are provided. An impedance controller was designed to provide a compliant walking surface for the patient. Simulation and test rig results demonstrate the servo-pneumatic system’s capability to meet the application requirements and effectively control the surface stiffness. Therefore, it is evidenced that pneumatic systems have shock absorption capabilities, making them a cost-effective solution for application in human rehabilitation tasks.
{"title":"HOPE-G: A Dual Belt Treadmill Servo-Pneumatic System for Gait Rehabilitation","authors":"Vinícius Vigolo, Lucas A. O. Rodrigues, Antonio Carlos Valdiero, Daniel A. L. da Cruz, Rogerio S. Gonçalves","doi":"10.1007/s10846-024-02158-4","DOIUrl":"https://doi.org/10.1007/s10846-024-02158-4","url":null,"abstract":"<p>The use of robotic devices for gait neurological rehabilitation is growing, however, the available options are scarce, expensive, and with high complexity of construction and control. In this way, this paper presents the HOPE-G, a novel gait rehabilitation robot consisting of an active bodyweight support system and a dual belt treadmill servo-pneumatic module. This paper focuses on the development of the dual belt treadmill servo-pneumatic module, which has tipper movement to remove the physical barrier of the patient during the swing phase of the human gait rehabilitation. The mathematical models of the servo-pneumatic system and the treadmill module are provided. An impedance controller was designed to provide a compliant walking surface for the patient. Simulation and test rig results demonstrate the servo-pneumatic system’s capability to meet the application requirements and effectively control the surface stiffness. 
Therefore, it is evidenced that pneumatic systems have shock absorption capabilities, making them a cost-effective solution for application in human rehabilitation tasks.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"11 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142189651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
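The compliant-surface idea can be illustrated with a minimal impedance control law: the commanded actuator force emulates a spring-damper between the surface and its reference position, so foot impacts are absorbed with a tunable stiffness. The gains, load, and simulation below are illustrative assumptions, not the HOPE-G parameters.

```python
def impedance_force(x, v, x_ref=0.0, v_ref=0.0, k=2000.0, b=150.0):
    """Commanded force [N] for surface position x [m] and velocity v [m/s]."""
    return k * (x_ref - x) + b * (v_ref - v)

# Simulate a 10 kg load landing on the surface (explicit Euler integration)
m, g, dt = 10.0, 9.81, 1e-3
x, v = 0.0, -0.5          # downward velocity at the moment of contact
peak_depth = 0.0
for _ in range(2000):     # 2 s of simulated time
    a = (impedance_force(x, v) - m * g) / m
    v += a * dt
    x += v * dt
    peak_depth = min(peak_depth, x)
# The surface deflects on impact, then settles near the static
# equilibrium x = -m*g/k set by the chosen virtual stiffness.
```

Lowering `k` makes the surface softer (deeper deflection), which is the stiffness-control behavior the test-rig results evaluate.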
Real-Time Monitoring of Human and Process Performance Parameters in Collaborative Assembly Systems using Multivariate Control Charts
Elisa Verna, Stefano Puttero, Gianfranco Genta, Maurizio Galetto
Pub Date: 2024-08-28. DOI: 10.1007/s10846-024-02162-8
With the rise in demand for customized products, small-batch production with a wide variety of products is becoming more common. Operators must be highly flexible to manage changes in volumes and products, which has led to the use of Human-Robot Collaboration (HRC) systems for custom manufacturing. This variety, however, introduces complexity that affects production time, cost, and quality. To address this issue, multivariate control charts are used as diagnostic tools to evaluate the stability of several parameters related both to the product/process and to human well-being in HRC systems. The key monitored parameters include assembly time, quality-control time, total defects, and operator stress, providing a more holistic view of system performance. Real-time monitoring of process performance together with human-related factors, which is rarely considered in statistical process control, provides comprehensive stability control over all customized product variants produced in the HRC system. The proposed approach consists of defining the parameters to be monitored, constructing the control charts, collecting data after each product-variant assembly, and verifying via the charts that the parameter set is under control. This increases the system's responsiveness to both process inefficiencies and declines in human well-being. The procedure can be automated by embedding control-chart routines in the software of the HRC system or its digital twin, without adding to the operator's workload. Its practicality and effectiveness are demonstrated in custom electronic-board assembly, highlighting its role in optimizing HRC system performance.
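A standard way to chart several correlated parameters jointly is Hotelling's T² statistic, which is a common choice for this kind of multivariate monitoring (the paper's exact chart design is not specified here). The sketch below uses the four monitored parameters named above; the Phase-I data, units, and control limit are illustrative assumptions.

```python
import numpy as np

# Phase I: in-control reference data for (assembly time [s], QC time [s],
# total defects, operator stress index) -- hypothetical values
rng = np.random.default_rng(1)
phase1 = rng.normal([120.0, 30.0, 1.0, 40.0],
                    [10.0, 4.0, 0.5, 5.0], (50, 4))

mu = phase1.mean(axis=0)
S_inv = np.linalg.inv(np.cov(phase1, rowvar=False))

def t2(x):
    # Hotelling T^2: squared Mahalanobis distance from the in-control mean
    d = x - mu
    return float(d @ S_inv @ d)

UCL = 14.86  # chi-square(4 dof) 0.995 quantile, a common large-sample limit

in_control = t2(np.array([122.0, 31.0, 1.2, 41.0]))      # typical cycle
out_of_control = t2(np.array([160.0, 45.0, 4.0, 60.0]))  # shifted cycle
```

A T² value above the UCL after a product-variant assembly would trigger the diagnostic step, checking which of the process or human-related parameters drifted.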