Ensuring secure communication and seamless accessibility remains a primary challenge in controlling robots remotely. The authors propose the Robot Control System using Instant Communication (ROSIC), a novel approach that leverages open-source instant messaging platforms to overcome the complexities and reduce the costs associated with implementing a secure and user-centred communication system for remote robot control. By leveraging features inherent in most instant messaging platforms, such as real-time messaging, group chats, end-to-end encryption and cross-platform support, the authors have developed middleware that establishes a secure and efficient communication system over the Internet. By using instant messaging as the communication interface between users and robots, ROSIC caters to non-technical users, making it easier for them to control robots. The architecture of ROSIC enables various robot control scenarios: one user controlling one robot, one user controlling multiple robots, multiple users controlling one robot, and multiple users controlling multiple robots. Furthermore, ROSIC facilitates interaction among multiple robots, enabling them to interoperate and function collaboratively as a swarm by providing a unified communication platform for the seamless exchange of data and commands. Telegram was chosen as the instant messaging platform due to its open-source nature, robust encryption, compatibility across multiple platforms and interactive communication capabilities through channels and groups. Notably, ROSIC is designed to communicate effectively with Robot Operating System (ROS)-based robots, enhancing the ability to control them remotely.
{"title":"ROSIC: Enhancing secure and accessible robot control through open-source instant messaging platforms","authors":"Rasoul Sadeghian, Shahrooz Shahin, Sina Sareh","doi":"10.1049/csy2.12112","DOIUrl":"10.1049/csy2.12112","url":null,"abstract":"<p>Ensuring secure communication and seamless accessibility remains a primary challenge in controlling robots remotely. The authors propose a novel approach that leverages open-source instant messaging platforms to overcome the complexities and reduce costs associated with implementing a secure and user-centred communication system for remote robot control named Robot Control System using Instant Communication (ROSIC). By leveraging features, such as real-time messaging, group chats, end-to-end encryption and cross-platform support inherent in the majority of instant messenger platforms, we have developed middleware that establishes a secure and efficient communication system over the Internet. By using instant messaging as the communication interface between users and robots, ROSIC caters to non-technical users, making it easier for them to control robots. The architecture of ROSIC enables various scenarios for robot control, including one user controlling multiple robots, multiple users controlling one robot, multiple robots controlled by multiple users, and one user controlling one robot. Furthermore, ROSIC facilitates the interaction of multiple robots, enabling them to interoperate and function collaboratively as a swarm system by providing a unified communication platform that allows for seamless exchange of data and commands. Telegram was specifically chosen as the instant messaging platform by the authors due to its open-source nature, robust encryption, compatibility across multiple platforms and interactive communication capabilities through channels and groups. Notably, the ROSIC is designed to communicate effectively with robot operating system (ROS)-based robots to enhance our ability to control them remotely.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 2","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12112","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140329018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tingjun Lei, Timothy Sellers, Chaomin Luo, Lei Cao, Zhuming Bi
The widespread adoption of autonomous vehicles has generated considerable interest in their autonomous operation, with path planning emerging as a critical aspect. However, existing road infrastructure faces challenges due to prolonged use and insufficient maintenance. Previous research on autonomous vehicle navigation has focused on determining the trajectory with the shortest distance while neglecting road construction information, leading to potential time and energy inefficiencies in real-world scenarios involving infrastructure development. To address this issue, a digital twin-embedded multi-objective autonomous vehicle navigation approach is proposed for conditions of infrastructure construction. The authors propose an image processing algorithm that leverages captured images of the road construction environment to enable road extraction and modelling of the autonomous vehicle workspace. Additionally, a wavelet neural network is developed to predict real-time traffic flow, accounting for its inherent characteristics. Moreover, a multi-objective brainstorm optimisation (BSO)-based path planning method is introduced, which optimises total time-cost and energy consumption objective functions. To ensure optimal trajectory planning during infrastructure construction, the algorithm incorporates a digital twin that is updated in real time throughout autonomous vehicle operations. The effectiveness and robustness of the proposed model are validated through simulation and comparative studies conducted in diverse road construction scenarios. The results highlight the improved performance and reliability of the autonomous vehicle system when equipped with the authors' approach, demonstrating its potential for enhancing efficiency and minimising disruptions caused by road infrastructure development.
{"title":"Digital twin-based multi-objective autonomous vehicle navigation approach as applied in infrastructure construction","authors":"Tingjun Lei, Timothy Sellers, Chaomin Luo, Lei Cao, Zhuming Bi","doi":"10.1049/csy2.12110","DOIUrl":"10.1049/csy2.12110","url":null,"abstract":"<p>The widespread adoption of autonomous vehicles has generated considerable interest in their autonomous operation, with path planning emerging as a critical aspect. However, existing road infrastructure confronts challenges due to prolonged use and insufficient maintenance. Previous research on autonomous vehicle navigation has focused on determining the trajectory with the shortest distance, while neglecting road construction information, leading to potential time and energy inefficiencies in real-world scenarios involving infrastructure development. To address this issue, a digital twin-embedded multi-objective autonomous vehicle navigation is proposed under the condition of infrastructure construction. The authors propose an image processing algorithm that leverages captured images of the road construction environment to enable road extraction and modelling of the autonomous vehicle workspace. Additionally, a wavelet neural network is developed to predict real-time traffic flow, considering its inherent characteristics. Moreover, a multi-objective brainstorm optimisation (BSO)-based method for path planning is introduced, which optimises total time-cost and energy consumption objective functions. To ensure optimal trajectory planning during infrastructure construction, the algorithm incorporates a real-time updated digital twin throughout autonomous vehicle operations. The effectiveness and robustness of the proposed model are validated through simulation and comparative studies conducted in diverse scenarios involving road construction. The results highlight the improved performance and reliability of the autonomous vehicle system when equipped with the authors’ approach, demonstrating its potential for enhancing efficiency and minimising disruptions caused by road infrastructure development.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 2","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12110","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140181594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel system for human following using a differential robot is proposed, including an accurate 3-D human position tracking module and a novel planning strategy that ensures safety and dynamic feasibility. The authors utilise a combination of a gimbal camera and LiDAR for long-term, accurate human detection. The planning module then takes the target's future trajectory as a reference to generate a coarse path that maintains visibility of the target during following. The trajectory is subsequently optimised subject to additional constraints and the desired following distance. Experiments demonstrate the robustness and efficiency of the system in complex environments and indicate its potential in various applications.
{"title":"An efficient and robust system for human following scenario using differential robot","authors":"Jiangchao Zhu, Changjia Ma, Chao Xu, Fei Gao","doi":"10.1049/csy2.12108","DOIUrl":"10.1049/csy2.12108","url":null,"abstract":"<p>A novel system for human following using a differential robot, including an accurate 3-D human position tracking module and a novel planning strategy that ensures safety and dynamic feasibility, is proposed. The authors utilise a combination of gimbal camera and LiDAR for long-term accurate human detection. Then the planning module takes the target's future trajectory as a reference to generate a coarse path to ensure the following visibility. After that, the trajectory is optimised considering other constraints and following distance. Experiments demonstrate the robustness and efficiency of our system in complex environments, demonstrating its potential in various applications.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12108","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139596543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Catastrophic natural disasters such as earthquakes can cause infrastructure damage. Emergency response agencies need to assess this damage precisely while repeating the process for infrastructures of different shapes and types. The authors target an autonomous Unmanned Aerial Vehicle (UAV) platform equipped with a 3D LiDAR sensor that comprehensively and accurately scans the infrastructure and maps it at a predefined resolution r. During the inspection, the UAV needs to decide on the Next Best View (NBV) position to maximize the gathered information while avoiding collisions at high speed. The authors propose solving this problem with a hierarchical closed-loop control system consisting of a global planner and a local planner. The global NBV planner decides the general UAV direction based on a history of measurements from the LiDAR sensor, and the local planner considers the UAV dynamics and enables the UAV to fly at high speed using the latest LiDAR measurements. The proposed system is validated through the Regional Scale Autonomous Swarm Damage Assessment simulator, built by the authors. Through extensive testing in three unique and highly constrained infrastructure environments, the autonomous UAV inspection system successfully explored and mapped the infrastructures, demonstrating its versatility and applicability across various infrastructure shapes.
{"title":"An autonomous Unmanned Aerial Vehicle exploration platform with a hierarchical control method for post-disaster infrastructures","authors":"Xin Peng, Gaofeng Su, Raja Sengupta","doi":"10.1049/csy2.12107","DOIUrl":"10.1049/csy2.12107","url":null,"abstract":"<p>Catastrophic natural disasters like earthquakes can cause infrastructure damage. Emergency response agencies need to assess damage precisely while repeating this process for infrastructures with different shapes and types. The authors aim for an autonomous Unmanned Aerial Vehicle (UAV) platform equipped with a 3D LiDAR sensor to comprehensively and accurately scan the infrastructure and map it with a predefined resolution <i>r</i>. During the inspection, the UAV needs to decide on the Next Best View (NBV) position to maximize the gathered information while avoiding collision at high speed. The authors propose solving this problem by implementing a hierarchical closed-loop control system consisting of a global planner and a local planner. The global NBV planner decides the general UAV direction based on a history of measurements from the LiDAR sensor, and the local planner considers the UAV dynamics and enables the UAV to fly at high speed with the latest LiDAR measurements. The proposed system is validated through the Regional Scale Autonomous Swarm Damage Assessment simulator, which is built by the authors. Through extensive testing in three unique and highly constrained infrastructure environments, the autonomous UAV inspection system successfully explored and mapped the infrastructures, demonstrating its versatility and applicability across various shapes of infrastructure.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12107","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139601248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pang, B., et al.: Chinese personalised text-to-speech synthesis for robot human-machine interaction. IET Cyber-Syst. Robot. e12098 (2023). https://doi.org/10.1049/csy2.12098
An incorrect grant number was used for the funder "National Key Research and Development Plan of China" in the funding and acknowledgement sections. The correct grant number is 2020AAA0108900.
We apologize for this error.
{"title":"Correction to Chinese personalised text-to-speech synthesis for robot human–machine interaction","authors":"","doi":"10.1049/csy2.12109","DOIUrl":"10.1049/csy2.12109","url":null,"abstract":"<p>Pang, B., et al.: Chinese personalised text-to-speech synthesis for robot human-machine interaction. IET Cyber-Syst. Robot. e12098 (2023). https://doi.org/10.1049/csy2.12098</p><p>Incorrect grant number was used for the funder name “National Key Research and Development Plan of China” in the funding and acknowledgement sections. The correct grant number is 2020AAA0108900.</p><p>We apologize for this error.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12109","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139419779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Drones increasingly collaborate with human workers in workspaces such as warehouses. The failure of a drone flight during aerial tasks may pose risks to the safety of nearby people. One of the most common causes of flight failure is damaged propellers. To quickly detect physical damage to propellers, recognise risky flights, and provide early warnings to surrounding human workers, a new and comprehensive fault diagnosis framework is presented that uses only the audio produced by propeller rotation, without accessing any flight data. The framework leverages three components: convolutional neural networks, transfer learning, and Bayesian optimisation. Specifically, the audio signal from an actual flight is collected and transformed into time–frequency spectrograms. First, a convolutional neural network-based diagnosis model that utilises these spectrograms is developed to identify whether a broken propeller is involved in a specific drone flight. Additionally, the authors employ Monte Carlo dropout sampling to obtain the inconsistency of diagnostic results and compute the entropy (uncertainty) of the mean probability score vector as another factor in diagnosing the flight. Second, to reduce data dependence on particular drone types, the convolutional neural network-based diagnosis model is further augmented by transfer learning: the knowledge of a well-trained diagnosis model is refined using a small set of data from a different drone, enabling the modified model to detect broken propellers on the second drone. Third, to reduce hyperparameter tuning effort and reinforce the robustness of the network, Bayesian optimisation uses the observed diagnosis model performances to construct a Gaussian process model that allows the acquisition function to choose the optimal network hyperparameters. The proposed diagnosis framework is validated via real experimental flight tests and achieves reasonably high diagnostic accuracy.
{"title":"An audio-based risky flight detection framework for quadrotors","authors":"Wansong Liu, Chang Liu, Seyedomid Sajedi, Hao Su, Xiao Liang, Minghui Zheng","doi":"10.1049/csy2.12105","DOIUrl":"10.1049/csy2.12105","url":null,"abstract":"<p>Drones have increasingly collaborated with human workers in some workspaces, such as warehouses. The failure of a drone flight may bring potential risks to human beings' life safety during some aerial tasks. One of the most common flight failures is triggered by damaged propellers. To quickly detect physical damage to propellers, recognise risky flights, and provide early warnings to surrounding human workers, a new and comprehensive fault diagnosis framework is presented that uses only the audio caused by propeller rotation without accessing any flight data. The diagnosis framework includes three components: leverage convolutional neural networks, transfer learning, and Bayesian optimisation. Particularly, the audio signal from an actual flight is collected and transferred into time–frequency spectrograms. First, a convolutional neural network-based diagnosis model that utilises these spectrograms is developed to identify whether there is any broken propeller involved in a specific drone flight. Additionally, the authors employ Monte Carlo dropout sampling to obtain the inconsistency of diagnostic results and compute the mean probability score vector's entropy (uncertainty) as another factor to diagnose the drone flight. Next, to reduce data dependence on different drone types, the convolutional neural network-based diagnosis model is further augmented by transfer learning. That is, the knowledge of a well-trained diagnosis model is refined by using a small set of data from a different drone. The modified diagnosis model has the ability to detect the broken propeller of the second drone. Thirdly, to reduce the hyperparameters' tuning efforts and reinforce the robustness of the network, Bayesian optimisation takes advantage of the observed diagnosis model performances to construct a Gaussian process model that allows the acquisition function to choose the optimal network hyperparameters. The proposed diagnosis framework is validated via real experimental flight tests and has a reasonably high diagnosis accuracy.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"6 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12105","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139435296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The authors investigate the trajectory tracking control problem of an upper limb rehabilitation robot system with unknown dynamics. To address the system's uncertainties and improve the tracking accuracy of the rehabilitation robot, an adaptive neural full-state feedback control is proposed. The neural network is utilised to approximate the dynamics that are not fully modelled and to adapt to the interaction between the upper limb rehabilitation robot and the patient. By incorporating a high-gain observer, unmeasurable state information is integrated into the output feedback control. To account for joint position constraints during actual rehabilitation training, an adaptive neural full-state and output feedback control scheme with an output constraint is further designed. From the perspective of safety in human–robot interaction during rehabilitation training, a log-type barrier Lyapunov function is introduced in the output-constrained controller to ensure that the output remains within the predefined constraint region. The stability of the closed-loop system is proved using Lyapunov stability theory. The effectiveness of the proposed control scheme is validated by applying it to an upper limb rehabilitation robot in simulations.
{"title":"Adaptive neural tracking control for upper limb rehabilitation robot with output constraints","authors":"Zibin Zhang, Pengbo Cui, Aimin An","doi":"10.1049/csy2.12104","DOIUrl":"https://doi.org/10.1049/csy2.12104","url":null,"abstract":"<p>The authors investigate the trajectory tracking control problem of an upper limb rehabilitation robot system with unknown dynamics. To address the system's uncertainties and improve the tracking accuracy of the rehabilitation robot, an adaptive neural full-state feedback control is proposed. The neural network is utilised to approximate the dynamics that are not fully modelled and adapt to the interaction between the upper limb rehabilitation robot and the patient. By incorporating a high-gain observer, unmeasurable state information is integrated into the output feedback control. Taking into consideration the issue of joint position constraints during the actual rehabilitation training process, an adaptive neural full-state and output feedback control scheme with output constraint is further designed. From the perspective of safety in human–robot interaction during rehabilitation training, log-type barrier Lyapunov function is introduced in the output constraint controller to ensure that the output remains within the predefined constraint region. The stability of the closed-loop system is proved by Lyapunov stability theory. The effectiveness of the proposed control scheme is validated by applying it to an upper limb rehabilitation robot through simulations.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"5 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12104","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139047594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Daniel Mitchell, Paul Dominick Emor Baniqued, Abdul Zahid, Andrew West, Bahman Nouri Rahmat Abadi, Barry Lennox, Bin Liu, Burak Kizilkaya, David Flynn, David John Francis, Erwin Jose Lopez Pulgarin, Guodong Zhao, Hasan Kivrak, Jamie Rowland Douglas Blanche, Jennifer David, Jingyan Wang, Joseph Bolarinwa, Kanzhong Yao, Keir Groves, Liyuan Qi, Mahmoud A. Shawky, Manuel Giuliani, Melissa Sandison, Olaoluwa Popoola, Ognjen Marjanovic, Paul Bremner, Samuel Thomas Harper, Shivoh Nandakumar, Simon Watson, Subham Agrawal, Theodore Lim, Thomas Johnson, Wasim Ahmad, Xiangmin Xu, Zhen Meng, Zhengyi Jiang
Nuclear facilities have a regulatory requirement to measure radiation levels during Post Operational Clean Out (POCO) each year, resulting in a trend towards robotic deployments to gain an improved understanding during nuclear decommissioning phases. The UK Nuclear Decommissioning Authority supports the view that human-in-the-loop (HITL) robotic deployments are a solution to improve procedures and reduce risks in the radiation characterisation of nuclear sites. The authors present a novel implementation of a Cyber-Physical System (CPS) deployed in an analogue nuclear environment, comprising a multi-robot (MR) team coordinated by a HITL operator through a digital twin interface. The development of the CPS created efficient partnerships across systems, including robots, digital systems and humans. This was presented as a multi-staged mission within an inspection scenario for the heterogeneous Symbiotic Multi-Robot Fleet (SMuRF). Symbiotic interactions were achieved across the SMuRF, where robots utilised automated collaborative governance to work together in situations in which a single robot would struggle to fully characterise the radiation. Key contributions include the demonstration of symbiotic autonomy and query-based learning of an autonomous mission, supporting scalable autonomy and autonomy as a service. The coordination of the CPS was successful and highlighted further challenges and improvements relevant to future MR fleets.
{"title":"Lessons learned: Symbiotic autonomous robot ecosystem for nuclear environments","authors":"Daniel Mitchell, Paul Dominick Emor Baniqued, Abdul Zahid, Andrew West, Bahman Nouri Rahmat Abadi, Barry Lennox, Bin Liu, Burak Kizilkaya, David Flynn, David John Francis, Erwin Jose Lopez Pulgarin, Guodong Zhao, Hasan Kivrak, Jamie Rowland Douglas Blanche, Jennifer David, Jingyan Wang, Joseph Bolarinwa, Kanzhong Yao, Keir Groves, Liyuan Qi, Mahmoud A. Shawky, Manuel Giuliani, Melissa Sandison, Olaoluwa Popoola, Ognjen Marjanovic, Paul Bremner, Samuel Thomas Harper, Shivoh Nandakumar, Simon Watson, Subham Agrawal, Theodore Lim, Thomas Johnson, Wasim Ahmad, Xiangmin Xu, Zhen Meng, Zhengyi Jiang","doi":"10.1049/csy2.12103","DOIUrl":"https://doi.org/10.1049/csy2.12103","url":null,"abstract":"<p>Nuclear facilities have a regulatory requirement to measure radiation levels within Post Operational Clean Out (POCO) around nuclear facilities each year, resulting in a trend towards robotic deployments to gain an improved understanding during nuclear decommissioning phases. The UK Nuclear Decommissioning Authority supports the view that human-in-the-loop (HITL) robotic deployments are a solution to improve procedures and reduce risks within radiation characterisation of nuclear sites. The authors present a novel implementation of a Cyber-Physical System (CPS) deployed in an analogue nuclear environment, comprised of a multi-robot (MR) team coordinated by a HITL operator through a digital twin interface. The development of the CPS created efficient partnerships across systems including robots, digital systems and human. This was presented as a multi-staged mission within an inspection scenario for the heterogeneous Symbiotic Multi-Robot Fleet (SMuRF). Symbiotic interactions were achieved across the SMuRF where robots utilised automated collaborative governance to work together, where a single robot would face challenges in full characterisation of radiation. Key contributions include the demonstration of symbiotic autonomy and query-based learning of an autonomous mission supporting scalable autonomy and autonomy as a service. The coordination of the CPS was a success and displayed further challenges and improvements related to future MR fleets.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"5 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12103","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139047599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deep reinforcement learning (DRL) method based on the deep deterministic policy gradient (DDPG) algorithm is proposed to address the mismatch between the training samples the agent needs and those it actually collects during training, the overestimation and underestimation of Q-values, and insufficient exploration in the agent's policy. The method introduces the Actor-Critic Off-Policy Correction (AC-Off-POC) reinforcement learning framework and an improved double Q-value learning method, which enable the value function network in the target task to evaluate the policy network more accurately and to converge to the optimal policy more quickly and stably, yielding higher returns. The method is applied to multiple MuJoCo tasks on the OpenAI Gym simulation platform. The experimental results show that it outperforms the DDPG algorithm based solely on the off-policy correction framework (AC-Off-POC) and conventional DRL algorithms. The returns and stability of the authors' double Q-network off-policy correction algorithm for the deep deterministic policy gradient (DCAOP-DDPG) are significantly higher than those of the other DRL algorithms.
{"title":"Off-policy correction algorithm for double Q network based on deep reinforcement learning","authors":"Qingbo Zhang, Manlu Liu, Heng Wang, Weimin Qian, Xinglang Zhang","doi":"10.1049/csy2.12102","DOIUrl":"https://doi.org/10.1049/csy2.12102","url":null,"abstract":"<p>A deep reinforcement learning (DRL) method based on the deep deterministic policy gradient (DDPG) algorithm is proposed to address the problems of a mismatch between the needed training samples and the actual training samples during the training of intelligence, the overestimation and underestimation of the existence of Q-values, and the insufficient dynamism of the intelligence policy exploration. This method introduces the Actor-Critic Off-Policy Correction (AC-Off-POC) reinforcement learning framework and an improved double Q-value learning method, which enables the value function network in the target task to provide a more accurate evaluation of the policy network and converge to the optimal policy more quickly and stably to obtain higher value returns. The method is applied to multiple MuJoCo tasks on the Open AI Gym simulation platform. The experimental results show that it is better than the DDPG algorithm based solely on the different policy correction framework (AC-Off-POC) and the conventional DRL algorithm. The value of returns and stability of the double-Q-network off-policy correction algorithm for the deep deterministic policy gradient (DCAOP-DDPG) proposed by the authors are significantly higher than those of other DRL algorithms.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"5 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12102","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139041971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhicong Zhang, Wenyu Zhang, Donglin Zhu, Yi Xu, Changjun Zhou
Solder joint quality inspection is a crucial step in the qualification inspection of printed circuit board (PCB) components, and efficient and accurate inspection methods greatly improve production efficiency. In this paper, we propose a PCB solder joint quality detection algorithm based on a lightweight classification network. First, the Select Joint segmentation method is used to obtain solder joint information, with colour space conversion used to locate the solder joints; the mask method, contour detection, and box line method are combined to complete the extraction of solder joint information. Then, by combining the complementary characteristics of convolutional neural networks and Transformers, and by introducing cross-covariance attention to reduce the model's computational complexity and resource consumption while evenly distributing global-view mutual information throughout training, a new lightweight network model, MobileXT, is proposed for defect classification. Using only 16.4% of the computing resources of a Vision Transformer, the model achieves an average accuracy improvement of 31%. Additionally, the network is trained and validated on a dataset of 1804 solder joint images constructed from 93 PCB images, together with two external datasets, to evaluate MobileXT's performance. The proposed method achieves more efficient localisation of solder joint information and more accurate classification of solder joint defects, and the lightweight model design is well suited to industrial edge-device deployment.
{"title":"Printed circuit board solder joint quality inspection based on lightweight classification network","authors":"Zhicong Zhang, Wenyu Zhang, Donglin Zhu, Yi Xu, Changjun Zhou","doi":"10.1049/csy2.12101","DOIUrl":"https://doi.org/10.1049/csy2.12101","url":null,"abstract":"<p>Solder joint quality inspection is a crucial step in the qualification inspection of printed circuit board (PCB) components, and efficient and accurate inspection methods will greatly improve its production efficiency. In this paper, we propose a PCB solder joint quality detection algorithm based on a lightweight classification network. First, the Select Joint segmentation method was used to obtain the solder joint information, and colour space conversion was used to locate the solder joint. The mask method, contour detection, and box line method were combined to complete the extraction of solder joint information. Then, by combining the respective characteristics of convolutional neural network and Transformer and introducing Cross-covariance attention to reduce the computational complexity and resource consumption of the model and evenly distribute the global view mutual information in the whole training process, a new lightweight network model MobileXT is proposed to complete defect classification. Only 16.4% of the Vision Transformer computing resources used in this model can achieve an average accuracy improvement of 31%. Additionally, the network is trained and validated using a dataset of 1804 solder joint images constructed from 93 PCB images and two external datasets to evaluate MobileXT performance. The proposed method achieves more efficient localization of the solder joint information and more accurate classification of weld joint defects, and the lightweight model design is more appropriate for industrial edge device deployments.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"5 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12101","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134806524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}