Manipulation of a Complex Object Using Dual-Arm Robot with Mask R-CNN and Grasping Strategy
Pub Date : 2024-07-10 DOI: 10.1007/s10846-024-02132-0
Dumrongsak Kijdech, Supachai Vongbunyong
Hot forging is one of the most common manufacturing processes for producing brass workpieces. However, forging produces flash, a thin rim of excess material formed around the desired part. Using robots with a vision system to manipulate such workpieces raises several challenging issues, e.g. the uncertain shape of the flash, the color and reflectivity of the brass surface, varying lighting conditions, and uncertainty in the position and orientation of the workpiece. In this research, a Mask region-based convolutional neural network (Mask R-CNN) combined with image processing is used to resolve these issues. A depth camera provides the images for visual detection: the Mask R-CNN model was trained on color images, and the position of the object is determined from the depth image. A dual-arm, 7-degree-of-freedom collaborative robot with the proposed grasping strategy is used to grasp workpieces that may lie in an inappropriate position and pose. Finally, experiments were conducted to assess the visual detection process and the grasp planning of the robot.
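The abstract does not detail how the detected mask and the depth image are combined to locate the object; a minimal sketch of that step, assuming a pinhole camera with known intrinsics, a binary mask from any Mask R-CNN implementation, and a depth image aligned with the color frame (the names below are illustrative, not the authors’ code), could look as follows.

```python
import numpy as np

def grasp_point_from_mask(mask: np.ndarray, depth_m: np.ndarray,
                          fx: float, fy: float, cx: float, cy: float):
    """Back-project the mask centroid to a 3D point in the camera frame.

    mask    : (H, W) bool array from the instance segmentation model
    depth_m : (H, W) float array of depth in meters, aligned with the mask
    """
    v, u = np.nonzero(mask)                      # pixel rows/cols inside the mask
    if v.size == 0:
        raise ValueError("empty mask")
    z = np.median(depth_m[v, u])                 # robust depth estimate for the object
    u_c, v_c = u.mean(), v.mean()                # mask centroid in pixel coordinates
    x = (u_c - cx) * z / fx                      # pinhole back-projection
    y = (v_c - cy) * z / fy
    return np.array([x, y, z])

# Usage with a synthetic 10x10 mask patch at 0.5 m depth (intrinsics assumed):
mask = np.zeros((480, 640), dtype=bool); mask[200:210, 300:310] = True
depth = np.full((480, 640), 0.5)
print(grasp_point_from_mask(mask, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
```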
{"title":"Manipulation of a Complex Object Using Dual-Arm Robot with Mask R-CNN and Grasping Strategy","authors":"Dumrongsak Kijdech, Supachai Vongbunyong","doi":"10.1007/s10846-024-02132-0","DOIUrl":"https://doi.org/10.1007/s10846-024-02132-0","url":null,"abstract":"<p>Hot forging is one of the common manufacturing processes for producing brass workpieces. However forging produces flash which is a thin metal part around the desired part formed with an excessive material. Using robots with vision system to manipulate this workpiece has encountered several challenging issues, e.g. the uncertain shape of flash, color, reflection of brass surface, different lighting condition, and the uncertainty surrounding the position and orientation of the workpiece. In this research, Mask region-based convolutional neural network together with image processing is used to resolve these issues. The depth camera can provide images for visual detection. Machine learning Mask region-based convolutional neural network model was trained with color images and the position of the object is determined by the depth image. A dual arm 7 degree of freedom collaborative robot with proposed grasping strategy is used to grasp the workpiece that can be in inappropriate position and pose. Eventually, experiments were conducted to assess the visual detection process and the grasp planning of the robot.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"62 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teledrive: An Embodied AI Based Telepresence System
Pub Date : 2024-07-09 DOI: 10.1007/s10846-024-02124-0
Snehasis Banerjee, Sayan Paul, Ruddradev Roychoudhury, Abhijan Bhattacharya, Chayan Sarkar, Ashis Sau, Pradip Pramanick, Brojeshwar Bhowmick
This article presents ‘Teledrive’, a telepresence robotic system with embodied AI features that empowers an operator to navigate the telerobot in any unknown remote place with minimal human intervention. We conceive Teledrive in the context of democratizing remote ‘care-giving’ for elderly citizens as well as for isolated patients affected by contagious diseases. In particular, this paper focuses on the problem of navigating to a rough target area (like ‘bedroom’ or ‘kitchen’) rather than pre-specified point destinations. This ushers in a unique ‘AreaGoal’ based navigation feature, which has not been explored in depth in contemporary solutions. Further, we describe an edge computing-based software system built on a WebRTC-based communication framework to realize the aforementioned scheme through an easy-to-use speech-based human-robot interaction. Moreover, to enhance the ease of operation for the remote caregiver, we incorporate a ‘person following’ feature, whereby the robot follows a person on the move in its premises as directed by the operator. Additionally, the system presented is loosely coupled with specific robot hardware, unlike existing solutions. We have evaluated the efficacy of the proposed system through baseline experiments, a user study, and real-life deployment.
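The ‘AreaGoal’ idea, navigating to a named region rather than an exact point, can be illustrated with a minimal sketch; the area bounds, names, and the centroid-based goal rule below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Hypothetical semantic map: area name -> axis-aligned bounds (x_min, y_min, x_max, y_max) in meters.
AREA_MAP = {
    "bedroom": (0.0, 0.0, 3.5, 4.0),
    "kitchen": (4.0, 0.0, 7.0, 3.0),
}

def area_goal(command: str, robot_xy: np.ndarray) -> np.ndarray:
    """Resolve a spoken area name to a point goal inside that area.

    Here the goal is simply the area centroid; a real system would also
    check reachability and free space before sending the goal to the planner.
    """
    for name, (x0, y0, x1, y1) in AREA_MAP.items():
        if name in command.lower():
            return np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])
    return robot_xy                              # unknown area: stay in place

print(area_goal("please go to the kitchen", np.array([1.0, 1.0])))   # [5.5 1.5]
```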
{"title":"Teledrive: An Embodied AI Based Telepresence System","authors":"Snehasis Banerjee, Sayan Paul, Ruddradev Roychoudhury, Abhijan Bhattacharya, Chayan Sarkar, Ashis Sau, Pradip Pramanick, Brojeshwar Bhowmick","doi":"10.1007/s10846-024-02124-0","DOIUrl":"https://doi.org/10.1007/s10846-024-02124-0","url":null,"abstract":"<p>This article presents ‘Teledrive’, a telepresence robotic system with embodied AI features that empowers an operator to navigate the telerobot in any unknown remote place with minimal human intervention. We conceive Teledrive in the context of democratizing remote ‘care-giving’ for elderly citizens as well as for isolated patients, affected by contagious diseases. In particular, this paper focuses on the problem of navigating to a rough target area (like ‘bedroom’ or ‘kitchen’) rather than pre-specified point destinations. This ushers in a unique ‘AreaGoal’ based navigation feature, which has not been explored in depth in the contemporary solutions. Further, we describe an edge computing-based software system built on a WebRTC-based communication framework to realize the aforementioned scheme through an easy-to-use speech-based human-robot interaction. Moreover, to enhance the ease of operation for the remote caregiver, we incorporate a ‘person following’ feature, whereby a robot follows a person on the move in its premises as directed by the operator. Moreover, the system presented is loosely coupled with specific robot hardware, unlike the existing solutions. We have evaluated the efficacy of the proposed system through baseline experiments, user study, and real-life deployment.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"37 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Model-Based Reinforcement Learning for Predictive Control of Robotic Systems with Dense and Sparse Rewards
Pub Date : 2024-07-09 DOI: 10.1007/s10846-024-02118-y
Luka Antonyshyn, Sidney Givigi
Sparse rewards and sample efficiency are open areas of research in the field of reinforcement learning. These problems are especially important for applications of reinforcement learning to robotics and other cyber-physical systems, because in these domains many tasks are goal-based and naturally expressed as binary successes and failures, action spaces are large and continuous, and real interactions with the environment are limited. In this work, we propose Deep Value-and-Predictive-Model Control (DVPMC), a model-based predictive reinforcement learning algorithm for continuous control that uses system identification, value function approximation, and sampling-based optimization to select actions. The algorithm is evaluated on a dense-reward and a sparse-reward task. We show that it matches the performance of a predictive control approach on the dense-reward problem, and outperforms model-free and model-based learning algorithms on the sparse-reward task in terms of sample efficiency and performance. We verify the performance of an agent trained in simulation using DVPMC on a real robot playing the reach-avoid game. Video of the experiment can be found here: https://youtu.be/0Q274kcfn4c.
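The abstract names the ingredients of DVPMC (a learned dynamics model obtained by system identification, a value function approximator, and sampling-based optimization) but not how they are combined at decision time; a minimal random-shooting sketch under those assumptions, with hypothetical `predict_next`, `reward`, and `value` callables, is shown below.

```python
import numpy as np

def select_action(state, predict_next, reward, value,
                  action_dim=2, horizon=10, n_samples=256, rng=None):
    """Pick the first action of the best sampled action sequence.

    predict_next(state, action) -> next_state   (learned dynamics model)
    reward(state, action)       -> float        (task reward, possibly sparse)
    value(state)                -> float        (learned terminal value estimate)
    """
    if rng is None:
        rng = np.random.default_rng()
    best_return, best_first_action = -np.inf, None
    for _ in range(n_samples):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, ret = state, 0.0
        for a in actions:                        # roll the sequence through the model
            ret += reward(s, a)
            s = predict_next(s, a)
        ret += value(s)                          # bootstrap beyond the horizon
        if ret > best_return:
            best_return, best_first_action = ret, actions[0]
    return best_first_action

# Dummy 2D point-mass example: drive the state toward the origin.
act = select_action(np.zeros(2),
                    predict_next=lambda s, a: s + 0.1 * a,
                    reward=lambda s, a: -np.linalg.norm(s),
                    value=lambda s: -np.linalg.norm(s))
print(act.shape)                                 # (2,)
```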
{"title":"Deep Model-Based Reinforcement Learning for Predictive Control of Robotic Systems with Dense and Sparse Rewards","authors":"Luka Antonyshyn, Sidney Givigi","doi":"10.1007/s10846-024-02118-y","DOIUrl":"https://doi.org/10.1007/s10846-024-02118-y","url":null,"abstract":"<p>Sparse rewards and sample efficiency are open areas of research in the field of reinforcement learning. These problems are especially important when considering applications of reinforcement learning to robotics and other cyber-physical systems. This is so because in these domains many tasks are goal-based and naturally expressed with binary successes and failures, action spaces are large and continuous, and real interactions with the environment are limited. In this work, we propose Deep Value-and-Predictive-Model Control (DVPMC), a model-based predictive reinforcement learning algorithm for continuous control that uses system identification, value function approximation and sampling-based optimization to select actions. The algorithm is evaluated on a dense reward and a sparse reward task. We show that it can match the performance of a predictive control approach to the dense reward problem, and outperforms model-free and model-based learning algorithms on the sparse reward task on the metrics of sample efficiency and performance. We verify the performance of an agent trained in simulation using DVPMC on a real robot playing the reach-avoid game. Video of the experiment can be found here: https://youtu.be/0Q274kcfn4c.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"30 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Encoder Spatio-Temporal Feature Fusion Network for Electric Vehicle Charging Load Prediction
Pub Date : 2024-07-09 DOI: 10.1007/s10846-024-02125-z
Yufan Chen, Mengqin Wang, Yanling Wei, Xueliang Huang, Shan Gao
Electric vehicles (EVs) have emerged as a preferred option for decarbonizing road transport. Accurate charging load prediction is essential for the systematic construction of EV charging facilities and for coordinating EV energy demand with the requisite peak power supply. The charging load of EVs exhibits high complexity and randomness due to temporal and spatial uncertainties. Therefore, this paper proposes a SEDformer-based charging load prediction method to capture the spatio-temporal characteristics of charging load data. As a deep learning model, SEDformer comprises multiple encoders and a single decoder. In particular, the proposed model includes a Temporal Encoder Block based on the self-attention mechanism and a Spatial Encoder Block based on the channel attention mechanism with sequence decomposition, followed by an aggregated decoder for information fusion. The proposed method outperforms various baseline models on a real-world dataset from Palo Alto, U.S., demonstrating its superiority in addressing spatio-temporal data-driven load forecasting problems.
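A minimal sketch of the two-encoder idea, assuming PyTorch: one block applies self-attention across time, the other re-weights channels (e.g., stations) with a channel-attention gate, and a small decoder fuses both streams. The layer sizes, the sequence-decomposition step, and the aggregation scheme of the actual SEDformer are not specified in the abstract, so everything here is illustrative.

```python
import torch
import torch.nn as nn

class TemporalEncoderBlock(nn.Module):
    """Self-attention over the time axis of a (batch, time, channels) series."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                        # x: (B, T, C)
        out, _ = self.attn(x, x, x)              # attend across time steps
        return self.norm(x + out)

class SpatialEncoderBlock(nn.Module):
    """Channel (station/region) attention via a squeeze-and-excitation gate."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (B, T, C)
        weights = self.gate(x.mean(dim=1))       # squeeze time, score each channel
        return x * weights.unsqueeze(1)          # re-weight channels

class TwoEncoderForecaster(nn.Module):
    """Fuse both encoders and decode the next `horizon` load values."""
    def __init__(self, channels: int, horizon: int):
        super().__init__()
        self.temporal = TemporalEncoderBlock(channels)
        self.spatial = SpatialEncoderBlock(channels)
        self.decoder = nn.Linear(2 * channels, horizon)

    def forward(self, x):                        # x: (B, T, C)
        fused = torch.cat([self.temporal(x).mean(dim=1),
                           self.spatial(x).mean(dim=1)], dim=-1)
        return self.decoder(fused)               # (B, horizon)

# Usage: 24 past hours of load for 8 charging stations, forecast next 6 hours.
model = TwoEncoderForecaster(channels=8, horizon=6)
print(model(torch.randn(2, 24, 8)).shape)        # torch.Size([2, 6])
```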
{"title":"Multi-Encoder Spatio-Temporal Feature Fusion Network for Electric Vehicle Charging Load Prediction","authors":"Yufan Chen, Mengqin Wang, Yanling Wei, Xueliang Huang, Shan Gao","doi":"10.1007/s10846-024-02125-z","DOIUrl":"https://doi.org/10.1007/s10846-024-02125-z","url":null,"abstract":"<p>Electric vehicles (EVs) have been initiated as a preference for decarbonizing road transport. Accurate charging load prediction is essential for the construction of EV charging facilities systematically and for the coordination of EV energy demand with the requisite peak power supply. It is noted that the charging load of EVs exhibits high complexity and randomness due to temporal and spatial uncertainties. Therefore, this paper proposes a SEDformer-based charging road prediction method to capture the spatio-temporal characteristics of charging load data. As a deep learning model, SEDformer comprises multiple encoders and a single decoder. In particular, the proposed model includes a Temporal Encoder Block based on the self-attention mechanism and a Spatial Encoder Block based on the channel attention mechanism with sequence decomposition, followed by an aggregated decoder for information fusion. It is shown that the proposed method outperforms various baseline models on a real-world dataset from Palo Alto, U.S., demonstrating its superiority in addressing spatio-temporal data-driven load forecasting problems.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"13 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GreatBlue: a 55-Pound Vertical-Takeoff-and-Landing Fixed-Wing sUAS for Science; Systems, Communication, Simulation, Data Processing, Payloads, Package Delivery, and Mission Flight Performance
Pub Date : 2024-07-09 DOI: 10.1007/s10846-024-02052-z
Calvin Coopmans, Stockton Slack, Nathan Schwemmer, Chase Vance, A. J. Beckwith, Daniel J. Robinson
As small uncrewed aircraft systems (sUAS) grow in popularity and in number, larger and larger drone aircraft will become more common, up to the FAA limit of 55 pounds gross take-off weight (GTOW) and beyond. Due to their larger payload capabilities, longer flight times, and better safety systems, autonomous systems that make full use of the 14 CFR Part 107 drone operations regulations will become more common, especially for operations such as imagery or other data collection, which scale well with longer flight times and larger flight areas. In this paper, a unique all-electric 55-pound VTOL-transition fixed-wing sUAS specifically engineered for scientific data collection, named “GreatBlue”, is presented, along with its systems, communications, scientific payload, data collection and processing, package delivery payload, ground control station, and mission simulation system. Able to fly for up to 2.5 hours while collecting multispectral remotely sensed imagery, the GreatBlue system is demonstrated with a package delivery flight example and with flight data from two scientific data collection flights, over California almond fields and a Utah reservoir, including flight plan versus as-flown comparisons.
{"title":"GreatBlue: a 55-Pound Vertical-Takeoff-and-Landing Fixed-Wing sUAS for Science; Systems, Communication, Simulation, Data Processing, Payloads, Package Delivery, and Mission Flight Performance","authors":"Calvin Coopmans, Stockton Slack, Nathan Schwemmer, Chase Vance, A. J. Beckwith, Daniel J. Robinson","doi":"10.1007/s10846-024-02052-z","DOIUrl":"https://doi.org/10.1007/s10846-024-02052-z","url":null,"abstract":"<p>As small, uncrewed systems (sUAS) grow in popularity and in number, larger and larger drone aircraft will become more common–up to the FAA limit of 55 pound gross take-off weight (GTOW) and beyond. Due to their larger payload capabilities, longer flight time, and better safety systems, autonomous systems that maximize CFR 14 Part 107 flight drone operations regulations will become more common, especially for operations such as imagery or other data collection which scale well with longer flight times and larger flight areas. In this new paper, a unique all-electric 55-pound VTOL transition fixed-wing sUAS specifically engineered for scientific data collection named “GreatBlue” is presented, along with systems, communications, scientific payload, data collection and processing, package delivery payload, ground control station, and mission simulation system. Able to fly for up to 2.5 hours while collecting multispectral remotely-sensed imagery, the unique GreatBlue system is shown, along with a package delivery flight example, flight data from two scientific data collection flights over California almond fields and a Utah Reservoir are shown including flight plan vs. as-flown.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"37 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integration of Deep Q-Learning with a Grasp Quality Network for Robot Grasping in Cluttered Environments
Pub Date : 2024-07-09 DOI: 10.1007/s10846-024-02127-x
Chih-Yung Huang, Yu-Hsiang Shao
During the movement of a robotic arm, collisions can easily occur if the arm grasps directly at multiple tightly stacked objects, leading to grasp failures or machine damage. Grasp success can be improved by rearranging or moving objects to clear space for grasping. This paper presents a high-performance deep Q-learning framework that helps robotic arms learn synchronized push and grasp tasks. In this framework, a grasp quality network is used to precisely identify stable grasp positions on objects, expediting model convergence and mitigating the sparse-reward problem caused by grasp failures during training. Furthermore, a novel reward function is proposed for evaluating whether a pushing action is effective. The proposed framework achieved grasp success rates of 92% and 89% in simulations and real-world experiments, respectively. Moreover, only 200 training steps were required to achieve a grasp success rate of 80%, which indicates the suitability of the proposed framework for rapid deployment in industrial settings.
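The abstract states only that a novel reward function evaluates whether a push was effective, without giving its form; as an illustrative stand-in (not the authors’ reward), one common minimal check scores the change a push produces in the scene heightmap.

```python
import numpy as np

def push_effective(height_before: np.ndarray, height_after: np.ndarray,
                   change_thresh: float = 0.01, min_changed_ratio: float = 0.01) -> float:
    """Return 1.0 if the push visibly rearranged the scene, else 0.0.

    height_before / height_after : (H, W) heightmaps in meters taken before
    and after the push. A cell counts as "changed" if its height moved by
    more than `change_thresh`; the push is rewarded when enough cells changed.
    """
    changed = np.abs(height_after - height_before) > change_thresh
    return 1.0 if changed.mean() > min_changed_ratio else 0.0

# Usage: a push that slides a small stack of blocks counts as effective.
before = np.zeros((100, 100)); before[40:50, 40:50] = 0.05
after = np.zeros((100, 100));  after[40:50, 55:65] = 0.05
print(push_effective(before, after))             # 1.0
```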
{"title":"Integration of Deep Q-Learning with a Grasp Quality Network for Robot Grasping in Cluttered Environments","authors":"Chih-Yung Huang, Yu-Hsiang Shao","doi":"10.1007/s10846-024-02127-x","DOIUrl":"https://doi.org/10.1007/s10846-024-02127-x","url":null,"abstract":"<p>During the movement of a robotic arm, collisions can easily occur if the arm directly grasps at multiple tightly stacked objects, thereby leading to grasp failures or machine damage. Grasp success can be improved through the rearrangement or movement of objects to clear space for grasping. This paper presents a high-performance deep Q-learning framework that can help robotic arms to learn synchronized push and grasp tasks. In this framework, a grasp quality network is used for precisely identifying stable grasp positions on objects to expedite model convergence and solve the problem of sparse rewards caused during training because of grasp failures. Furthermore, a novel reward function is proposed for effectively evaluating whether a pushing action is effective. The proposed framework achieved grasp success rates of 92% and 89% in simulations and real-world experiments, respectively. Furthermore, only 200 training steps were required to achieve a grasp success rate of 80%, which indicates the suitability of the proposed framework for rapid deployment in industrial settings.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"29 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy’s Sky-High Battle: The Use of Unmanned Aircraft Systems for Law Enforcement in the European Union
Pub Date : 2024-07-09 DOI: 10.1007/s10846-024-02071-w
E. Öykü Kurtpınar
Benefiting from the rapid advancements in Unmanned Aircraft Systems (UAS) technology with enhanced tracking and data collection capabilities, law enforcement authorities have re-discovered the air as a dimension where state power can be exercised in a more affordable, accessible, and compact way. On the other hand, during law enforcement operations, UAS can collect various types of data that can be personal or sensitive, threatening the right to privacy and the data protection of the data subjects. Risks include challenges related to data security, bulk data collection, the diminished transparency and fairness resulting from the inconspicuous nature of UAS, as well as ethical concerns intertwined with privacy and data protection. Upon examination of the legal framework, including the General Data Protection Regulation, the Law Enforcement Directive, various aviation rules, and the new proposal for the Artificial Intelligence Act, it becomes apparent that the EU legal framework’s adequacy in safeguarding privacy and data protection against law enforcement use of UAS is context-dependent, varying across use cases. The current framework lacks clarity, leading to arbitrary application and limited protection for data subjects. Enforcement of safeguards is insufficient, and the Aviation Regulations applicable to law enforcement UAS require member states' opt-in, which, to the authors’ knowledge, has not occurred. The Artificial Intelligence Act addresses UAS operations but focuses on market risks rather than obligations imposed on law enforcement authorities. Consequently, the existing framework is rendered inadequate for medium- to high-risk law enforcement operations, leaving individuals vulnerable and insufficiently protected against intrusive UAS surveillance. Rectifying this involves addressing the enforcement gap and making the necessary amendments to the relevant regulatory aspects. Additionally, the implementation of specific technical measures and steps to foster effective cooperation among stakeholders in UAS deployment for law enforcement is imperative.
{"title":"Privacy’s Sky-High Battle: The Use of Unmanned Aircraft Systems for Law Enforcement in the European Union","authors":"E. Öykü Kurtpınar","doi":"10.1007/s10846-024-02071-w","DOIUrl":"https://doi.org/10.1007/s10846-024-02071-w","url":null,"abstract":"<p>Benefiting from the rapid advancements in Unmanned Aircraft Systems (UAS) technology with enhanced tracking and data collection capabilities, law enforcement authorities re-discovered air as a dimension where state power can be exercised in a more affordable, accessible, and compact way. On the other hand, during law enforcement operations, UAS can collect various types of data that can be personal or sensitive, threatening the right to privacy and data protection of the data subjects. Risks include challenges related to data security, bulk data collection, the diminished transparency and fairness resulting from the inconspicuous nature of UAS, as well as ethical concerns intertwined with privacy and data protection. Upon examination of the legal framework including the General Data Protection Regulation the Law Enforcement Directive, various Aviation rules, and the new proposal for the Artificial Intelligence Act, it becomes apparent that the EU legal framework’s adequacy in safeguarding privacy and data protection against law enforcement use of UAS is context-dependent, varying across use cases. The current framework lacks clarity, leading to arbitrary application and limited protection for data subjects. Enforcement of safeguards is insufficient, and the Aviation Regulations, applicable to law enforcement UAS, require member states' opt-in, which has not occurred as of the authors’ knowledge. The Artificial Intelligence Act addresses UAS operations but focuses on market risks rather than obligations imposed on law enforcement authorities. Consequently, the existing framework is rendered inadequate for medium to high-risk law enforcement operations, leaving individuals vulnerable and insufficiently protected against intrusive UAS surveillance. Rectifying this involves addressing the enforcement gap and making the necessary amendments to relevant regulatory aspects. Additionally, the implementation of specific technical measures and steps to foster effective cooperation among stakeholders in UAS deployment for law enforcement is imperative.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"27 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depth-Enhanced Deep Learning Approach For Monocular Camera Based 3D Object Detection
Pub Date : 2024-07-09 DOI: 10.1007/s10846-024-02128-w
Chuyao Wang, Nabil Aouf
Automatic 3D object detection using monocular cameras presents significant challenges in the context of autonomous driving. Precise labeling of 3D object scales requires accurate spatial information, which is difficult to obtain from a single image due to the inherent lack of depth information in monocular images, compared to LiDAR data. In this paper, we propose a novel approach to address this issue by enhancing deep neural networks with depth information for monocular 3D object detection. The proposed method comprises three key components: 1) Feature Enhancement Pyramid Module: We extend the conventional Feature Pyramid Networks (FPN) by introducing a feature enhancement pyramid network. This module fuses feature maps from the original pyramid and captures contextual correlations across multiple scales. To increase the connectivity between low-level and high-level features, additional pathways are incorporated. 2) Auxiliary Dense Depth Estimator: We introduce an auxiliary dense depth estimator that generates dense depth maps to enhance the spatial perception capabilities of the deep network model without adding computational burden. 3) Augmented Center Depth Regression: To aid center depth estimation, we employ additional bounding box vertex depth regression based on geometry. Our experimental results demonstrate the superiority of the proposed technique over existing competitive methods reported in the literature. The approach showcases remarkable performance improvements in monocular 3D object detection, making it a promising solution for autonomous driving applications.
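The “augmented center depth regression based on geometry” is only named in the abstract; the underlying geometric relation, that under a pinhole model an object's depth equals its real height times the focal length divided by its pixel height, can be sketched as follows (the focal length and object height used here are assumed values, not the paper's parameters).

```python
def depth_from_bbox_height(focal_y_px: float, object_height_m: float,
                           bbox_height_px: float) -> float:
    """Pinhole-geometry depth estimate: z = f_y * H_real / h_pixels."""
    return focal_y_px * object_height_m / bbox_height_px

# Usage: a car roughly 1.5 m tall spanning 75 px with a 720 px focal length.
print(depth_from_bbox_height(720.0, 1.5, 75.0))  # 14.4 meters
```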
{"title":"Depth-Enhanced Deep Learning Approach For Monocular Camera Based 3D Object Detection","authors":"Chuyao Wang, Nabil Aouf","doi":"10.1007/s10846-024-02128-w","DOIUrl":"https://doi.org/10.1007/s10846-024-02128-w","url":null,"abstract":"<p>Automatic 3D object detection using monocular cameras presents significant challenges in the context of autonomous driving. Precise labeling of 3D object scales requires accurate spatial information, which is difficult to obtain from a single image due to the inherent lack of depth information in monocular images, compared to LiDAR data. In this paper, we propose a novel approach to address this issue by enhancing deep neural networks with depth information for monocular 3D object detection. The proposed method comprises three key components: 1)Feature Enhancement Pyramid Module: We extend the conventional Feature Pyramid Networks (FPN) by introducing a feature enhancement pyramid network. This module fuses feature maps from the original pyramid and captures contextual correlations across multiple scales. To increase the connectivity between low-level and high-level features, additional pathways are incorporated. 2)Auxiliary Dense Depth Estimator: We introduce an auxiliary dense depth estimator that generates dense depth maps to enhance the spatial perception capabilities of the deep network model without adding computational burden. 3)Augmented Center Depth Regression: To aid center depth estimation, we employ additional bounding box vertex depth regression based on geometry. Our experimental results demonstrate the superiority of the proposed technique over existing competitive methods reported in the literature. The approach showcases remarkable performance improvements in monocular 3D object detection, making it a promising solution for autonomous driving applications.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"13 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
China’s New Pattern of Rule of Law on UAS
Pub Date : 2024-07-09 DOI: 10.1007/s10846-024-02105-3
Luping Zhang
China, as the world’s most prominent market for Unmanned Aircraft Systems (UAS), has just passed a new regulation on UAS. The new regulation is expected to form a new pattern of rule of law on UAS in China. Given the need for international harmonisation of laws, this article highlights three aspects of China’s new UAS legislation against an international setting.
{"title":"China’s New Pattern of Rule of Law on UAS","authors":"Luping Zhang","doi":"10.1007/s10846-024-02105-3","DOIUrl":"https://doi.org/10.1007/s10846-024-02105-3","url":null,"abstract":"<p>China as the world’s prominent market for Unmanned Aircraft Systems (UAS) has just passed a new regulation on UAS. The new regulation is expected to form a new pattern of rule of law on UAS in China. With the need for harmonisation of laws internationally, this article highlights the three aspects out of China’s new UAS legislation against an international setting.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"43 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stereo-RIVO: Stereo-Robust Indirect Visual Odometry
Pub Date : 2024-07-09 DOI: 10.1007/s10846-024-02116-0
Erfan Salehi, Ali Aghagolzadeh, Reshad Hosseini
Mobile robots and autonomous systems rely on advanced guidance modules which often incorporate cameras to enable key functionalities. These modules are equipped with visual odometry (VO) and visual simultaneous localization and mapping (VSLAM) algorithms that work by analyzing changes between successive frames captured by cameras. VO/VSLAM-based systems are critical backbones for autonomous vehicles, virtual reality, structure from motion, and other robotic operations. VO/VSLAM systems encounter difficulties in real-time applications in outdoor environments on restricted hardware and software platforms. While many VO systems aim to achieve high accuracy and speed, they often exhibit a high degree of complexity and limited robustness. To overcome these challenges, this paper proposes a new VO system called Stereo-RIVO that balances accuracy, speed, and computational cost. The algorithm is based on a new data association module which consists of two primary components: a scene-matching process that achieves exceptional precision without feature extraction, and a key-frame detection technique based on a model of scene movement. The performance of the proposed VO system has been tested extensively on all sequences of the KITTI and UTIAS datasets, analyzing its efficiency in outdoor dynamic and indoor static environments, respectively. The results of these tests indicate that the proposed Stereo-RIVO outperforms other state-of-the-art methods in terms of robustness, accuracy, and speed. Our implementation code of Stereo-RIVO is available at: https://github.com/salehierfan/Stereo-RIVO.
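The abstract describes key-frame detection “based on a model of scene movement” without stating the criterion; a minimal, commonly used stand-in is to declare a new key-frame once the motion accumulated since the last key-frame exceeds translation or rotation thresholds, as sketched below (the thresholds and pose representation are assumptions, not the paper's actual rule).

```python
import numpy as np

def is_new_keyframe(R: np.ndarray, t: np.ndarray,
                    max_translation_m: float = 0.5,
                    max_rotation_rad: float = np.deg2rad(10.0)) -> bool:
    """Decide whether the relative pose since the last key-frame is large enough.

    R : (3, 3) rotation matrix, t : (3,) translation vector, both expressing
    the current camera pose relative to the last key-frame.
    """
    translation = np.linalg.norm(t)
    # Rotation angle recovered from the trace of the rotation matrix.
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return translation > max_translation_m or angle > max_rotation_rad

# Usage: 0.6 m of forward motion with no rotation triggers a new key-frame.
print(is_new_keyframe(np.eye(3), np.array([0.0, 0.0, 0.6])))   # True
```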
{"title":"Stereo-RIVO: Stereo-Robust Indirect Visual Odometry","authors":"Erfan Salehi, Ali Aghagolzadeh, Reshad Hosseini","doi":"10.1007/s10846-024-02116-0","DOIUrl":"https://doi.org/10.1007/s10846-024-02116-0","url":null,"abstract":"<p>Mobile robots and autonomous systems rely on advanced guidance modules which often incorporate cameras to enable key functionalities. These modules are equipped with visual odometry (VO) and visual simultaneous localization and mapping (VSLAM) algorithms that work by analyzing changes between successive frames captured by cameras. VO/VSLAM-based systems are critical backbones for autonomous vehicles, virtual reality, structure from motion, and other robotic operations. VO/VSLAM systems encounter difficulties when implementing real-time applications in outdoor environments with restricted hardware and software platforms. While many VO systems target achieving high accuracy and speed, they often exhibit high degree of complexity and limited robustness. To overcome these challenges, this paper aims to propose a new VO system called Stereo-RIVO that balances accuracy, speed, and computational cost. Furthermore, this algorithm is based on a new data association module which consists of two primary components: a scene-matching process that achieves exceptional precision without feature extraction and a key-frame detection technique based on a model of scene movement. The performance of this proposed VO system has been tested extensively for all sequences of KITTI and UTIAS datasets for analyzing efficiency for outdoor dynamic and indoor static environments, respectively. The results of these tests indicate that the proposed Stereo-RIVO outperforms other state-of-the-art methods in terms of robustness, accuracy, and speed. Our implementation code of stereo-RIVO is available at: https://github.com/salehierfan/Stereo-RIVO.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"11 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141570676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}