
Latest publications from the 2021 IEEE International Conference on Autonomous Systems (ICAS)

Simultaneous Calibration of Positions, Orientations, and Time Offsets, Among Multiple Microphone Arrays
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551166
Chishio Sugiyama, Katsutoshi Itoyama, Kenji Nishida, K. Nakadai
This paper examines the estimation of positions, orientations, and time offsets among multiple microphone arrays, and the resulting sound localization. Conventional methods have limitations: they require multiple calibration steps, assume synchronization between the arrays, and need a priori information, which leads to convergence to a local optimum and long convergence times. Accordingly, we propose a novel calibration method that simultaneously optimizes the positions and orientations of the microphone arrays and the time offsets between them. Numerical simulations achieved accurate and fast calibration of the microphone parameters without falling into a local optimum, even when using asynchronous microphone arrays.
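For illustration only, the sketch below sets up a generic joint least-squares calibration of 2-D array positions, orientations, and clock offsets from per-array direction-of-arrival (DOA) and time-of-arrival (TOA) measurements of known sound events; the measurement model, variable names, and solver are assumptions made for this example, not the authors' formulation.

```python
# Hedged sketch: a generic simultaneous calibration, NOT the paper's method.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound (m/s)

def wrap(a):
    """Wrap angles to (-pi, pi] so angular residuals stay small."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def residuals(params, sources, doa, toa):
    # Array 0 is the reference (pose and clock fixed); every other array
    # contributes [x, y, theta, offset] to the parameter vector.
    n_arrays = doa.shape[0]
    poses = np.vstack([[0.0, 0.0, 0.0, 0.0], params.reshape(n_arrays - 1, 4)])
    res = []
    for k, (x, y, th, off) in enumerate(poses):
        d = sources - np.array([x, y])
        pred_doa = np.arctan2(d[:, 1], d[:, 0]) - th          # bearing in the array frame
        pred_toa = np.linalg.norm(d, axis=1) / C + off        # arrival time incl. clock offset
        res.append(wrap(pred_doa - doa[k]))
        res.append((pred_toa - toa[k]) * C)                   # scale seconds to metres
    return np.concatenate(res)

# Toy data: 3 arrays, 8 calibration sound events at known positions.
rng = np.random.default_rng(0)
true = np.array([[2.0, 0.5, 0.4, 0.02],                       # ground-truth [x, y, theta, offset]
                 [-1.5, 3.0, -0.8, -0.01]])
poses = np.vstack([[0.0, 0.0, 0.0, 0.0], true])
sources = rng.uniform(-4.0, 4.0, size=(8, 2))
doa = np.array([np.arctan2(sources[:, 1] - p[1], sources[:, 0] - p[0]) - p[2] for p in poses])
toa = np.array([np.linalg.norm(sources - p[:2], axis=1) / C + p[3] for p in poses])

fit = least_squares(residuals, x0=np.zeros(8), args=(sources, doa, toa))
print(fit.x.reshape(2, 4))   # recovers [x, y, theta, offset] of the two non-reference arrays
```

Because all unknowns enter a single residual vector, positions, orientations, and offsets are estimated in one solve rather than in separate calibration steps, which is the property the abstract emphasizes.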
Citations: 2
Gesture Learning For Self-Driving Cars
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551186
Ethan Shaotran, Jonathan J. Cruz, V. Reddi
Human-computer interaction (HCI) is crucial for safety as autonomous vehicles (AVs) become commonplace. Yet, little effort has been put toward ensuring that AVs understand human communications on the road. In this paper, we present Gesture Learning for Advanced Driver Assistance Systems (GLADAS), a deep learning-based self-driving car hand gesture recognition system developed and evaluated using virtual simulation. We focus on gestures as they are a natural and common way for pedestrians to interact with drivers. We challenge the system to perform in typical, everyday driving interactions with humans. Our results provide a baseline performance of 94.56% accuracy and 85.91% F1 score, promising statistics that surpass human performance and motivate the need for further research into human-AV interaction.
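For context, the accuracy and F1 figures quoted above are standard classification metrics; the minimal sketch below, using invented gesture labels and an assumed macro average, shows how such numbers are typically computed (it is not the GLADAS evaluation code).

```python
# Hedged sketch: standard metric computation with made-up labels.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["left", "right", "stop", "stop", "go", "left", "go", "right"]
y_pred = ["left", "right", "stop", "go",   "go", "left", "go", "stop"]

print("accuracy:", accuracy_score(y_true, y_pred))             # fraction of gestures classified correctly
print("macro F1:", f1_score(y_true, y_pred, average="macro"))  # class-balanced harmonic mean of precision/recall
```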
Citations: 1
Observational Learning: Imitation Through an Adaptive Probabilistic Approach
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551152
Sheida Nozari, L. Marcenaro, David Martín, C. Regazzoni
This paper proposes an adaptive method to enable imitation learning from expert demonstrations in a multi-agent context. The proposed system applies inverse reinforcement learning to a coupled Dynamic Bayesian Network to facilitate dynamic learning in an interactive system. The method studies the interaction at both discrete and continuous levels by identifying inter-relationships between the objects to facilitate prediction of an expert agent. We evaluate the learning procedure in the learner agent's scene based on a probabilistic reward function. Our goal is to estimate policies that predict trajectories matching the observed ones by minimizing the Kullback-Leibler divergence. The reward policies provide a probabilistic dynamic structure that minimises abnormalities.
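Written generically (the abstract does not give the exact parameterization), the trajectory-matching objective can be stated as a Kullback-Leibler minimization between the expert's trajectory distribution p_E and the distribution induced by the learned policy:

```latex
\min_{\theta}\; D_{\mathrm{KL}}\!\left(p_{E}(\tau)\,\middle\|\,p_{\pi_{\theta}}(\tau)\right)
  = \min_{\theta}\; \mathbb{E}_{\tau\sim p_{E}}\!\left[\log p_{E}(\tau)-\log p_{\pi_{\theta}}(\tau)\right],
\qquad
p_{\pi_{\theta}}(\tau)=p(s_{0})\prod_{t}\pi_{\theta}(a_{t}\mid s_{t})\,p(s_{t+1}\mid s_{t},a_{t}).
```

Only the policy factors depend on the parameters, so under this generic formulation the minimization amounts to maximizing the expected log-likelihood of the expert's actions under the learned policy.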
Citations: 1
Semantic Image Segmentation Guided By Scene Geometry
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551117
Sotirios Papadopoulos, Ioannis Mademlis, I. Pitas
Semantic image segmentation is an important functionality in various applications, such as robotic vision for autonomous cars, drones, etc. Modern Convolutional Neural Networks (CNNs) process input RGB images and predict per-pixel semantic classes. Depth maps have been successfully utilized to increase accuracy over RGB-only input. They can be used as an additional input channel complementing the RGB image, or they may be estimated by an extra neural branch under a multitask training setting. In contrast to these approaches, in this paper we explore a novel regularizer that penalizes differences between semantic and self-supervised depth predictions on presumed object boundaries during CNN training. The proposed method does not resort to multitask training (which may require a more complex CNN backbone to avoid underfitting), does not rely on RGB-D or stereoscopic 3D training data, and does not require known or estimated depth maps during inference. Quantitative evaluation on a public scene-parsing video dataset for autonomous driving indicates enhanced semantic segmentation accuracy with zero inference runtime overhead.
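As a rough illustration of the idea (not the paper's actual loss), the sketch below penalizes the absence of semantic edges at pixels where a self-supervised depth prediction has a large discontinuity; the threshold, weighting, and function names are assumptions.

```python
# Hedged sketch: a boundary-consistency regularizer between semantics and depth.
import torch
import torch.nn.functional as F

def spatial_gradients(x):
    """Absolute horizontal/vertical differences of a (B, C, H, W) tensor."""
    dx = (x[..., :, 1:] - x[..., :, :-1]).abs()
    dy = (x[..., 1:, :] - x[..., :-1, :]).abs()
    return dx, dy

def boundary_regularizer(sem_logits, depth, tau=0.1):
    """Encourage semantic class changes wherever the depth map has a discontinuity."""
    prob = F.softmax(sem_logits, dim=1)                      # (B, K, H, W)
    sdx, sdy = spatial_gradients(prob)
    sem_edge_x, sem_edge_y = sdx.sum(1), sdy.sum(1)          # per-pixel semantic edge strength
    ddx, ddy = spatial_gradients(depth)                      # depth: (B, 1, H, W)
    mask_x = (ddx > tau).float().squeeze(1)                  # presumed object boundaries
    mask_y = (ddy > tau).float().squeeze(1)
    loss_x = (mask_x * (1.0 - sem_edge_x.clamp(max=1.0))).mean()
    loss_y = (mask_y * (1.0 - sem_edge_y.clamp(max=1.0))).mean()
    return loss_x + loss_y

# Toy usage with random tensors standing in for network outputs.
logits = torch.randn(2, 19, 64, 128, requires_grad=True)     # e.g. 19 semantic classes
depth = torch.rand(2, 1, 64, 128)                             # self-supervised depth prediction
reg = boundary_regularizer(logits, depth)
reg.backward()                                                 # differentiable, so usable as a training term
print(float(reg))
```

Because such a term is only used during training, nothing changes at inference time, which is consistent with the zero runtime overhead claimed above.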
Citations: 5
Building And Measuring Trust In Human-Machine Systems
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551131
Lida Ghaemi Dizaji, Yaoping Hu
In human-machine systems (HMS), the trust that humans place in machines is a complex concept and attracts increasing research effort. Herein, we reviewed recent studies on building and measuring trust in HMS. The review was based on one comprehensive model of trust, IMPACTS, which has seven features: intention, measurability, performance, adaptivity, communication, transparency, and security. The review found that, in the past five years, HMS have fulfilled the features of intention, measurability, communication, and transparency. Most HMS consider the feature of performance. However, HMS rarely address the feature of adaptivity and neglect the feature of security because they use stand-alone simulations. These findings indicate that future work considering the features of adaptivity and/or security is imperative to foster human trust in HMS.
Citations: 3
Towards Three-Dimensional Active Incoherent Millimeter-Wave Imaging
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551121
Stavros Vakalis, J. Nanzer
Active incoherent millimeter-wave (AIM) imaging is a new technique that combines aspects of passive millimeter-wave imaging and noise radar to obtain high-speed imagery. Using an interferometric receiving array combined with a small set of uncorrelated noise transmitters, measurements of the scene's Fourier-transform domain can be obtained rapidly, and scene images can be generated quickly via a two-dimensional inverse Fourier transform. Previously, AIM imaging provided two-dimensional reconstructions of the scene. In this work we explore the use of active incoherent millimeter-wave imaging for automotive sensing by investigating feasible array layouts for automobiles, and a new technique that imparts range estimation to obtain three-dimensional imaging information.
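The Fourier-domain relationship described above can be illustrated with a toy example (not the authors' processing chain): sample a synthetic scene's two-dimensional spatial-frequency domain on an assumed sparse (u, v) grid and reconstruct the image with an inverse 2-D FFT.

```python
# Hedged sketch: interferometric image formation via a 2-D inverse FFT.
import numpy as np

N = 64
scene = np.zeros((N, N))
scene[20:24, 30:34] = 1.0                                 # one small bright target

# Each receiver pair (baseline) measures one sample of the scene's spatial-frequency
# domain; the whole array fills a sparse grid of such samples.
visibilities = np.fft.fft2(scene)
mask = np.random.default_rng(1).random((N, N)) < 0.4      # assumed (u, v) coverage
sampled = visibilities * mask

# Image formation: two-dimensional inverse Fourier transform of the sampled domain.
image = np.abs(np.fft.ifft2(sampled))
print(image.shape, float(image.max()))
```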
Citations: 3
Learning Robust Features for 3D Object Pose Estimation
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551126
Christos Papaioannidis, I. Pitas
Object pose estimation remains an open and important task for autonomous systems, allowing them to perceive and interact with the surrounding environment. To this end, this paper proposes a 3D object pose estimation method that is suitable for execution on embedded systems. Specifically, a novel multi-task objective function is proposed in order to train a Convolutional Neural Network (CNN) to extract pose-related features from RGB images, which are subsequently utilized in a Nearest-Neighbor (NN) search-based post-processing step to obtain the final 3D object poses. By utilizing a symmetry-aware term and unit quaternions in the proposed objective function, our method yields more robust and discriminative features, thus increasing 3D object pose estimation accuracy compared to the state of the art. In addition, the employed feature extraction network utilizes a lightweight CNN architecture, allowing execution on hardware with limited computational capabilities. Finally, we demonstrate that the proposed method is also able to generalize successfully to previously unseen objects, without the need for extra training.
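The nearest-neighbour post-processing step mentioned above can be sketched as follows, with a synthetic codebook standing in for descriptors of rendered template views (this is not the authors' implementation): match the extracted feature vector against stored (descriptor, unit-quaternion pose) pairs and return the pose of the closest entry.

```python
# Hedged sketch: NN lookup of a pose codebook with a quaternion angular error check.
import numpy as np

def quat_angle(q1, q2):
    """Rotation angle (radians) between two unit quaternions (sign-invariant)."""
    return 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), 0.0, 1.0))

rng = np.random.default_rng(0)
codebook_feat = rng.normal(size=(500, 128))                        # descriptors of template views
codebook_quat = rng.normal(size=(500, 4))
codebook_quat /= np.linalg.norm(codebook_quat, axis=1, keepdims=True)

query = codebook_feat[42] + 0.01 * rng.normal(size=128)            # descriptor extracted from a test image
nn = int(np.argmin(np.linalg.norm(codebook_feat - query, axis=1)))
estimated_pose = codebook_quat[nn]
print(nn, quat_angle(estimated_pose, codebook_quat[42]))           # expect index 42 and an angle of ~0
```

Unit quaternions keep the angular error between the retrieved and reference rotations easy to measure, as in the last line, which is one reason they are a convenient rotation representation for such objectives.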
Citations: 0
A Visual Control Scheme for AUV Underwater Pipeline Tracking
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551173
W. Akram, A. Casavola
Inspection of submarine cables and pipelines is nowadays increasingly carried out by Autonomous Underwater Vehicles (AUVs), both because their operating costs are much lower than those of traditional ship/ROV-based (Remotely Operated Vehicle) industrial practice and because technological and methodological progress in the field has improved their effectiveness. In this paper, we discuss the design of a visual control scheme aimed at solving a pipeline tracking control problem. The presented scheme autonomously generates a reference path for an underwater pipeline deployed on the seabed from images taken by a camera mounted on the AUV, allowing the vehicle to move parallel to the longitudinal axis of the pipeline and inspect its status. The robustness of the scheme is also shown by adding external disturbances to the closed-loop control system. We present a comparative simulation study under the Robot Operating System (ROS) to identify suitable solutions to the underwater pipeline tracking problem.
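A minimal sketch of the kind of image-based tracking error such a scheme needs is given below, assuming a Hough-transform line detector and invented proportional gains; it is illustrative only and not the controller proposed in the paper.

```python
# Hedged sketch: estimate pipeline heading/offset in the image and form simple commands.
import cv2
import numpy as np

# Synthetic camera frame: a bright diagonal "pipeline" on a dark seabed.
frame = np.zeros((240, 320), dtype=np.uint8)
cv2.line(frame, (60, 239), (220, 0), 255, thickness=9)

edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
if lines is None:
    raise RuntimeError("no pipeline-like line detected in the frame")

x1, y1, x2, y2 = lines[0][0]
if y2 > y1:                                        # order endpoints bottom-to-top
    x1, y1, x2, y2 = x2, y2, x1, y1
heading_err = np.arctan2(x2 - x1, y1 - y2)         # 0 when the pipe runs straight up the image
lateral_err = (x1 + x2) / 2.0 - frame.shape[1] / 2.0

K_YAW, K_SWAY = 0.8, 0.005                         # assumed proportional gains
yaw_rate_cmd = -K_YAW * float(heading_err)
sway_cmd = -K_SWAY * float(lateral_err)
print(heading_err, lateral_err, yaw_rate_cmd, sway_cmd)
```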
Citations: 2
Perspectives on the Emerging Field of Autonomous Systems and its Theoretical Foundations
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551191
Yingxu Wang, K. Plataniotis, Arash Mohammadi, L. Marcenaro, A. Asif, Ming Hou, Henry Leung, Marina L. Gavrilova
Autonomous systems (AS) are advanced intelligent systems and general AI technologies triggered by transdisciplinary developments in intelligence science, system science, brain science, cognitive science, robotics, computational intelligence, and intelligent mathematics. AS are driven by increasing demands in the modern industries of cognitive computers, deep machine learning, robotics, brain-inspired systems, self-driving cars, the internet of things, and intelligent appliances. This paper presents a perspective on the framework of autonomous systems and their theoretical foundations. A wide range of application paradigms of autonomous systems is explored.
Citations: 4
First Steps Toward The Development Of Virtual Platform For Validation Of Autonomous Wheel Loader At Pulp-And-Paper Mill: Modelling, Control And Real-Time Simulation
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551112
Michael A. Kerr, D. Nasrallah, Tsz-Ho Kwok
The forestry industry worldwide sees the need to modernize its machines toward autonomy. In this paper, we focus on a wheel loader that should operate autonomously in the yard of a pulp-and-paper mill, scooping wood chips from a pile and dropping them into a hopper, which is linked to a conveyor that carries them inside the mill. The modelling of the wheel loader is elaborated first, taking into account that it is composed of two systems: (i) the vehicle and (ii) the arm carrying the bucket. Note that the former belongs to the category of articulated vehicles, which steer using a different mechanism from the conventional Ackermann steering of car-like vehicles, while the latter is a 2-DOF serial manipulator. Navigation is then considered. Finally, simulation results of the kinematic model are first shown in Matlab/Simulink; dynamics and 3D animation are then added using ROS2/Gazebo. This work is a first step toward the development of a digital twin of the wheel loader, which will later serve as the virtual platform for validating the autonomous wheel loader.
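As background for the vehicle part of such a model, the sketch below integrates the standard slip-free kinematic model of a centre-articulated vehicle with Euler steps; the geometry values and inputs are invented, and the paper's own Matlab/Simulink model may differ.

```python
# Hedged sketch: textbook centre-articulated vehicle kinematics, not the paper's model.
import numpy as np

L1, L2 = 1.6, 1.8        # front-axle-to-joint and joint-to-rear-axle distances (m), assumed

def step(state, v, gamma_rate, dt=0.02):
    """One Euler step of the front-body pose given speed v and articulation rate."""
    x, y, theta, gamma = state
    # Slip-free articulated steering: theta_dot = (v*sin(g) + L2*g_dot) / (L1*cos(g) + L2)
    theta_dot = (v * np.sin(gamma) + L2 * gamma_rate) / (L1 * np.cos(gamma) + L2)
    return np.array([x + dt * v * np.cos(theta),
                     y + dt * v * np.sin(theta),
                     theta + dt * theta_dot,
                     gamma + dt * gamma_rate])

state = np.zeros(4)                                   # [x, y, theta, gamma]
for k in range(500):                                  # 10 s: articulate for 4 s, then hold
    state = step(state, v=1.5, gamma_rate=0.05 if k < 200 else 0.0)
print(state)
```

Unlike Ackermann steering, the heading rate depends on both the articulation angle and its rate of change, which is what distinguishes the loader's steering kinematics from a car-like vehicle's.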
Citations: 0