
2021 IEEE International Conference on Autonomous Systems (ICAS): Latest Publications

[Copyright notice]
Pub Date : 2021-08-11 DOI: 10.1109/icas49788.2021.9551137
{"title":"[Copyright notice]","authors":"","doi":"10.1109/icas49788.2021.9551137","DOIUrl":"https://doi.org/10.1109/icas49788.2021.9551137","url":null,"abstract":"","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124432292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Information Fusion and Decision Support for Autonomous Systems
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551161
Henry Leung
In this talk we present our work on decision support analytics for autonomous systems. Decision support analytics processes the multiple streams of sensory information collected by an autonomous system, such as lidar, camera, RGB-D and acoustic data, to perform signal detection, target tracking and object recognition. As multiple sensors are involved, our system uses sensor registration, data association and fusion to combine the sensory information. The next layer of the proposed decision support system orients the processed sensory information at the feature and classification levels to perform situation assessment and threat evaluation. Based on the assessment, the decision support system recommends a decision. If the uncertainty is high, actions including resource allocation and planning are used to extract or reassess the sensory information, yielding a recommended decision with lower uncertainty. This talk also presents applications of the proposed decision support analytics in four industrial projects: 1) goal-driven net-enabled distributed sensing for maritime surveillance, 2) autonomous navigation and perception of humanoid service robots, 3) distance learning for oil and gas drilling and 4) cognitive vehicles.
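The uncertainty-gated loop the abstract describes — recommend a decision when confidence is sufficient, otherwise re-task the sensors — can be illustrated with a minimal sketch. Everything below (the weighted-average fusion rule, the entropy threshold, the function names) is an illustrative assumption, not the authors' implementation:

```python
# Minimal sketch of an uncertainty-gated decision loop: fuse per-sensor
# class scores, recommend a decision when confidence is high enough,
# otherwise defer so sensing resources can be reallocated.
import numpy as np

def fuse_scores(sensor_scores, weights):
    """Combine per-sensor class-probability vectors by weighted averaging."""
    fused = np.average(np.stack(sensor_scores), axis=0, weights=weights)
    return fused / fused.sum()

def recommend(fused, max_entropy=0.5):
    """Recommend a class when normalized entropy is low; else defer."""
    p = np.clip(fused, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum() / np.log(len(p))  # normalized to [0, 1]
    if entropy <= max_entropy:
        return int(np.argmax(p)), entropy  # confident recommendation
    return None, entropy                   # defer: re-task sensors first

# Example: lidar and camera disagree, so the loop defers rather than
# committing to a high-uncertainty decision.
lidar_p  = np.array([0.40, 0.35, 0.25])
camera_p = np.array([0.30, 0.45, 0.25])
decision, h = recommend(fuse_scores([lidar_p, camera_p], weights=[0.5, 0.5]))
print(decision, round(h, 3))
```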
Citations: 1
Collaborative Communications Between A Human And A Resilient Safety Support System
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551108
S. Samani, Richard Jessop, Angela R. Harrivel
Successful introductory integration of Urban Air Mobility (UAM) into the National Airspace System (NAS) will be contingent on resilient safety systems that support reduced-crew flight operations. In this paper, we present a system that performs three functions: 1) monitors an operator’s physiological state; 2) assesses when the operator is experiencing anomalous states; and 3) mitigates risk through dynamic, context-based function allocation of operational tasks, performed either unilaterally or collaboratively. The monitoring process receives high-data-rate values from eye-tracking and electrocardiogram sensors. The assessment process takes these values and performs a classification developed using machine learning algorithms. The mitigation process invokes a collaboration protocol called DFACCto which, based on context, performs vehicle operations that the operator would otherwise routinely execute. The system has been demonstrated in a UAM flight simulator for an operator-incapacitation scenario. The methods and initial results, as well as relevant UAM and Advanced Air Mobility (AAM) scenarios, are described.
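The monitor/assess/mitigate pipeline can be sketched as a simple loop. The threshold "classifier", feature names and task table below are placeholders for the paper's learned model and the DFACCto protocol, not code from the paper:

```python
# Illustrative monitor -> assess -> mitigate loop: a (toy) classifier flags
# an anomalous operator state from physiological features, and mitigation
# reallocates flight tasks from the human to the automation.
from dataclasses import dataclass

@dataclass
class OperatorSample:
    gaze_dispersion: float   # from eye tracking (arbitrary units)
    heart_rate_bpm: float    # from the electrocardiogram sensor

def assess(sample: OperatorSample) -> bool:
    """Toy threshold rule standing in for the paper's learned classifier."""
    return sample.gaze_dispersion < 0.05 or sample.heart_rate_bpm < 40

def mitigate(tasks: dict, incapacitated: bool) -> dict:
    """Reassign operator-owned tasks to automation when state is anomalous."""
    if not incapacitated:
        return tasks
    return {task: "automation" for task in tasks}

tasks = {"navigate": "operator", "communicate": "operator"}
sample = OperatorSample(gaze_dispersion=0.01, heart_rate_bpm=35.0)
print(mitigate(tasks, assess(sample)))  # all tasks move to automation
```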
Citations: 0
Leader-Follower Multi-Agent Systems: A Model Predictive Control Scheme Against Covert Attacks
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551194
Francesco Saverio Tedesco, D. Famularo, G. Franzé
In this paper, a resilient distributed control scheme against covert attacks on constrained multi-agent networked systems is developed. The idea is to deploy predictive arguments with a twofold aim: detecting malicious agent behaviors and implementing control actions that mitigate undesirable knock-on effects as much as possible.
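One common ingredient of such prediction-based detection, shown here purely as a hedged illustration (the dynamics, bound and alarm rule below are assumptions, not the paper's scheme), is a residual test that flags disagreement between a nominal model's one-step prediction and the observed state, as an injected covert attack would induce:

```python
# Residual-based attack alarm for a toy discrete-time double integrator:
# predict the next state with the nominal model and raise an alarm when the
# observation drifts outside a residual bound.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])  # nominal state matrix
B = np.array([[0.0], [0.1]])            # nominal input matrix

def residual_alarm(x_prev, u_prev, x_obs, bound=0.05):
    """Flag when the one-step prediction disagrees with the observation."""
    x_pred = A @ x_prev + B @ u_prev
    return np.linalg.norm(x_obs - x_pred) > bound

x_prev = np.array([0.0, 1.0])
u_prev = np.array([0.2])
x_attacked = A @ x_prev + B @ u_prev + np.array([0.2, 0.0])  # injected offset
print(residual_alarm(x_prev, u_prev, x_attacked))  # True -> alarm
```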
Citations: 1
An Open Source Motion Planning Framework for Autonomous Minimally Invasive Surgical Robots
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551134
Aleks Attanasio, Nils Marahrens, Bruno Scaglioni, P. Valdastri
Planning and execution of autonomous tasks in minimally invasive surgical robotics are significantly more complex than for generic manipulators. Narrow abdominal cavities and limited entry points restrain the use of external vision systems, and specialized kinematics prevent the straightforward use of standard planning algorithms. In this work, we present a novel implementation of a motion planning framework for minimally invasive surgical robots, composed of two subsystems: an arm-camera registration method requiring only the endoscopic camera and a graspable device compatible with a 12 mm trocar port, and a specialized trajectory planning algorithm designed to generate smooth, non-straight trajectories. The approach is tested on a DaVinci Research Kit, obtaining an accuracy of 2.71 ± 0.89 cm in the arm-camera registration and of 1.30 ± 0.39 cm during trajectory execution. The code is organised into the STORM Motion Library (STOR-MoLib), an open-source library publicly available to the research community.
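As a rough illustration of generating a smooth, non-straight trajectory, the sketch below combines a cubic Bézier path with a quintic (minimum-jerk) time profile. This construction is an assumed stand-in, not STOR-MoLib's actual algorithm or API:

```python
# Smooth, non-straight tool-tip path: minimum-jerk time scaling along a
# cubic Bezier curve whose control points bow the path away from the
# straight line between start and goal.
import numpy as np

def minimum_jerk_s(t, T):
    """Quintic time scaling s(t) in [0, 1] with zero end velocity/acceleration."""
    tau = np.clip(t / T, 0.0, 1.0)
    return 10 * tau**3 - 15 * tau**4 + 6 * tau**5

def bezier_path(p0, p3, bow, n=50, T=1.0):
    """Cubic Bezier from p0 to p3 (3-vectors), bowed sideways by 'bow'."""
    p1 = p0 + (p3 - p0) / 3.0 + bow
    p2 = p0 + 2.0 * (p3 - p0) / 3.0 + bow
    s = minimum_jerk_s(np.linspace(0.0, T, n), T)[:, None]
    return ((1 - s)**3 * p0 + 3 * (1 - s)**2 * s * p1
            + 3 * (1 - s) * s**2 * p2 + s**3 * p3)

path = bezier_path(np.zeros(3), np.array([0.10, 0.00, 0.05]),
                   bow=np.array([0.0, 0.02, 0.0]))
print(path.shape)  # (50, 3) waypoints for a controller to track
```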
Citations: 1
[ICAS 2021 Front cover]
Pub Date : 2021-08-11 DOI: 10.1109/icas49788.2021.9551138
{"title":"[ICAS 2021 Front cover]","authors":"","doi":"10.1109/icas49788.2021.9551138","DOIUrl":"https://doi.org/10.1109/icas49788.2021.9551138","url":null,"abstract":"","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124857919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Blind Detection Of Radar Pulse Trains Via Self-Convolution
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551181
Alex Byrley, A. Fam
This paper studies the blind detection of radar pulse trains using self-convolution. The self-convolution of a horizontally polarized pulse train with a constant pulse repetition frequency (PRF) is the same as its autocorrelation, only shifted in time, provided that the pulses are symmetric. This makes the waveform amenable to blind detection even in the presence of a constant Doppler shift. Once a train is detected, we estimate the carrier, demodulate, and estimate the PRF of the baseband train using a logarithmic frequency-domain matched filter. We derive a Neyman-Pearson self-convolution detection threshold for additive white Gaussian noise (AWGN) and conduct numerical experiments comparing the signal-to-noise ratio (SNR) performance against standard matched filtering. We also illustrate the logarithmic frequency matched filter’s PRF estimation accuracy.
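The core observation — the self-convolution of a symmetric, constant-PRF train peaks like a shifted autocorrelation, so a peak test on it can detect the train blindly — can be demonstrated numerically. The sketch below uses an ad-hoc noise-calibrated threshold as a stand-in for the paper's Neyman-Pearson threshold:

```python
# Toy self-convolution detector for a baseband rectangular pulse train in
# AWGN. Unlike cross-correlation with a known template, conv(x, x) needs no
# time reversal and no prior knowledge of the waveform.
import numpy as np

rng = np.random.default_rng(0)
fs, prf, width = 1000, 10.0, 0.01            # sample rate (Hz), PRF (Hz), pulse width (s)
t = np.arange(0, 1, 1 / fs)
train = np.zeros_like(t)
for k in range(int(prf)):                    # ten symmetric rectangular pulses
    start = int(k * fs / prf)
    train[start:start + int(width * fs)] = 1.0

x = train + 0.3 * rng.standard_normal(t.size)   # received signal in AWGN
selfconv = np.convolve(x, x, mode="full")       # self-convolution statistic

# Ad-hoc threshold from a noise-only reference run (illustrative only).
noise_only = 0.3 * rng.standard_normal(t.size)
ref = np.convolve(noise_only, noise_only, mode="full")
threshold = 5.0 * np.abs(ref).max()
print("detected:", np.abs(selfconv).max() > threshold)
```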
Citations: 0
General Frameworks for Anomaly Detection Explainability: Comparative Study
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551129
Ambareesh Ravi, Xiaozhuo Yu, Iara Santelices, F. Karray, B. Fidan
Since their inception, AutoEncoders have been very important in representational learning, achieving ground-breaking results in automated unsupervised anomaly detection for various critical applications. However, anomaly detection through AutoEncoders suffers from a lack of transparency when decisions are made based on the outputs of the AutoEncoder network, especially for image-based models. Though the residual reconstruction-error map from the AutoEncoder helps explain anomalies to a certain extent, it is not a good indicator of the attributes implicitly learnt by the model. A human-interpretable explanation of why an instance is anomalous not only enables experts to fine-tune the model but also establishes and increases trust among non-expert users of the model. Convolutional AutoEncoders suffer the most, as only limited studies focus on their transparency and explainability. In this paper, aiming to bridge this gap, we explore the feasibility and compare the performance of several state-of-the-art Explainable Artificial Intelligence (XAI) frameworks on Convolutional AutoEncoders. The paper also aims to provide a basis for future development of reliable and trustworthy AutoEncoders for visual anomaly detection.
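The residual reconstruction-error map the abstract refers to is straightforward to compute once a trained model is available. In the sketch below, `fake_autoencoder` is a placeholder callable and the threshold is an arbitrary assumption, not values from the paper:

```python
# Per-pixel reconstruction-error map for autoencoder-based anomaly
# detection: squared error between input and reconstruction, pooled into an
# image-level score and thresholded into an anomaly decision.
import numpy as np

def anomaly_score(autoencoder, image: np.ndarray, threshold: float):
    """Return (per-pixel error map, scalar score, is_anomalous flag)."""
    recon = autoencoder(image)           # model's reconstruction of the input
    error_map = (image - recon) ** 2     # residual map, same shape as input
    score = float(error_map.mean())      # image-level anomaly score
    return error_map, score, score > threshold

# Stand-in "model": identity with damping, for demonstration only.
fake_autoencoder = lambda img: 0.9 * img
img = np.random.default_rng(2).random((64, 64))
_, score, flag = anomaly_score(fake_autoencoder, img, threshold=0.01)
print(round(score, 4), flag)
```

The error map itself is what XAI frameworks are then asked to improve upon, since it explains *where* the reconstruction failed but not *which learnt attributes* drove the failure.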
Citations: 4
Drone Vision and Deep Learning for Infrastructure Inspection
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551136
I. Pitas
This lecture overviews the use of drones for infrastructure inspection and maintenance. Various types of inspection, e.g., using visual cameras, LIDAR or thermal cameras, are reviewed. Drone vision plays a pivotal role in drone perception/control for infrastructure inspection and maintenance, because: a) it enhances flight safety through drone localization/mapping, obstacle detection and emergency landing detection; b) it performs quality visual data acquisition; and c) it allows powerful drone/human interactions, e.g., through automatic event detection and gesture control. The drone should have: a) increased multiple-drone decisional autonomy and b) improved multiple-drone robustness and safety mechanisms (e.g., communication robustness/safety, embedded flight regulation compliance, enhanced crowd avoidance and emergency landing mechanisms). Therefore, it must be contextually aware and adaptive. Drone vision and machine learning play a very important role towards this end, covering the following topics: a) semantic world mapping; b) drone and target localization; c) drone visual analysis for target/obstacle/crowd/point-of-interest detection; and d) 2D/3D target tracking. Finally, embedded on-drone vision (e.g., tracking) and machine learning algorithms are extremely important, as they facilitate drone autonomy, e.g., in communication-denied environments. The primary application area is electric line inspection; line detection and tracking and drone perching are examined, and human action recognition and co-working assistance are overviewed. The lecture will offer an overview of all the above plus other related topics, stressing algorithmic aspects such as: a) drone localization and world mapping; b) target detection; c) target tracking and 3D localization; and d) gesture control and co-working with humans. Some issues in embedded CNN and fast convolution computing will be overviewed as well.
Citations: 2
A Graph Convolutional Neural Network for Reliable Gait-Based Human Recognition
Pub Date : 2021-08-11 DOI: 10.1109/ICAS49788.2021.9551170
Md. Shopon, S. Yanushkevich, Yingxu Wang, M. Gavrilova
In the domain of human-machine autonomous systems, gait recognition provides unique advantages over other biometric modalities. It is an unobtrusive, widely acceptable means of identity, gesture and activity recognition, with applications in surveillance, border control, risk prediction, military training and cybersecurity. This paper addresses trustworthy and reliable person identification from videos under challenging conditions, when a subject’s walk is occluded by environmental elements, bulky clothing or the viewing angle. It proposes a novel deep learning architecture based on a Graph Convolutional Neural Network (GCNN) for accurate and reliable gait recognition from videos. The optimized feature map of the proposed GCNN architecture ensures that recognition remains accurate and invariant to viewing angle, type of clothing and other conditions.
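A hedged sketch of the graph-convolution building block such an architecture rests on (the standard normalized-adjacency layer rule of Kipf and Welling, not the paper's specific network), applied to a toy skeleton graph:

```python
# One graph-convolution layer over skeleton joints: node features are mixed
# along body-graph edges via a symmetrically normalized adjacency matrix.
import numpy as np

def gcn_layer(X, A, W):
    """ReLU(D^-1/2 (A + I) D^-1/2 X W) with D the degree matrix of A + I."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)             # ReLU activation

# Toy 3-joint "skeleton": a hip connected to two knees; 2-D features per joint.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.array([[0.1, 0.2], [0.3, 0.1], [0.2, 0.4]])
W = np.random.default_rng(1).standard_normal((2, 4))  # learnable weights
print(gcn_layer(X, A, W).shape)  # (3, 4) updated joint features
```

Stacking such layers lets joint features aggregate information from progressively larger neighborhoods of the body graph, which is what makes graph convolutions a natural fit for skeleton-based gait sequences.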
Citations: 0