[Copyright notice]
Pub Date : 2021-08-11 | DOI: 10.1109/icas49788.2021.9551137
Information Fusion and Decision Support for Autonomous Systems
Pub Date : 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551161
Henry Leung
In this talk we present our work on decision support analytics for autonomous systems. Decision support analytics processes the multiple streams of sensory information collected by an autonomous system, such as lidar, camera, RGB-D and acoustic data, to perform signal detection, target tracking and object recognition. As multiple sensors are involved, our system uses sensor registration, data association and fusion to combine the sensory information. The next layer of the proposed decision support system orients the processed sensory information at the feature and classification levels to perform situation assessment and threat evaluation. Based on this assessment, the decision support system recommends a decision. If the uncertainty is high, actions including resource allocation and planning are used to extract or reassess the sensory information, so that a recommended decision with lower uncertainty can be reached. The talk also presents applications of the proposed decision support analytics in four industrial projects: 1) goal-driven net-enabled distributed sensing for maritime surveillance, 2) autonomous navigation and perception of humanoid service robots, 3) distance learning for oil and gas drilling, and 4) cognitive vehicles.
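As a rough illustration of the fuse-assess-decide loop described above, the sketch below averages per-class confidences from several registered sensor reports and either recommends a decision or requests re-sensing when uncertainty is high; the names, thresholds and fusion rule are illustrative assumptions, not the authors' system.

```python
# Minimal sketch of a fuse -> assess -> decide loop (illustrative, not the talk's system).
from dataclasses import dataclass

@dataclass
class SensorReport:
    sensor: str          # e.g. "lidar", "camera", "acoustic"
    target_class: str    # classifier output for the same associated track
    confidence: float    # in [0, 1]

def fuse_reports(reports):
    """Naive fusion: average per-class confidence over registered/associated reports."""
    scores = {}
    for r in reports:
        scores.setdefault(r.target_class, []).append(r.confidence)
    return {cls: sum(v) / len(v) for cls, v in scores.items()}

def recommend(reports, decide_threshold=0.7):
    """Recommend a decision, or ask for more sensing when uncertainty is high."""
    fused = fuse_reports(reports)
    best_class, best_score = max(fused.items(), key=lambda kv: kv[1])
    if best_score >= decide_threshold:
        return ("decide", best_class, best_score)
    # High uncertainty: trigger resource allocation / re-sensing instead of deciding.
    return ("reassess", best_class, best_score)

print(recommend([SensorReport("lidar", "vessel", 0.80),
                 SensorReport("camera", "vessel", 0.75),
                 SensorReport("acoustic", "clutter", 0.40)]))
```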
{"title":"Information Fusion and Decision Support for Autonomous Systems","authors":"Henry Leung","doi":"10.1109/ICAS49788.2021.9551161","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551161","url":null,"abstract":"In this talk we present our works on decision support analytic for autonomous systems. Decision support analytic process multiple sensory information collected by an autonomous system such as lidar, camera, RGBD, acoustic to perform signal detection, target tracking, object recognition. As multiple sensors are involved, our system uses sensor registration, data association and fusion to combine sensory information. The next layer of the proposed decision support system orients the processed sensory information at feature and classification levels to perform situation assessment and treat evaluation. Based on the assessment, the decision support system will recommend decision. If the uncertainty is high, actions including resource allocation, planning will be used to extract or reassess the sensory information to get a recommended decision with lower uncertainty. This talk will also presents the applications of the proposed decision support analytic in four industrial projects including 1) goal-driven net-enabled distributed sensing for maritime surveillance, 2) autonomous navigation and perception of humanoid service robots, 3) distance learning for oil and gas drilling and 4) cognitive vehicles.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126021871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collaborative Communications Between A Human And A Resilient Safety Support System
Pub Date : 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551108
S. Samani, Richard Jessop, Angela R. Harrivel
Successful introduction of urban air mobility (UAM) operations into the National Airspace System (NAS) will be contingent on resilient safety systems that support reduced-crew flight operations. In this paper, we present a system that performs three functions: 1) monitors an operator’s physiological state; 2) assesses when the operator is experiencing an anomalous state; and 3) mitigates risk through dynamic, context-based function allocation of operational tasks, performed either unilaterally or collaboratively. The monitoring process receives high-rate sensor values from eye-tracking and electrocardiogram sensors. The assessment process takes these values and performs a classification developed using machine learning algorithms. The mitigation process invokes a collaboration protocol called DFACCto which, based on context, performs vehicle operations that the operator would otherwise routinely execute. The system has been demonstrated in a UAM flight simulator for an operator incapacitation scenario. The methods and initial results, as well as relevant UAM and advanced air mobility (AAM) scenarios, are described.
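A minimal sketch of the monitor-assess-mitigate flow outlined in the abstract is given below; the physiological features, the rule-based assessment and the task reallocation are illustrative assumptions, since the paper's classifier is learned from data and the DFACCto protocol is not detailed here.

```python
# Illustrative monitor -> assess -> mitigate flow (assumed features and thresholds,
# not the paper's trained classifier or the DFACCto protocol).
import numpy as np

def assess_state(heart_rate_bpm, gaze_valid_fraction):
    """Flag an anomalous operator state from two simple physiological features."""
    # Assumption: sustained loss of gaze tracking or implausible heart rate is anomalous.
    if gaze_valid_fraction < 0.2 or not (40 <= heart_rate_bpm <= 180):
        return "anomalous"
    return "nominal"

def mitigate(state, pending_tasks):
    """Reallocate operator tasks to the automation when the state is anomalous."""
    if state == "anomalous":
        return {task: "automation" for task in pending_tasks}
    return {task: "operator" for task in pending_tasks}

window_hr = np.array([72, 74, 0, 0, 0])    # simulated ECG-derived heart rate samples
window_gaze = np.array([1, 1, 0, 0, 0])    # 1 = valid eye-tracking sample
state = assess_state(window_hr[-1], window_gaze.mean())
print(state, mitigate(state, ["radio call", "descent checklist"]))
```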
{"title":"Collaborative Communications Between A Human And A Resilient Safety Support System","authors":"S. Samani, Richard Jessop, Angela R. Harrivel","doi":"10.1109/ICAS49788.2021.9551108","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551108","url":null,"abstract":"Successful introductory UAM integration into the NAS will be contingent on resilient safety systems that support reduced-crew flight operations. In this paper, we present a system that performs three functions: 1) monitors an operator’s physiological state; 2) assesses when the operator is experiencing anomalous states; and 3) mitigates risks by a combination of dynamic, context-based unilateral or collaborative dynamic function allocation of operational tasks. The monitoring process receives high data-rate sensor values from eye-tracking and electrocardiogram sensors. The assessment process takes these values and performs a classification that was developed using machine learning algorithms. The mitigation process invokes a collaboration protocol called DFACCto which, based on context, performs vehicle operations that the operator would otherwise routinely execute. This system has been demonstrated in a UAM flight simulator for an operator incapacitation scenario. The methods and initial results as well as relevant UAM and AAM scenarios will be described.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"283 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131303322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leader-Follower Multi-Agent Systems: A Model Predictive Control Scheme Against Covert Attacks
Pub Date : 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551194
Francesco Saverio Tedesco, D. Famularo, G. Franzé
In this paper, a resilient distributed control scheme against covert attacks on constrained multi-agent networked systems is developed. The idea is to deploy model predictive arguments with a twofold aim: detecting malicious agent behaviors and implementing control actions that mitigate undesirable knock-on effects as much as possible.
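A minimal sketch of the detection side of such a scheme, under illustrative assumptions: a follower compares the state predicted by its nominal model with the state reported over the network and flags a possible covert attack when the residual exceeds a bound. The model, inputs and bound below are placeholders, not the scheme developed in the paper.

```python
# Illustrative model-based residual check for a networked follower agent.
import numpy as np

# Assumed nominal model: a simple double integrator (placeholder dynamics).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])

def detect_covert(x_prev, u, x_reported, bound=0.05):
    """Return the model-predicted state and whether the report looks attacked."""
    x_pred = A @ x_prev + B @ u
    residual = np.linalg.norm(x_reported - x_pred)
    return x_pred, residual > bound

x_prev = np.array([0.0, 1.0])
u = np.array([0.2])
x_reported = np.array([0.1, 1.5])   # inconsistent with the nominal model -> suspicious
print(detect_covert(x_prev, u, x_reported))
```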
{"title":"Leader-Follower Multi-Agent Systems: A Model Predictive Control Scheme Against Covert Attacks","authors":"Francesco Saverio Tedesco, D. Famularo, G. Franzé","doi":"10.1109/ICAS49788.2021.9551194","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551194","url":null,"abstract":"In this paper, a resilient distributed control scheme against covert attacks for constrained multi-agent networked systems is developed. The idea consists in an adequate deployment of predictive arguments with a twofold aim: detection of malicious agent behaviors and control actions implementation to mitigate as much as possible undesirable knock-on effects.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132910767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Open Source Motion Planning Framework for Autonomous Minimally Invasive Surgical Robots
Pub Date : 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551134
Aleks Attanasio, Nils Marahrens, Bruno Scaglioni, P. Valdastri
Planning and execution of autonomous tasks in minimally invasive surgical robotics are significantly more complex than for generic manipulators. Narrow abdominal cavities and limited entry points restrict the use of external vision systems, and specialized kinematics prevent the straightforward use of standard planning algorithms. In this work, we present a novel implementation of a motion planning framework for minimally invasive surgical robots, composed of two subsystems: an arm-camera registration method requiring only the endoscopic camera and a graspable device compatible with a 12 mm trocar port, and a specialized trajectory planning algorithm designed to generate smooth, non-straight trajectories. The approach is tested on a da Vinci Research Kit, obtaining an accuracy of $2.71 \pm 0.89$ cm in the arm-camera registration and of $1.30 \pm 0.39$ cm during trajectory execution. The code is organised into the STORM Motion Library (STOR-MoLib), an open source library publicly available to the research community.
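As an illustration of generating a smooth, non-straight path between two Cartesian positions, the sketch below uses a cubic Bézier curve that bows away from the straight line; the actual planner in STOR-MoLib may work differently, and the control-point placement is an assumption.

```python
# Illustrative smooth, non-straight path via a cubic Bezier curve (not the library's planner).
import numpy as np

def bezier_path(p_start, p_goal, lift=0.03, n_points=50):
    """Curve that bows away from the straight line by `lift` metres at mid-path."""
    p_start, p_goal = np.asarray(p_start, float), np.asarray(p_goal, float)
    mid = 0.5 * (p_start + p_goal) + np.array([0.0, 0.0, lift])
    c1, c2 = 0.5 * (p_start + mid), 0.5 * (mid + p_goal)   # inner control points
    t = np.linspace(0.0, 1.0, n_points)[:, None]
    return ((1 - t) ** 3 * p_start + 3 * (1 - t) ** 2 * t * c1
            + 3 * (1 - t) * t ** 2 * c2 + t ** 3 * p_goal)

path = bezier_path([0.00, 0.00, 0.00], [0.10, 0.05, 0.02])
print(path.shape, path[0], path[-1])   # endpoints match start and goal exactly
```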
{"title":"An Open Source Motion Planning Framework for Autonomous Minimally Invasive Surgical Robots","authors":"Aleks Attanasio, Nils Marahrens, Bruno Scaglioni, P. Valdastri","doi":"10.1109/ICAS49788.2021.9551134","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551134","url":null,"abstract":"Planning and execution of autonomous tasks in minimally invasive surgical robotic are significantly more complex with respect to generic manipulators. Narrow abdominal cavities and limited entry points restrain the use of external vision systems and specialized kinematics prevent the straightforward use of standard planning algorithms. In this work, we present a novel implementation of a motion planning framework for minimally invasive surgical robots, composed of two subsystems: An arm-camera registration method only requiring the endoscopic camera and a graspable device, compatible with a 12mm trocar port, and a specialized trajectory planning algorithm, designed to generate smooth, non straight trajectories. The approach is tested on a DaVinci Research Kit obtaining an accuracy of $2.71pm 0.89$ cm in the arm-camera registration and of $1.30pm 0.39$ cm during trajectory execution. The code is organised into STORM Motion Library (STOR-MoLib), an open source library, publicly available for the research community.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116347474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[ICAS 2021 Front cover]
Pub Date : 2021-08-11 | DOI: 10.1109/icas49788.2021.9551138
Blind Detection Of Radar Pulse Trains Via Self-Convolution
Pub Date : 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551181
Alex Byrley, A. Fam
This paper studies the blind detection of radar pulse trains using self-convolution. The self-convolution of a horizontally polarized pulse train with a constant pulse repetition frequency (PRF) is the same as its autocorrelation, only shifted in time, provided that the pulses are symmetric. This makes the waveform amenable to blind detection even in the presence of a constant Doppler shift. Once detected, we estimate the carrier, demodulate, and estimate the PRF of the baseband train using a logarithmic frequency domain matched filter. We derive a Neyman-Pearson self-convolution detection threshold for additive white Gaussian noise (AWGN) and conduct numerical experiments to compare the Signal-to-Noise Ratio (SNR) performance against standard matched filtering. We also illustrate the logarithmic frequency matched filter’s PRF estimation accuracy.
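A hedged numerical sketch of the self-convolution test is shown below: convolve the received signal with itself and compare the peak magnitude with a threshold. The threshold here is set empirically from noise-only trials rather than from the closed-form Neyman-Pearson threshold derived in the paper.

```python
# Illustrative self-convolution detector for a constant-PRF pulse train in AWGN.
import numpy as np

rng = np.random.default_rng(0)

def self_conv_peak(x):
    """Peak magnitude of the signal convolved with itself."""
    return np.max(np.abs(np.convolve(x, x)))

# Symmetric pulse train with constant PRF, plus additive white Gaussian noise.
n, prf_period, pulse = 1024, 128, np.hanning(16)
clean = np.zeros(n)
for start in range(0, n - len(pulse), prf_period):
    clean[start:start + len(pulse)] += pulse

# Empirical threshold from noise-only trials (stand-in for the analytic NP threshold).
noise_trials = [self_conv_peak(rng.standard_normal(n)) for _ in range(200)]
threshold = np.quantile(noise_trials, 0.99)          # ~1% false-alarm rate
received = clean + 0.5 * rng.standard_normal(n)
print(self_conv_peak(received) > threshold)          # detection decision
```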
{"title":"Blind Detection Of Radar Pulse Trains Via Self-Convolution","authors":"Alex Byrley, A. Fam","doi":"10.1109/ICAS49788.2021.9551181","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551181","url":null,"abstract":"This paper studies the blind detection of radar pulse trains using self-convolution. The self-convolution of a horizontally polarized pulse train with a constant pulse repetition frequency (PRF) is the same as its autocorrelation, only shifted in time, provided that the pulses are symmetric. This makes the waveform amenable to blind detection even in the presence of a constant Doppler shift. Once detected, we estimate the carrier, demodulate, and estimate the PRF of the baseband train using a logarithmic frequency domain matched filter. We derive a Neyman-Pearson self-convolution detection threshold for additive white Gaussian noise (AWGN) and conduct numerical experiments to compare the Signal-to-Noise Ratio (SNR) performance against standard matched filtering. We also illustrate the logarithmic frequency matched filter’s PRF estimation accuracy.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124858814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
General Frameworks for Anomaly Detection Explainability: Comparative Study
Pub Date : 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551129
Ambareesh Ravi, Xiaozhuo Yu, Iara Santelices, F. Karray, B. Fidan
Since their inception, AutoEncoders have been central to representation learning. They have achieved ground-breaking results in automated unsupervised anomaly detection for various critical applications. However, anomaly detection through AutoEncoders suffers from a lack of transparency when it comes to decision making based on the outputs of the AutoEncoder network, especially for image-based models. Though the residual reconstruction error map from the AutoEncoder helps explain anomalies to a certain extent, it is not a good indicator of the attributes the model has learnt implicitly. A human-interpretable explanation of why an instance is anomalous not only enables experts to fine-tune the model but also establishes and increases trust among non-expert users of the model. Convolutional AutoEncoders suffer the most in this respect, as only limited studies focus on their transparency and explainability. In this paper, aiming to bridge this gap, we explore the feasibility and compare the performance of several state-of-the-art Explainable Artificial Intelligence (XAI) frameworks on Convolutional AutoEncoders. The paper also aims at providing a basis for future development of reliable and trustworthy AutoEncoders for visual anomaly detection.
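The residual reconstruction error map mentioned above can be illustrated as follows; the tiny untrained autoencoder is purely a placeholder, not one of the architectures evaluated in the paper.

```python
# Per-pixel residual reconstruction error from a toy convolutional autoencoder.
import torch
import torch.nn as nn

class TinyConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
                                 nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinyConvAE().eval()                      # untrained, purely illustrative
image = torch.rand(1, 1, 32, 32)                 # stand-in for an input frame
with torch.no_grad():
    recon = model(image)
error_map = (image - recon).abs()                # residual map used for explanation
print(error_map.shape, float(error_map.mean()))  # scalar anomaly score: mean residual
```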
{"title":"General Frameworks for Anomaly Detection Explainability: Comparative Study","authors":"Ambareesh Ravi, Xiaozhuo Yu, Iara Santelices, F. Karray, B. Fidan","doi":"10.1109/ICAS49788.2021.9551129","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551129","url":null,"abstract":"Since their inception, AutoEncoders have been very important in representational learning. They have achieved ground-breaking results in the realm of automated unsupervised anomaly detection for various critical applications. However, anomaly detection through AutoEncoders suffers from lack of transparency when it comes to decision making based on the outputs of the AutoEncoder network, especially for image-based models. Though the residual reconstruction error map from the AutoEncoder helps explaining anomalies to a certain extent, it is not a good indicator of the implicitly learnt attributes by the model. A human interpretable explanation of why an instance is anomalous not only enables the experts to fine-tune the model but also establishes and increases trust by non-expert users of the model. Convolutional AutoEncoders in particular suffer the most as there are only limited studies that focus on transparency and explainability. In this paper, aiming to bridge this gap, we explore the feasibility and compare the performances of several State-of-the-Art Explainable Artificial Intelligence (XAI) frameworks on Convolutional AutoEncoders. The paper also aims at providing the basis for future developments of reliable and trustworthy AutoEncoders for visual anomaly detection.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"175 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125802601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Drone Vision and Deep Learning for Infrastructure Inspection
Pub Date : 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551136
I. Pitas
This lecture overviews the use of drones for infrastructure inspection and maintenance. Various types of inspection, e.g., using visual cameras, LIDAR or thermal cameras, are reviewed. Drone vision plays a pivotal role in drone perception/control for infrastructure inspection and maintenance because it: a) enhances flight safety through drone localization/mapping, obstacle detection and emergency landing detection; b) performs quality visual data acquisition; and c) allows powerful drone/human interactions, e.g., through automatic event detection and gesture control. The drone should have: a) increased multi-drone decisional autonomy and b) improved multi-drone robustness and safety mechanisms (e.g., communication robustness/safety, embedded flight regulation compliance, enhanced crowd avoidance and emergency landing mechanisms). It must therefore be contextually aware and adaptive. Drone vision and machine learning play a very important role towards this end, covering the following topics: a) semantic world mapping, b) drone and target localization, c) drone visual analysis for target/obstacle/crowd/point-of-interest detection, and d) 2D/3D target tracking. Finally, embedded on-drone vision (e.g., tracking) and machine learning algorithms are extremely important, as they facilitate drone autonomy, e.g., in communication-denied environments. The primary application area is electric power line inspection; line detection and tracking and drone perching are examined, and human action recognition and co-working assistance are overviewed. The lecture offers an overview of all the above plus other related topics and stresses the related algorithmic aspects, such as: a) drone localization and world mapping, b) target detection, c) target tracking and 3D localization, and d) gesture control and co-working with humans. Some issues in embedded CNN and fast convolution computing are overviewed as well.
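As a generic illustration of one of the listed tasks (line detection for power line inspection), the sketch below uses a textbook Canny plus probabilistic Hough pipeline; this is not the lecture's deep-learning approach, and the input path is a placeholder.

```python
# Baseline line detection in an inspection frame (classic CV, not the lecture's method).
import cv2
import numpy as np

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path
if frame is None:                                        # fall back to a synthetic frame
    frame = np.zeros((240, 320), np.uint8)
    cv2.line(frame, (10, 60), (310, 90), 255, 2)         # one synthetic "power line"

edges = cv2.Canny(frame, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=80, maxLineGap=10)
print(0 if lines is None else len(lines), "line segment(s) detected")
```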
{"title":"Drone Vision and Deep Learning for Infrastructure Inspection","authors":"I. Pitas","doi":"10.1109/ICAS49788.2021.9551136","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551136","url":null,"abstract":"This lecture overviews the use of drones for infrastructure inspection and maintenance. Various types of inspection, e.g., using visual cameras, LIDAR or thermal cameras are reviewed. Drone vision plays a pivotal role in drone perception/control for infrastructure inspection and maintenance, because: a) it enhances flight safety by drone localization/mapping, obstacle detection and emergency landing detection; b) performs quality visual data acquisition, and c) allows powerful drone/human interactions, e.g., through automatic event detection and gesture control. The drone should have: a) increased multiple drone decisional autonomy and b) improved multiple drone robustness and safety mechanisms (e.g., communication robustness/safety, embedded flight regulation compliance, enhanced crowd avoidance and emergency landing mechanisms). Therefore, it must be contextually aware and adaptive. Drone vision and machine learning play a very important role towards this end, covering the following topics: a) semantic world mapping b) drone and target localization, c) drone visual analysis for target/obstacle/crowd/point of interest detection, d) 2D/3D target tracking. Finally, embedded on-drone vision (e.g., tracking) and machine learning algorithms are extremely important, as they facilitate drone autonomy, e.g., in communication-denied environments. Primary application area is electric line inspection. Line detection and tracking and drone perching are examined. Human action recognition and co-working assistance are overviewed.The lecture will offer: a) an overview of all the above plus other related topics and will stress the related algorithmic aspects, such as: b) drone localization and world mapping, c) target detection d) target tracking and 3D localization e) gesture control and co-working with humans. Some issues on embedded CNN and fast convolution computing will be overviewed as well.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126867667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Graph Convolutional Neural Network for Reliable Gait-Based Human Recognition
Pub Date : 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551170
Md. Shopon, S. Yanushkevich, Yingxu Wang, M. Gavrilova
In the domain of human-machine autonomous systems, gait recognition provides unique advantages over other biometric modalities. It is an unobtrusive, widely acceptable means of identity, gesture and activity recognition, with applications to surveillance, border control, risk prediction, military training and cybersecurity. This paper addresses trustworthy and reliable person identification from videos under challenging conditions, when a subject’s walk is occluded by environmental elements or bulky clothing, or is seen from an unfavourable viewing angle. It proposes a novel deep learning architecture based on a Graph Convolutional Neural Network (GCNN) for accurate and reliable gait recognition from videos. The optimized feature map of the proposed GCNN architecture ensures that recognition remains accurate and invariant to viewing angle, type of clothing and other conditions.
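A minimal sketch of the basic operation behind such a network, a single graph-convolution step over skeleton joints, is given below; the toy five-joint graph and random weights are illustrative assumptions, not the architecture proposed in the paper.

```python
# One graph-convolution step H' = relu(D^-1/2 (A + I) D^-1/2 H W) over skeleton joints.
import numpy as np

rng = np.random.default_rng(1)

def graph_conv(H, A, W):
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

# Toy skeleton: hip - spine - head, spine - left/right shoulder.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (1, 3), (1, 4)]:
    A[i, j] = A[j, i] = 1.0
H = rng.standard_normal((5, 3))                        # per-joint (x, y, z) features
W = rng.standard_normal((3, 8))                        # projection weights (random here)
print(graph_conv(H, A, W).shape)                       # -> (5, 8) joint embeddings
```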
{"title":"A Graph Convolutional Neural Network for Reliable Gait-Based Human Recognition","authors":"Md. Shopon, S. Yanushkevich, Yingxu Wang, M. Gavrilova","doi":"10.1109/ICAS49788.2021.9551170","DOIUrl":"https://doi.org/10.1109/ICAS49788.2021.9551170","url":null,"abstract":"In a domain of human-machine autonomous systems, gait recognition provides unique advantages over other biometric modalities. It is an unobtrusive, widely-acceptable way of identity, gesture and activity recognition, with applications to surveillance, border control, risk prediction, military training and cybersecurity. Trustworthy and reliable person identification from videos under challenging conditions, when a subject’s walk is occluded by environmental elements, bulky clothing or a viewing angle, is addressed in this paper. It proposes a novel deep learning architecture based on Graph Convolutional Neural Network (GCNN) for accurate and reliable gait recognition from videos. The optimized feature map of the proposed GCNN architecture ensures that recognition remains accurate and invariant to viewing angle, type of clothing or other conditions.","PeriodicalId":287105,"journal":{"name":"2021 IEEE International Conference on Autonomous Systems (ICAS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127851545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}