Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551176
Shahin Heidarian, Parnian Afshar, Nastaran Enshaei, F. Naderkhani, M. Rafiee, A. Oikonomou, F. B. Fard, A. Shafiee, K. Plataniotis, Arash Mohammadi
The automatic diagnosis of lung infections from chest computed tomography (CT) scans has recently gained remarkable significance, particularly during the COVID-19 pandemic, in which early diagnosis of the disease is of utmost importance. In addition, infection diagnosis is the main building block of most automated diagnostic/prognostic frameworks. Due to the harmful effects of CT radiation on the body, there has recently been a surge in acquiring low and ultra-low-dose CT scans instead of standard scans. Such scans, however, suffer from a high noise level, which makes them difficult and time-consuming to interpret, even for expert radiologists. In addition, some abnormalities are visible only under specific window settings on the radiologist’s monitor. Currently, manual adjustment of the window settings is the common approach to analyzing such low-quality images. In this paper, we propose an automated framework based on Capsule Networks, referred to as “WSO-CAPS”, to detect slices demonstrating infection in low and ultra-low-dose chest CT scans. The WSO-CAPS framework is equipped with a Window Setting Optimization (WSO) mechanism that automatically identifies the best window setting parameters, resembling the radiologists’ efforts. Experimental results on our in-house dataset show that WSO-CAPS enhances the capability of the Capsule Network and its counterparts to identify slices demonstrating infection, achieving an accuracy of 92.0%, a sensitivity of 90.3%, and a specificity of 93.3%. We believe that the proposed WSO-CAPS has high potential for use in future frameworks that work with CT scans, particularly those with an infection diagnosis step in their pipeline.
Title: WSO-CAPS: Diagnosis of Lung Infection from Low and Ultra-Low-Dose CT Scans Using Capsule Networks and Window Setting Optimization (2021 IEEE International Conference on Autonomous Systems (ICAS))
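The windowing the abstract refers to maps raw Hounsfield units (HU) into a display range defined by a window level and width. A minimal NumPy sketch of the operation (the lung-window parameters below are standard illustrative values, not the ones learned by WSO-CAPS):

```python
import numpy as np

def apply_window(hu_image, level, width):
    """Clamp a CT image (in Hounsfield units) to the window
    [level - width/2, level + width/2] and rescale to [0, 1]."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    clipped = np.clip(hu_image, lo, hi)
    return (clipped - lo) / (hi - lo)

# Example: the common "lung window" (level -600 HU, width 1500 HU)
scan = np.array([[-1000.0, -600.0], [150.0, 1000.0]])
windowed = apply_window(scan, level=-600, width=1500)
```

Abnormalities outside the chosen window collapse to pure black or white, which is why the choice of (level, width) matters so much for low-dose scans.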
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551183
Ambareesh Ravi, F. Karray
Understanding the notion of normality in visual data is a complex problem in computer vision, with many potential applications across several sectors. The immense effort required to tailor existing methods to real-world applications warrants a generic framework that is efficient, automated, and can be rapidly deployed, reducing the effort spent on model design and hyper-parameter tuning. Hence, we propose a novel, modular, model-agnostic improvement to the conventional AutoEncoder architecture based on visual soft-attention over the inputs, which makes the models robust and readily improves their performance on automated semi-supervised visual anomaly detection tasks without any extra hyper-parameter tuning. In addition, we discuss the role of attention in AutoEncoders (AEs), which can significantly improve learning and model efficacy, with detailed experimental results on diverse visual anomaly detection datasets.
Title: Attentive Autoencoders for Improving Visual Anomaly Detection
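The soft-attention gating described here can be illustrated generically: score each input location, squash the scores to (0, 1), and reweight the input before it is reconstructed. A toy NumPy sketch (the scalar scoring weights are illustrative, not the paper's learned attention module):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_attention_gate(features, w, b):
    """Score each spatial location, squash the score to (0, 1) with a
    sigmoid, and reweight the features elementwise; the gated output is
    what a downstream autoencoder would be asked to reconstruct."""
    scores = features * w + b     # per-location relevance scores
    attention = sigmoid(scores)   # soft mask in (0, 1)
    return features * attention, attention

x = np.array([[0.1, 2.0], [3.0, 0.2]])
gated, mask = soft_attention_gate(x, w=2.0, b=-2.0)
```

With these weights, high-activation locations keep most of their magnitude while low-activation ones are suppressed, concentrating the reconstruction loss on salient regions.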
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551114
Shixuan Hou, Chun Wang
Crowd-shipping systems, which use occasional drivers to deliver parcels in exchange for compensation, offer greater flexibility and cost-effectiveness than conventional company-owned vehicle shipping. This paper investigates a dynamic crowd-shipping system that uses in-store customers as crowd-shippers to deliver online orders on their way home, under the condition that the crowd-shippers’ acceptance is uncertain. Optimal matches between online orders and crowd-shippers, together with optimal compensation schemes, must be determined to minimize the total cost of the crowd-shipping system. To this end, we formulate the problem as a two-stage optimization model that determines matching results and compensation schemes sequentially. To evaluate the proposed model, we conduct a series of computational experiments. Results show that the average delivery cost is reduced by 7.30% compared to the conventional shipping system.
Title: Matching Models for Crowd-Shipping Considering Shipper’s Acceptance Uncertainty
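The two-stage structure (match orders to shippers first, then set compensation) can be sketched with a brute-force minimum-cost matching on a toy instance; the cost values and the compensation rule below are hypothetical, not the paper's model:

```python
from itertools import permutations

def min_cost_matching(cost):
    """Stage 1: exhaustively match each order (row) to a distinct
    crowd-shipper (column), minimizing total delivery cost.
    Fine for tiny instances; real systems would use an assignment solver."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return best, best_perm

def compensation(base_pay, accept_prob):
    """Stage 2 (toy rule): raise the offered pay when a shipper's
    acceptance is less certain."""
    return base_pay / accept_prob

# Hypothetical delivery costs: rows are orders, columns are shippers
costs = [[4.0, 2.0, 8.0],
         [3.0, 7.0, 5.0],
         [6.0, 4.0, 2.0]]
total, assignment = min_cost_matching(costs)
```

Solving the two stages sequentially, as the paper does, keeps each subproblem simple at the price of ignoring how compensation feeds back into acceptance during matching.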
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551116
Atharv Tendolkar, Amit Choraria, M. M. Manohara Pai, S. Girisha, Gavin Dsouza, K. Adithya
Technology in agriculture can help farmers, especially during the COVID pandemic, when labor is short and demand for food is rising. Such technology can effectively and reliably improve crop yield through automated processes, as exemplified by the Agrocopter. The Agrocopter, an autonomous drone with modular systems and on-board image processing, supports holistic crop management throughout the farm. It comes with targeted crop-spraying, nutrient-dropping, and seed-sowing modules that work in sync with the crop life cycle from sowing to harvesting. Equipped with an edge computing module, the drone performs periodic farm surveillance and plant health analysis using a combination of NDVI (Normalized Difference Vegetation Index) and semantic segmentation-based classification to take targeted actions. It uses filter banks and an SVM (Support Vector Machine) classifier to carry out pixel-wise analysis of stitched images and compute plant health indices in real time. Being easy to operate and maintain, it can be seamlessly integrated into farm systems and work alongside humans. It also has a completely modular, plug-and-play design. What sets the Agrocopter apart is its wide variety of applications, reliability, and precision, all at an affordable cost. Hence, the Agrocopter is the perfect aerial farm assistant for today’s farmer.
Title: Modified Crop Health Monitoring and Pesticide Spraying System Using NDVI and Semantic Segmentation: An AGROCOPTER-Based Approach
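The NDVI the drone relies on is the standard per-pixel ratio (NIR - Red) / (NIR + Red) computed from reflectance bands; a short NumPy sketch with illustrative reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense, healthy vegetation; values near zero
    or below indicate soil, water, or stressed plants. `eps` guards
    against division by zero on dark pixels."""
    return (nir - red) / (nir + red + eps)

# Illustrative per-pixel reflectances: healthy leaf, sparse cover, bare soil
nir = np.array([0.8, 0.5, 0.1])
red = np.array([0.1, 0.4, 0.3])
indices = ndvi(nir, red)
```

Thresholding such indices per pixel, combined with the segmentation masks, is what lets the drone direct spraying only where plant stress is detected.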
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551124
F. Rundo, R. Leotta, V. Piuri, A. Genovese, F. Scotti, S. Battiato
Car driving safety is one of the major targets of the ADAS (Advanced Driver Assistance Systems) technologies deeply investigated by the scientific community and car makers. From intelligent suspension control to adaptive braking, ADAS solutions significantly improve both driving comfort and safety. The aim of this contribution is to propose a driving safety assessment system that combines deep networks equipped with a Criss-Cross self-attention mechanism, used to classify the driving road surface, with physio-based drowsiness monitoring of the driver. The retrieved driving safety assessment performance confirms the effectiveness of the proposed pipeline.
Title: Intelligent Road Surface Deep Embedded Classifier for an Efficient Physio-Based Car Driver Assistance
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551178
R. Kozma
Cutting-edge autonomous systems demonstrate outstanding performance in many important tasks requiring intelligent data processing under well-known conditions, supported by massive computational resources and big data. However, the performance of these systems may deteriorate drastically when the data are perturbed or the environment changes dynamically, whether due to natural effects or man-made disturbances. The challenges are especially daunting in edge computing scenarios and on-board applications with limited resources, where data, energy, and computational power are constrained while critical decisions must be made rapidly and robustly. A neuromorphic perspective provides crucial support under such conditions. Human brains are efficient devices running on 20 W of power (just like a light bulb!), drastically less than today’s supercomputers, which require megawatts to solve specific learning tasks. This is not sustainable. Brains use spatio-temporal oscillations to implement pattern-based computing, going beyond the sequential symbol-manipulation paradigm of traditional Turing machines. Neuromorphic spiking chips, including memristor technology, provide crucial support to the field. Application examples include on-board signal processing, distributed sensor systems, autonomous robot navigation and control, and rapid response to emergencies.
Title: Sustainable Autonomy of Intelligent Systems: Challenges and Perspectives
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551153
Ming Hou
The advancement of AI enables the evolution of machines from relatively simple automation to completely autonomous systems that augment human capabilities, with improved quality and productivity in work and life. The singularity is near! However, humans are still vulnerable. The COVID-19 pandemic reminds us of our limited knowledge about nature. The recent Boeing 737 Max accidents again ring the alarm about the potential risks of human-autonomy symbiosis technologies. A key challenge of safe and effective human-autonomy teaming is enabling “trust” within the human-machine team. It is even more challenging when we face insufficient data, incomplete information, indeterministic conditions, and inexhaustive solutions for uncertain actions. This calls for appropriate design guidance and scientific methodologies for developing safety-critical autonomous systems and AI functions. The question is how to build and maintain a safe, effective, and trusted partnership between humans and autonomous systems. This talk discusses a context-based, interaction-centred design (ICD) approach for developing a safe and collaborative partnership between humans and technology by optimizing the interaction between human intelligence and AI. An associated trust model, IMPACTS (Intention, Measurability, Performance, Adaptivity, Communications, Transparency, and Security), is also introduced to enable practitioners to foster an assured and calibrated trust relationship between humans and their partner autonomous systems. A real-world example of human-autonomy teaming in a military context illustrates the utility and effectiveness of these trust enablers.
Title: Enabling Trust in Autonomous Human-Machine Teaming
Pub Date: 2021-08-11 | DOI: 10.1109/ICAS49788.2021.9551135
Hamidreza Khodashenas, Pedram Fekri, M. Zadeh, J. Dargahi
Atrial fibrillation is a kind of cardiac arrhythmia in which the electrical signals of the heart are uncoordinated. The prevalence of this disease is increasing globally, and its curative treatment is catheter ablation therapy. Adequate contact force between the tip of the catheter and the cardiac tissue can significantly increase the efficiency and sustainability of this treatment. To satisfy cardiologists’ need for haptic feedback during surgery and to increase the efficacy of ablation therapy, this paper proposes a sensor-free method in which the system estimates the force directly from image data. To this end, a mechanical setup is designed and implemented to imitate the real ablation procedure. A novel vision-based feature extraction algorithm is also proposed to capture the catheter’s bending variations in images obtained from the setup. Using the extracted feature, machine learning algorithms estimate the forces. The results reveal $\mathrm{MAE} < 0.0041$, showing that the proposed system estimates the force precisely.
Title: A Vision-Based Method for Estimating Contact Forces in Intracardiac Catheters
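The pipeline's final step, regressing a scalar bending feature onto contact force and judging the fit by mean absolute error, can be sketched on synthetic data (the linear ground truth and feature values below are fabricated for illustration and are not the paper's measurements or model):

```python
import numpy as np

# Hypothetical stand-in for the paper's pipeline: a scalar "bending"
# feature extracted from catheter images, regressed onto contact force.
rng = np.random.default_rng(0)
bending = rng.uniform(0.0, 1.0, size=100)
force = 0.04 * bending + 0.01          # synthetic, noiseless ground truth

# Fit force = a * bending + b by ordinary least squares.
A = np.vstack([bending, np.ones_like(bending)]).T
(a, b), *_ = np.linalg.lstsq(A, force, rcond=None)

predicted = a * bending + b
mae = np.mean(np.abs(predicted - force))   # mean absolute error
```

On real, noisy image features the regressor and its MAE are what quantify whether vision alone can replace a tip force sensor.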
Pub Date: 2021-08-09 | DOI: 10.1109/ICAS49788.2021.9551157
Harshala Gammulle, Tharindu Fernando, S. Sridharan, S. Denman, C. Fookes
This paper presents a novel lightweight COVID-19 diagnosis framework using CT scans. Our system utilises a novel two-stage approach to generate robust and efficient diagnoses across heterogeneous patient-level inputs. We use a powerful backbone network as a feature extractor to capture discriminative slice-level features. These features are aggregated by a lightweight network to obtain a patient-level diagnosis. The aggregation network is carefully designed to have a small number of trainable parameters, while possessing sufficient capacity to generalise to diverse variations within different CT volumes and to adapt to noise introduced during data acquisition. We achieve a significant performance increase over the baselines when benchmarked on the SPGC COVID-19 Radiomics Dataset, despite having only 2.5 million trainable parameters and requiring only 0.623 seconds on average to process a single patient’s CT volume on an Nvidia GeForce RTX 2080 GPU.
Title: Multi-Slice Net: A Novel Lightweight Framework for COVID-19 Diagnosis
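The two-stage design, per-slice features from a backbone followed by a lightweight patient-level aggregator, can be sketched with mean-pooling and a tiny linear head (the feature values and weights below are illustrative; the paper's aggregation network is a learned model):

```python
import numpy as np

def aggregate_patient(slice_features, w, b):
    """Pool per-slice feature vectors into one patient-level vector,
    then apply a tiny linear classifier. Mean-pooling keeps the head
    independent of how many slices a given CT volume contains."""
    pooled = slice_features.mean(axis=0)   # (n_slices, d) -> (d,)
    logit = pooled @ w + b
    return 1.0 / (1.0 + np.exp(-logit))    # probability of a positive case

# Hypothetical: 4 slices, 3-dim features from some backbone network
feats = np.array([[0.2, 1.0, -0.3],
                  [0.4, 0.8, -0.1],
                  [0.0, 1.2, -0.5],
                  [0.2, 1.0, -0.3]])
prob = aggregate_patient(feats, w=np.array([1.0, 0.5, -1.0]), b=-0.5)
```

Because only the small head runs per patient after slice features are cached, this split is what keeps the parameter count and per-volume latency low.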
Pub Date: 2021-07-31 | DOI: 10.1109/ICAS49788.2021.9551146
Nicolas Ewen, N. Khan
Neural networks often require large amounts of expert-annotated data to train. When changes are made to the medical imaging process, trained networks may not perform as well, and obtaining large amounts of expert annotations for each change in the imaging process can be time-consuming and expensive. Online unsupervised learning is a method that has been proposed to deal with situations where there is a domain shift in incoming data and a lack of annotations. The aim of this study is to see whether online unsupervised learning can help COVID-19 CT scan classification models adjust to slight domain shifts when no annotations are available for the new data. A total of six experiments are performed using three test datasets with differing amounts of domain shift. These experiments compare the performance of the online unsupervised learning strategy to a baseline, as well as how the strategy performs across different domain shifts. Code for online unsupervised learning can be found at this link: https://github.com/Mewtwo/online-unsupervised-learning
Title: Online Unsupervised Learning for Domain Shift in COVID-19 CT Scan Datasets
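One common form of online unsupervised adaptation (not necessarily this paper's exact method; see the linked code) updates feature-normalization statistics from unlabeled target-domain batches as they arrive:

```python
import numpy as np

class OnlineNormalizer:
    """Adapt feature normalization to a shifted domain by updating
    running mean/variance from each unlabeled batch via a momentum
    update, a common label-free way to absorb mild domain shift."""
    def __init__(self, dim, momentum=0.1):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.momentum = momentum

    def adapt(self, batch):
        m = self.momentum
        self.mean = (1 - m) * self.mean + m * batch.mean(axis=0)
        self.var = (1 - m) * self.var + m * batch.var(axis=0)

    def normalize(self, batch):
        return (batch - self.mean) / np.sqrt(self.var + 1e-5)

# Simulated domain shift: target features are offset to mean 5
rng = np.random.default_rng(1)
norm = OnlineNormalizer(dim=2)
for _ in range(200):
    norm.adapt(rng.normal(loc=5.0, size=(32, 2)))
```

After enough unlabeled batches the running statistics track the shifted domain, so downstream layers again see roughly zero-mean, unit-variance inputs without any new annotations.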