Pub Date: 2023-01-01 | DOI: 10.1007/s00521-023-08244-2
Tengku Mazlin Tengku Ab Hamid, Roselina Sallehuddin, Zuriahati Mohd Yunos, Aida Ali
Detecting a hearing disorder early is critical for reducing the effects of hearing loss, since interventions that preserve the remaining hearing ability can then be applied to support the development of human communication. Recently, the explosive growth in dataset features has made it harder for audiologists to decide on the proper treatment for a patient. In most cases, irrelevant features and improper classifier parameters strongly degrade the accuracy of an audiometry system. The two problems are interdependent: classification accuracy worsens if feature selection and parameter tuning are carried out independently. Although a filter algorithm can eliminate irrelevant features, it does not account for feature interdependence and may therefore select significant features poorly. Improper kernel parameter settings may also hurt accuracy. In this paper, an ensemble-filter feature selection method based on Information Gain (IG), Gain Ratio (GR), Chi-squared (CS), and Relief-F (RF), combined with harmonized optimization of Particle Swarm Optimization (PSO) and a Support Vector Machine (SVM), is presented to mitigate these problems. The ensemble of filters first retains the top dominant features relevant for classification; PSO and the SVM are then optimized simultaneously to reach the optimal solution. Results on a standard Audiology dataset show that the proposed method achieves 96.50% accuracy, outperforming the classical SVM, which indicates that the method is effective in handling high-dimensional data for hearing disorder prediction.
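The ensemble-filter step described above can be sketched by rank aggregation: each filter (IG, GR, CS, Relief-F) ranks every feature, and the features with the best mean rank are kept. The function and the toy scores below are illustrative, not the paper's implementation.

```python
# Hypothetical sketch: aggregate several filter rankings by mean rank,
# then keep the top-k features. Scores and filter names are toy values.

def aggregate_filter_ranks(score_lists, k):
    """score_lists: dict {filter_name: [score per feature]} (higher = better).
    Returns indices of the k features with the best (lowest) mean rank."""
    n = len(next(iter(score_lists.values())))
    mean_rank = [0.0] * n
    for scores in score_lists.values():
        # rank 0 = best feature under this filter
        order = sorted(range(n), key=lambda i: -scores[i])
        for rank, idx in enumerate(order):
            mean_rank[idx] += rank / len(score_lists)
    return sorted(range(n), key=lambda i: mean_rank[i])[:k]

filter_scores = {
    "info_gain": [0.9, 0.1, 0.5, 0.3],
    "chi2":      [0.8, 0.2, 0.6, 0.1],
    "relief_f":  [0.7, 0.0, 0.9, 0.2],
}
print(aggregate_filter_ranks(filter_scores, 2))  # → [0, 2]
```

The surviving subset would then be passed to the joint PSO-SVM stage for simultaneous parameter and feature optimization.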
Title: Ensemble filters with harmonize PSO-SVM algorithm for optimal hearing disorder prediction
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9894525/pdf/
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-08078-4
Mohammad Hashem Ryalat, Osama Dorgham, Sara Tedmori, Zainab Al-Rahamneh, Nijad Al-Najdawi, Seyedali Mirjalili
Digital image processing techniques and algorithms have become valuable tools for supporting medical experts in identifying, studying, and diagnosing diseases. Image segmentation methods are among the most widely used techniques in this area, simplifying image representation and analysis. Over the last few decades, many approaches to image segmentation have been proposed, among which multilevel thresholding methods have shown better results than most others. Traditional statistical approaches such as the Otsu and Kapur methods are the standard benchmark algorithms for automatic image thresholding. Such algorithms provide optimal results, yet they incur high computational costs when multilevel thresholding is required, which can be treated as an optimization problem. In this work, the Harris hawks optimization technique is combined with Otsu's method to reduce the computational cost while maintaining near-optimal outcomes. The proposed approach is tested on a publicly available imaging dataset of chest images with clinical and genomic correlates, representing a rural COVID-19-positive (COVID-19-AR) population. According to various performance measures, the proposed approach achieves a substantial decrease in computational cost and time to converge while maintaining quality highly competitive with the Otsu method for the same threshold values.
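The objective that a metaheuristic such as Harris hawks optimization maximizes here is Otsu's between-class variance evaluated on the image histogram. A minimal sketch of that criterion, using a toy histogram rather than real image data:

```python
# Illustrative Otsu objective for multilevel thresholding: the optimizer
# searches for the threshold set that maximizes this value.

def between_class_variance(hist, thresholds):
    """hist: pixel counts per grey level; thresholds: sorted cut points."""
    total = sum(hist)
    levels = range(len(hist))
    cuts = [0] + list(thresholds) + [len(hist)]
    mu_total = sum(l * h for l, h in zip(levels, hist)) / total
    var = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = sum(hist[lo:hi]) / total          # class probability
        if w == 0:
            continue
        mu = sum(l * hist[l] for l in range(lo, hi)) / (w * total)
        var += w * (mu - mu_total) ** 2       # weighted class separation
    return var

# A bimodal toy histogram: the cut between the two modes scores highest.
hist = [10, 10, 0, 0, 10, 10]
print(between_class_variance(hist, [3]))  # → 4.0
```

Exhaustive search over threshold combinations is what makes multilevel Otsu expensive; the metaheuristic samples this objective instead of enumerating it.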
Title: Harris hawks optimization for COVID-19 diagnosis based on multi-threshold image segmentation
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9714421/pdf/
Pub Date: 2023-01-01 | Epub Date: 2021-07-06 | DOI: 10.1007/s00521-021-06286-y
Marek R Ogiela, Urszula Ogiela
One of the most important goals of modern medicine is the prevention of pandemic and civilization diseases. Advanced IT infrastructures and intelligent AI systems are used for such tasks, supporting patient diagnosis and treatment. In our research, we aim to define efficient tools for coronavirus classification, particularly using mathematical linguistic methods. This paper presents ways of applying linguistic techniques to the effective management of medical data obtained during coronavirus treatment, and the possibilities of applying such methods to classifying the different coronavirus variants detected in particular patients. Several types of coronavirus are currently distinguished, characterized by differences in their RNA structure, which in turn increase the rates of mutation and infection.
Title: Linguistic methods in healthcare application and COVID-19 variants classification
Pub Date: 2023-01-01 | Epub Date: 2021-09-15 | DOI: 10.1007/s00521-021-06440-6
Xingdong Wu, Chao Liu, Lijun Wang, Muhammad Bilal
Smart healthcare monitoring systems are proliferating thanks to Internet of Things (IoT)-enabled portable medical devices. IoT and deep learning are helping the healthcare sector prevent disease by evolving care from face-to-face consultation to telemedicine. To protect athletes' lives from life-threatening conditions and injuries during training and competition, real-time monitoring of physiological indicators is critical. In this work, we present a deep learning-based, IoT-enabled real-time health monitoring system. The proposed system uses wearable medical devices to measure vital signs and applies various deep learning algorithms to extract valuable information, taking Sanda athletes as a case study. The algorithms help physicians analyze the athletes' conditions and prescribe the proper medication even when the doctors are away. The system's performance is extensively evaluated with a cross-validation test using various statistical performance metrics and is reported in terms of precision, recall, AUC, and F1-score. The system is presented as an effective tool for diagnosing serious diseases among athletes, such as brain tumors, heart disease, and cancer.
Title: Internet of things-enabled real-time health monitoring system using deep learning
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8442525/pdf/
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-08169-2
Joydeep Dey
Telemedicine is one of the safest ways to provide healthcare to remote patients with the help of digitization. In this paper, a state-of-the-art session key scheme based on priority-oriented neural machines is proposed and validated. Soft computing within the ANN domain is used extensively and adapted here. Telemedicine requires secure data communication between patients and doctors regarding treatment. Only the best-fitted hidden neuron contributes to the formation of the neural output, and minimum correlation was the selection criterion in this study. The Hebbian learning rule was applied to both the patient's and the doctor's neural machines, and fewer iterations were needed for the two machines to synchronize. The key generation time was thereby shortened: 4.011 ms, 4.324 ms, 5.338 ms, 5.691 ms, and 6.105 ms for 56-bit, 128-bit, 256-bit, 512-bit, and 1024-bit session keys, respectively. Different key sizes were statistically tested and accepted, a derived value-based function yielded successful outcomes, and partial validations under different mathematical hardness assumptions were also imposed. The proposed technique is therefore suitable for session key generation and authentication in telemedicine, preserving patients' data privacy, and is highly resistant to numerous data attacks on public networks. Partial transmission of the session key prevents intruders from decoding the bit patterns of the proposed keys.
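Neural key agreement of this kind is usually built on two tree parity machines that exchange outputs on random inputs and apply a Hebbian update only when their outputs agree, until their weights coincide and can serve as shared key material. A toy sketch of that mechanism (sizes and parameters are illustrative, not the paper's PANM design):

```python
import random

# Two toy tree parity machines synchronizing via the Hebbian rule; the
# identical weight vectors at the end stand in for shared session-key bits.

def tpm_output(weights, x):
    """Product of the signs of the hidden-unit activations."""
    sigmas = []
    for w_row, x_row in zip(weights, x):
        s = sum(wi * xi for wi, xi in zip(w_row, x_row))
        sigmas.append(1 if s >= 0 else -1)
    tau = 1
    for s in sigmas:
        tau *= s
    return tau, sigmas

def hebbian_update(weights, x, tau, sigmas, L):
    for w_row, x_row, sigma in zip(weights, x, sigmas):
        if sigma == tau:                      # only agreeing hidden units learn
            for j in range(len(w_row)):
                w_row[j] = max(-L, min(L, w_row[j] + sigma * x_row[j]))

random.seed(0)
K, N, L = 3, 4, 3                             # hidden units, inputs, weight bound
wa = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
wb = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
for _ in range(100000):                       # cap; sync normally takes far fewer
    if wa == wb:
        break
    x = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
    ta, sa = tpm_output(wa, x)
    tb, sb = tpm_output(wb, x)
    if ta == tb:                              # update only when outputs agree
        hebbian_update(wa, x, ta, sa, L)
        hebbian_update(wb, x, tb, sb, L)
```

Fewer synchronization iterations translate directly into the shorter key generation times the abstract reports.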
Title: State-of-the-art session key generation on priority-based adaptive neural machine (PANM) in telemedicine
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10032630/pdf/
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07997-6
Hanh Thi-Hong Duong, Linh Thi-My Tran, Huy Quoc To, Kiet Van Nguyen
Academic probation at universities has become a matter of pressing concern in recent years, as many students face its severe consequences. We carried out this research to find ways to reduce the problem, using large data sources from the education sector and modern machine learning techniques to build an academic warning system. Our system is based on academic performance, which directly reflects a student's probation status at the university. During the research, we produced a dataset extracted and developed from raw data sources containing rich information about students, subjects, and scores, and enriched it with many features useful for predicting academic warning status via feature generation techniques and feature selection strategies. Notably, the contributed dataset is flexible and scalable: we provide detailed calculation formulas whose inputs can be found at any university or college in Vietnam, so any university can reuse the dataset or reconstruct a similar one from its own raw academic database. We further combined data variants, techniques for handling unbalanced data, and model selection to propose suitable machine learning algorithms for the best possible warning system. As a result, we propose a two-stage academic performance warning system for higher education that achieves an F2-score above 74% at the beginning of the semester using a Support Vector Machine and above 92% before the final examination using LightGBM.
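The two-stage structure can be sketched as a coarse early-semester screen followed by a pre-exam re-scoring of the flagged students. The rule-based "models" and thresholds below are invented stand-ins for the paper's SVM and LightGBM classifiers:

```python
# Hypothetical two-stage warning pipeline: stage one flags risk early in
# the semester; stage two re-scores flagged students before finals.
# Feature names and cutoffs are illustrative only.

def stage_one(gpa_so_far):
    return gpa_so_far < 2.0                       # True = at risk (early screen)

def stage_two(gpa_so_far, attendance):
    return gpa_so_far < 2.0 and attendance < 0.7  # refined pre-exam check

def warn(students):
    flagged = {sid: s for sid, s in students.items() if stage_one(s["gpa"])}
    return [sid for sid, s in flagged.items()
            if stage_two(s["gpa"], s["attendance"])]

students = {
    "s1": {"gpa": 1.8, "attendance": 0.5},
    "s2": {"gpa": 1.9, "attendance": 0.9},
    "s3": {"gpa": 3.2, "attendance": 0.4},
}
print(warn(students))  # → ['s1']
```

In the paper the two stages differ not only in model but in available data, which is why the pre-exam stage reaches a much higher F2-score than the early screen.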
Title: Academic performance warning system based on data driven for higher education
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9640845/pdf/
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-021-05839-5
Alireza Vafaei Sadr, Bruce A Bassett, M Kunz
Anomaly detection is challenging, especially for large, high-dimensional datasets. Here, we explore a general anomaly detection framework based on dimensionality reduction and unsupervised clustering. DRAMA is released as a general Python package that implements the framework with a wide range of built-in options. The approach identifies the primary prototypes in the data; anomalies are detected by their large distances from the prototypes, either in the latent space or in the original, high-dimensional space. DRAMA is tested on a wide variety of simulated and real datasets in up to 3000 dimensions and is found to be robust and highly competitive with commonly used anomaly detection algorithms, especially in high dimensions. The flexibility of the framework allows significant optimization once some examples of anomalies are available, making it well suited to online anomaly detection, active learning, and highly unbalanced datasets. In addition, DRAMA naturally provides clustering of outliers for subsequent analysis.
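The prototype-distance idea can be illustrated in miniature: cluster the data to find prototypes, then score each point by its distance to the nearest prototype. This is a generic sketch of the concept, not the DRAMA package's API, and the data are toy values:

```python
import math

# Minimal prototype-distance anomaly scoring: a tiny k-means finds the
# prototypes, and the anomaly score is distance to the nearest one.

def fit_prototypes(points, k, iters=10):
    protos = points[:k]                       # naive init: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, protos[i]))
            groups[j].append(p)
        protos = [tuple(sum(c) / len(g) for c in zip(*g)) if g else protos[i]
                  for i, g in enumerate(groups)]
    return protos

def anomaly_score(p, protos):
    return min(math.dist(p, q) for q in protos)

data = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
protos = fit_prototypes(data, 2)
print(anomaly_score((10, 10), protos))   # far from both prototypes: large
print(anomaly_score((0.05, 0.05), protos))  # inside a cluster: small
```

DRAMA's contribution is doing this after dimensionality reduction and with many interchangeable reduction/clustering/metric options; the scoring principle is the same.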
Title: A flexible framework for anomaly detection via dimensionality reduction
Pub Date: 2023-01-01 | Epub Date: 2023-05-09 | DOI: 10.1007/s00521-023-08625-7
Lidia Ogiela, Arcangelo Castiglione, Brij B Gupta, Dharma P Agrawal
Title: IoT-based health monitoring system to handle pandemic diseases using estimated computing
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10169154/pdf/
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-023-08324-3
Mohit Agarwal, Suneet K Gupta, K K Biswas
Researchers have adapted conventional deep learning classification networks into Fully Convolutional Networks (FCNs) for accurate semantic segmentation. However, such models are expensive in both storage and inference time and are not readily deployable on edge devices. In this paper, a compressed version of a VGG16-based Fully Convolutional Network is developed using Particle Swarm Optimization. The developed model offers a tremendous saving in storage space and faster inference time, and can be implemented on edge devices. The efficacy of the proposed approach is tested on potato late blight leaf images from the publicly available PlantVillage dataset, a street scene image dataset, and a lung X-ray dataset; the compressed model approaches the accuracy of the standard FCN even after 851× compression.
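The search engine behind the compression is standard Particle Swarm Optimization. A self-contained toy PSO is shown below minimizing a stand-in fitness (sum of squares); in the paper the fitness would instead trade segmentation accuracy against model size, which this sketch does not attempt:

```python
import random

# Toy PSO with inertia and cognitive/social pulls; parameters are
# conventional defaults, not the paper's settings.

def pso(fitness, dim, n_particles=20, iters=300, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # per-particle best positions
    gbest = min(pbest, key=fitness)[:]        # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda x: sum(v * v for v in x), dim=3)
```

For network compression, each particle position would encode which filters or channels to keep, with the fitness penalizing both accuracy loss and parameter count.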
Title: Development of a compressed FCN architecture for semantic segmentation using Particle Swarm Optimization
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9897161/pdf/
Pub Date: 2023-01-01 | Epub Date: 2023-04-30 | DOI: 10.1007/s00521-023-08554-5
Şefki Kolozali, Lia Chatzidiakou, Roderic Jones, Jennifer K Quint, Frank Kelly, Benjamin Barratt
In this study, we present a cohort study of 106 COPD patients who used portable environmental sensor nodes with attached air pollution sensors and activity-related sensors, together with daily symptom records and peak flow measurements, to monitor activity and personal exposure to air pollution. This is the first study that attempts to predict COPD symptoms from personal air pollution exposure; the resulting system can detect patients' symptoms one day before they appear. We propose a Probabilistic Latent Component Analysis (PLCA) model based on 3-dimensional and 4-dimensional spectral dictionary tensors for personalised and population monitoring, respectively, combined with Linear Dynamic Systems (LDS) to track the patients' symptoms. We compared the PLCA and PLCA-LDS models against Random Forest (RF) models in identifying COPD patients' symptoms, since tree-based classifiers have been used for remote COPD monitoring in the literature, and found significant differences between the classifiers, the symptoms, and the personalised versus population factors. Our results show that the proposed PLCA-LDS-3D model outperformed the PLCA and RF models by 4-20% on average. Using only air pollutants as input, the PLCA-LDS-3D forecasting accuracy of the personalised and population models was 48.67% and 36.33% for worsening of lung capacity, and 38.67% and 19% for exacerbation of COPD symptoms, respectively. We have shown that indicators of the quality of an individual's environment, specifically air pollutants, are as good predictors of worsening respiratory symptoms in COPD patients as direct measurement.
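The "one day in advance" framing amounts to building a lagged supervised dataset: each day's exposure features are paired with the next day's symptom label. A minimal sketch with invented pollutant values (the paper's actual features are spectral dictionary activations, not raw readings):

```python
# Sketch of one-day-ahead supervision: exposure on day t predicts the
# symptom label on day t+1. Values below are invented toy data.

def make_lagged_pairs(exposure, symptoms):
    """Pair exposure[t] with symptoms[t+1]; the last day has no target."""
    return list(zip(exposure[:-1], symptoms[1:]))

no2 = [10, 40, 12, 55]     # hypothetical daily personal NO2 exposure
sym = [0, 0, 1, 0]         # 1 = symptom worsening recorded that day
print(make_lagged_pairs(no2, sym))  # → [(10, 0), (40, 1), (12, 0)]
```

Any forecaster, whether the PLCA-LDS model or a Random Forest baseline, is then trained on pairs of this shape.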
Title: Early detection of COPD patients' symptoms with personal environmental sensors: a remote sensing framework using probabilistic latent component analysis with linear dynamic systems
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10338599/pdf/