Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07925-8
Imène Neggaz, Nabil Neggaz, Hadria Fizazi
Due to technical advancements and the proliferation of mobile applications, facial analysis (FA) of humans has recently become an important area of computer vision research. FA investigates a variety of problems, including gender recognition, facial expression recognition, and age and race recognition, with the goal of automatically understanding social interactions. To address the dimensionality challenge posed by pre-trained CNN networks, the scientific community has developed numerous techniques inspired by biology, swarm intelligence theory, physics, and mathematical rules. This article presents a gender recognition system based on scAOA, a modified version of the Archimedes optimization algorithm (AOA). The new variant (scAOA) enhances the exploitation stage by using trigonometric operators inspired by the sine cosine algorithm (SCA) to avoid local optima and accelerate convergence. The main purpose of this paper is to apply scAOA to select the relevant deep features produced by two pretrained CNN models (AlexNet and ResNet) to recognize a person's gender, categorized into two classes (men and women). Two datasets are used to evaluate the proposed approach: the Brazilian FEI dataset and the Georgia Tech Face dataset (GT). In terms of accuracy, F-score, and statistical tests, the comparative analysis demonstrates that scAOA outperforms other modern, competitive optimizers such as AOA, SCA, the ant lion optimizer (ALO), the salp swarm algorithm (SSA), the grey wolf optimizer (GWO), the simple genetic algorithm (SGA), the grasshopper optimization algorithm (GOA), and the particle swarm optimizer (PSO).
"Boosting Archimedes optimization algorithm using trigonometric operators based on feature selection for facial analysis." Neural Computing & Applications 35(5): 3903-3923.
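The abstract does not spell out scAOA's update equations, but the trigonometric operator it borrows from SCA is commonly written as X' = X + r1·sin(r2)·|r3·P − X| (or the cosine form), with r1 decaying over the iterations to shift from exploration to exploitation. A minimal sketch of that SCA operator, with names and parameters chosen for illustration:

```python
import math
import random

def sca_update(x, best, t, t_max, a=2.0):
    """One sine-cosine position update (the classic SCA operator).

    x:     current position (list of floats)
    best:  best position found so far (the destination P)
    t:     current iteration; t_max: iteration budget
    a:     initial step amplitude, decayed linearly to 0
    """
    r1 = a - t * (a / t_max)  # amplitude shrinks -> exploitation late in the run
    new_x = []
    for xi, bi in zip(x, best):
        r2 = random.uniform(0.0, 2.0 * math.pi)  # angle of the oscillation
        r3 = random.uniform(0.0, 2.0)            # random weight on the destination
        if random.random() < 0.5:
            xi = xi + r1 * math.sin(r2) * abs(r3 * bi - xi)
        else:
            xi = xi + r1 * math.cos(r2) * abs(r3 * bi - xi)
        new_x.append(xi)
    return new_x
```

For feature selection, such continuous positions are typically binarized with a transfer function (e.g. keep feature i when sigmoid(x_i) > 0.5); how scAOA blends this operator into AOA's exploitation phase is specified in the paper itself.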
Pub Date: 2023-01-01 | Epub Date: 2022-01-20 | DOI: 10.1007/s00521-021-06720-1
Andrea Bobbio, Lelio Campanile, Marco Gribaudo, Mauro Iacono, Fiammetta Marulli, Michele Mastroianni
The wide use of IT resources to assess and manage the recent COVID-19 pandemic has increased the effectiveness of countermeasures and the pervasiveness of monitoring and prevention. Unfortunately, the literature reports that IoT devices, a widely adopted technology for these applications, are characterized by security vulnerabilities that are difficult to manage at the state level. Comparable problems exist for related technologies that leverage smartphones, such as contact tracing applications, and for non-medical health monitoring devices. In such situations, these vulnerabilities may be exploited in the cyber domain to overload crisis management systems with false alarms and to interfere with the interests of target countries, with consequences for their economies and political equilibria. In this paper we analyze the potential threat to an example subsystem, show how these influences may impact it, and evaluate a possible consequence.
"A cyber warfare perspective on risks related to health IoT devices and contact tracing." Neural Computing & Applications 35(19): 13823-13837.
Pub Date: 2023-01-01 | Epub Date: 2021-05-20 | DOI: 10.1007/s00521-021-06102-7
Rong Yi, Lanying Tang, Yuqiu Tian, Jie Liu, Zhihui Wu
Pneumonia is a hazardous disease that can be life-threatening. It needs to be diagnosed at an early stage to prevent further harm and save lives. Various techniques are used to identify pneumonia, including chest X-ray, blood culture, sputum culture, fluid sample, bronchoscopy, and pulse oximetry. Chest X-ray is the most widely used method to diagnose pneumonia and is considered one of the most reliable approaches. To analyse chest X-ray images accurately, a radiologist needs expertise and experience in the relevant domain. However, human-assisted approaches have drawbacks: limited expert availability, treatment cost, availability of diagnostic tools, and so on. Hence the need for an intelligent, automated system that operates on chest X-ray images and diagnoses pneumonia. The primary purpose of technology is to develop algorithms and tools that assist humans and make their lives easier. This study proposes a scalable and interpretable deep convolutional neural network (DCNN) to identify pneumonia using chest X-ray images. The proposed modified DCNN model first extracts useful features from the images and then classifies them into normal and pneumonia classes. The proposed system has been trained and tested on a chest X-ray image dataset, and various performance metrics have been used to inspect the stability and efficacy of the model. Experimental results show that the proposed model outperforms other state-of-the-art methodologies used to identify pneumonia.
"Identification and classification of pneumonia disease using a deep learning-based intelligent computational framework." Neural Computing & Applications 35(20): 14473-14486.
Pub Date: 2023-01-01 | Epub Date: 2023-05-05 | DOI: 10.1007/s00521-023-08612-y
Bahareh Rezazadeh, Parvaneh Asghari, Amir Masoud Rahmani
The infectious disease Covid-19 has been causing severe social, economic, and human suffering across the globe since 2019. Countries have adopted different strategies over the last few years to combat Covid-19 based on their capabilities, technological infrastructure, and investments. A massive epidemic like this cannot be controlled without an intelligent and automatic healthcare system. The first reaction to the disease outbreak was lockdown, and researchers focused mostly on developing methods to diagnose the disease and understand its behavior. However, as the new lifestyle has become normalized, research has shifted to computer-aided methods to monitor, track, detect, and treat individuals and to provide services to citizens. Thus, the Internet of Things based on fog-cloud computing, combined with artificial intelligence approaches such as machine learning and deep learning, offers practical tools. This article surveys computer-based approaches to combating Covid-19 in terms of prevention, detection, and service provision. Technically and statistically, it analyzes current methods, categorizes them, presents a technical taxonomy, and explores future and open issues.
"Computer-aided methods for combating Covid-19 in prevention, detection, and service provision approaches." Neural Computing & Applications 35(20): 14739-14778.
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-07853-7
Carmen De Maio, Giuseppe Fenza, Mariacristina Gallo, Vincenzo Loia, Claudio Stanzione
The spread of machine learning (ML) and deep learning (DL) methods in critical application domains, such as medicine and healthcare, introduces many opportunities but also raises risks and ethical issues, mainly pertaining to the lack of transparency. This contribution deals with the lack of transparency of ML and DL models, focusing on the lack of trust in the predictions and decisions they generate. To this end, the paper establishes a measure, namely Congruity, that provides information about the reliability of ML/DL model results. Congruity is defined via the lattice extracted through formal concept analysis built on the training data. It measures how close incoming data items are to those used at the training stage of the ML and DL models. The general idea is that the reliability of a trained model's results is highly correlated with the similarity between the input data and the training set. The objective of the paper is to demonstrate the correlation between Congruity and the well-known Accuracy of the whole ML/DL model. Experimental results reveal that the correlation between Congruity and Accuracy exceeds 80% across the ML models considered.
"Toward reliable machine learning with Congruity: a quality measure based on formal concept analysis." Neural Computing & Applications 35(2): 1899-1913.
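The abstract does not give Congruity's lattice-based formula; the formal concept analysis machinery is in the paper. As a hypothetical stand-in for the underlying idea, one can score how close a new sample is to the training data, so that familiar inputs get a score near 1 and out-of-distribution inputs get a low score. The function and data below are illustrative, not the paper's definition:

```python
def congruity_proxy(sample, training_set):
    """Hypothetical proxy for a Congruity-like score: a value in (0, 1]
    that grows as `sample` gets closer to its nearest training point."""
    def dist(a, b):
        # Euclidean distance between two equal-length numeric vectors
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = min(dist(sample, row) for row in training_set)
    return 1.0 / (1.0 + nearest)

# Toy training set for illustration
train = [[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]]
print(congruity_proxy([0.5, 0.5], train))  # 1.0: sample coincides with a training point
print(congruity_proxy([5.0, 5.0], train))  # low score: far from all training data
```

The design intuition matches the abstract: a trained model's predictions deserve more trust on inputs that resemble what it saw during training.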
Pub Date: 2023-01-01 | DOI: 10.1007/s00521-022-08115-2
Philip Kenneweg, Dominik Stallmann, Barbara Hammer
Transfer learning schemes based on deep networks trained on huge image corpora offer state-of-the-art technology in computer vision. Here, supervised and semi-supervised approaches constitute efficient techniques that work well with comparably small data sets. Yet such applications are currently restricted to domains where suitable deep network models are readily available. In this contribution, we address an important application area in biotechnology, the automatic analysis of CHO-K1 suspension growth in microfluidic single-cell cultivation, where data characteristics are very dissimilar to existing domains and trained deep networks cannot easily be adapted by classical transfer learning. We propose a novel transfer learning scheme that expands a recently introduced Twin-VAE architecture, trained on realistic and synthetic data, and we adapt its specialized training procedure to the transfer learning setting. In this specific domain, few to no labels often exist and annotations are costly. We investigate a novel transfer learning strategy that incorporates simultaneous retraining on natural and synthetic data using an invariant shared representation and suitable target variables, while learning to handle unseen data from a different microscopy technology. We show that this variation of our Twin-VAE architecture outperforms both state-of-the-art transfer learning methodology in image processing and classical image processing techniques; the advantage persists even with strongly shortened training times and leads to satisfactory results in this domain. The source code is available at https://github.com/dstallmann/transfer_learning_twinvae, works cross-platform, and is open-source and free (MIT-licensed) software. We make the data sets available at https://pub.uni-bielefeld.de/record/2960030.
"Novel transfer learning schemes based on Siamese networks and synthetic data." Neural Computing & Applications 35(11): 8423-8436.
Many different methods are being adopted to improve educational standards through classroom monitoring. The developed world uses smart classrooms to enhance faculty efficiency based on accumulated learning outcomes and interests. Smart classroom boards, audio-visual aids, and multimedia are directly related to the smart classroom environment. Along with these facilities, more effort is required to monitor and analyze students' outcomes, teachers' performance, attendance records, and content delivery in on-campus classrooms. Further improvement in quality teaching and learning outcomes can be achieved by developing digital twins of on-campus classrooms. In this article, we propose DeepClassRooms, a digital twin framework for attendance and course content monitoring for the public-sector schools of Punjab, Pakistan.
DeepClassRooms is cost-effective and requires RFID readers and high-edge computing devices at the fog layer for attendance monitoring and content matching, using a convolutional neural network for both on-campus and online classes.
"DeepClassRooms: a deep learning based digital twin framework for on-campus class rooms." Saad Razzaq, Babar Shah, Farkhund Iqbal, Muhammad Ilyas, Fahad Maqbool, Alvaro Rocha. Pub Date: 2023-01-01 | DOI: 10.1007/s00521-021-06754-5. Neural Computing & Applications 35(11): 8017-8026.
Coronavirus (COVID-19) is a very contagious infection that has drawn the world's attention. Modeling such diseases can be extremely valuable in predicting their effects. Although classic statistical modeling may provide adequate models, it may fail to capture the data's intricacy. An automatic COVID-19 detection system based on computed tomography (CT) scans or X-ray images is effective, but a robust system design is challenging. In this study, we propose an intelligent healthcare system that integrates IoT and cloud technologies. This architecture uses smart connectivity sensors and deep learning (DL) for intelligent decision-making from the perspective of the smart city. The intelligent system tracks the status of patients in real time and delivers reliable, timely, and high-quality healthcare at low cost. COVID-19 detection experiments are performed using DL to test the viability of the proposed system. We use a sensor for recording, transferring, and tracking healthcare data. CT scan images from patients are sent to the cloud by IoT sensors, where the cognitive module is stored. The system determines patient status by examining the CT scan images, and the DL cognitive module makes the real-time decision on the possible course of action. When information is conveyed to the cognitive module, we use a state-of-the-art DL classification algorithm, ResNet50, to detect and classify whether patients are normal or infected with COVID-19. We validate the proposed system's robustness and effectiveness using two publicly available benchmark datasets (the Covid-Chestxray dataset and the Chex-Pert dataset). First, a dataset of 6000 images is prepared from these two datasets. The proposed system was trained on 80% of the combined data and tested on the remaining 20%, with tenfold cross-validation used for performance evaluation.
The results indicate that the proposed system achieves an accuracy of 98.6%, a sensitivity of 97.3%, a specificity of 98.2%, and an F1-score of 97.87%, outperforming existing state-of-the-art systems. The proposed system will be helpful in medical diagnosis research and healthcare systems, and it will also support medical experts in COVID-19 screening by providing a valuable second opinion.
"A smart healthcare framework for detection and monitoring of COVID-19 using IoT and cloud computing." Nidal Nasser, Qazi Emad-Ul-Haq, Muhammad Imran, Asmaa Ali, Imran Razzak, Abdulaziz Al-Helali. Pub Date: 2023-01-01 | DOI: 10.1007/s00521-021-06396-7. Neural Computing & Applications 35(19): 13775-13789.
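The four metrics this abstract reports all derive from the 2x2 confusion matrix of a binary classifier. As a reference sketch (the counts below are illustrative, not the paper's):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity, and F1-score
    from the counts of a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate: infected cases caught
    specificity = tn / (tn + fp)   # true negative rate: healthy cases cleared
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Illustrative counts: 100 positives (95 caught), 100 negatives (90 cleared)
acc, sens, spec, f1 = binary_metrics(tp=95, fp=10, tn=90, fn=5)
print(acc, sens, spec)  # 0.925 0.95 0.9
```

Reporting sensitivity and specificity alongside accuracy matters in screening settings, where the cost of a missed infection (a false negative) differs from that of a false alarm.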
Pub Date : 2023-01-01Epub Date: 2021-10-18DOI: 10.1007/s00521-021-06485-7
Dinh Phamtoan, Tai Vovan
This paper proposes a new model that interpolates a time series and forecasts it effectively for the future. The main contribution of this study is the combination of an optimization technique for the fuzzy clustering problem, using a genetic algorithm, with a forecasting model for fuzzy time series. First, the proposed model finds a suitable number of clusters for a series and optimizes the clustering problem with a genetic algorithm, using an improved Davies-Bouldin index as the objective function. Second, the study presents a method to establish the fuzzy relationship of each element to the established clusters. Finally, the developed model establishes the rule used to forecast the future. The steps of the proposed model are presented clearly and illustrated by a numerical example, and the model has been implemented as a MATLAB procedure. Applied to 3007 series with different characteristics and from different areas, the new model shows significant performance improvements over existing models on several evaluation parameters. In addition, we present an application of the proposed model to forecasting COVID-19 victims in Vietnam; it can perform similarly for other countries. The numerical examples and application show the potential of this research in the forecasting area.
{"title":"Building fuzzy time series model from unsupervised learning technique and genetic algorithm.","authors":"Dinh Phamtoan, Tai Vovan","doi":"10.1007/s00521-021-06485-7","DOIUrl":"10.1007/s00521-021-06485-7","url":null,"abstract":"<p><p>This paper proposes a new model to interpolate time series and forecast it effectively for the future. The important contribution of this study is the combination of optimal techniques for fuzzy clustering problem using genetic algorithm and forecasting model for fuzzy time series. Firstly, the proposed model finds the suitable number of clusters for a series and optimizes the clustering problem by the genetic algorithm using the improved Davies and Bouldin index as the objective function. Secondly, the study gives the method to establish the fuzzy relationship of each element to the established clusters. Finally, the developed model establishes the rule to forecast for the future. The steps of the proposed model are presented clearly and illustrated by the numerical example. Furthermore, it has been realized positively by the established MATLAB procedure. Performing for a lot of series (3007 series) with the differences about characteristics and areas, the new model has shown the significant performance in comparison with the existing models via some parameters to evaluate the built model. In addition, we also present an application of the proposed model in forecasting the COVID-19 victims in Vietnam that it can perform similarly for other countries. 
The numerical examples and application show potential in the forecasting area of this research.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 10","pages":"7235-7252"},"PeriodicalIF":4.5,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8522192/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9128773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-01-01Epub Date: 2022-11-17DOI: 10.1007/s00521-022-07998-5
A H Alamoodi, O S Albahri, A A Zaidan, H A Alsattar, B B Zaidan, A S Albahri
This research proposes a novel mobile health-based hospital selection framework for remote patients with multiple chronic diseases, based on wearable medical body sensors that use the Internet of Things. The proposed framework uses two powerful multi-criteria decision-making (MCDM) methods, namely the fuzzy-weighted zero-inconsistency method (q-ROFWZIC) for criteria weighting and the fuzzy decision by opinion score method (q-ROFDOSM) for hospital ranking. Both methods are developed in a q-rung orthopair fuzzy environment to address the uncertainty issues associated with the case study in this research. The other MCDM issues of multiple criteria, varying levels of significance and data variation are also addressed. The proposed framework comprises two main phases: identification and development. The identification phase discusses the selected telemedicine architecture, the patient dataset used and the integrated decision matrix. The development phase discusses criteria weighting by q-ROFWZIC and hospital ranking by q-ROFDOSM, together with their associated sub-processes. Weighting results by q-ROFWZIC indicate that the time-of-arrival criterion is the most significant across all experimental scenarios, with weights of (0.1837, 0.183, 0.230, 0.276, 0.335) for (q = 1, 3, 5, 7, 10), respectively. Ranking results indicate that Hospital H-4 is the best-ranked hospital in all experimental scenarios. Both methods were evaluated via systematic ranking and sensitivity analysis, confirming the validity of the proposed framework.
{"title":"Hospital selection framework for remote MCD patients based on fuzzy q-rung orthopair environment.","authors":"A H Alamoodi, O S Albahri, A A Zaidan, H A Alsattar, B B Zaidan, A S Albahri","doi":"10.1007/s00521-022-07998-5","DOIUrl":"10.1007/s00521-022-07998-5","url":null,"abstract":"<p><p>This research proposes a novel mobile health-based hospital selection framework for remote patients with multi-chronic diseases based on wearable body medical sensors that use the Internet of Things. The proposed framework uses two powerful multi-criteria decision-making (MCDM) methods, namely fuzzy-weighted zero-inconsistency and fuzzy decision by opinion score method for criteria weighting and hospital ranking. The development of both methods is based on a Q-rung orthopair fuzzy environment to address the uncertainty issues associated with the case study in this research. The other MCDM issues of multiple criteria, various levels of significance and data variation are also addressed. The proposed framework comprises two main phases, namely identification and development. The first phase discusses the telemedicine architecture selected, patient dataset used and decision matrix integrated. The development phase discusses criteria weighting by q-ROFWZIC and hospital ranking by q-ROFDOSM and their sub-associated processes. Weighting results by q-ROFWZIC indicate that the time of arrival criterion is the most significant across all experimental scenarios with (<i>0.1837, 0.183, 0.230, 0.276, 0.335</i>) for (<i>q</i> = <i>1, 3, 5, 7, 10</i>), respectively. Ranking results indicate that Hospital (H-4) is the best-ranked hospital in all experimental scenarios. 
Both methods were evaluated based on systematic ranking and sensitivity analysis, thereby confirming the validity of the proposed framework.</p>","PeriodicalId":49766,"journal":{"name":"Neural Computing & Applications","volume":"35 8","pages":"6185-6196"},"PeriodicalIF":4.5,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9672551/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9360563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}