Pub Date : 2025-02-13 | DOI: 10.1080/0954898X.2024.2443611
Optimization-assisted deep two-layer framework for DDoS attack detection and proposed mitigation in software defined network
Karthika Perumal, Karmel Arockiasamy
Network-Computation in Neural Systems, pp. 1-36
Security has become crucial as Internet of Things (IoT) applications proliferate. IoT vulnerabilities are widespread, as demonstrated by recent distributed denial-of-service (DDoS) attacks in which many IoT devices unwittingly participated. The new software-defined anything (SDx) paradigm offers a safe way to manage IoT devices. In this study, a five-phase SDN design is equipped with a DDoS attack detection and mitigation system. Data cleaning, a pre-processing step applied to the raw data, is crucial to the flow of information. Suitable features are selected from the extracted features using an augmented chi-square method. A deep two-layer architecture with four classifiers forms the detection stage, and the weights of the QNN are tuned using a recently developed hybrid optimization method, the MUAE approach. Regular data routing proceeds until the optimized QNN detects an attacker, at which point control passes to the attack-mitigation step. For training rates of 60%, 70%, 80%, and 90%, the model's accuracy is 94.273%, 94.860%, 94.93%, and 96.02%, respectively. Finally, the proposed system is compared against traditional methods to demonstrate its superiority in both attack detection and mitigation.
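The abstract does not specify how its "augmented" chi-square variant differs from the standard test, so the following is only a minimal sketch of plain chi-square feature scoring, the baseline the paper builds on; all function names and the toy data are illustrative, not from the paper.

```python
from collections import Counter

def chi_square_score(feature, labels):
    """Chi-square statistic between one discrete feature and class labels.

    This is the plain form used in standard feature selection; the paper's
    "augmented" variant is not described in the abstract.
    """
    n = len(labels)
    obs = Counter(zip(feature, labels))   # observed (feature value, label) counts
    f_tot = Counter(feature)
    l_tot = Counter(labels)
    score = 0.0
    for fv in f_tot:
        for lv in l_tot:
            expected = f_tot[fv] * l_tot[lv] / n
            score += (obs[(fv, lv)] - expected) ** 2 / expected
    return score

def select_top_k(feature_columns, labels, k):
    """Rank feature columns by chi-square score and keep the top-k indices."""
    scores = [(chi_square_score(col, labels), i)
              for i, col in enumerate(feature_columns)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]
```

A feature identical to the labels scores the maximum (here, the sample count), while a constant feature scores zero and is dropped first.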
Pub Date : 2025-02-11 | DOI: 10.1080/0954898X.2025.2452288
ZF-QDCNN: ZFNet and quantum dilated convolutional neural network based Alzheimer's disease detection using MRI images
Sharda Yashwant Salunkhe, Mahesh S Chavan
Network-Computation in Neural Systems, pp. 1-45
Alzheimer's disease (AD) is a severe neurological disorder that leads to irreversible memory loss. Early-stage Alzheimer's often presents with subtle memory issues that are difficult to distinguish from normal age-related changes. This research designs a novel detection model, the Zeiler and Fergus Quantum Dilated Convolutional Neural Network (ZF-QDCNN), for AD detection using Magnetic Resonance Imaging (MRI). Initially, the input MRI images are taken from a specific dataset and pre-processed using a Gaussian filter. Brain area segmentation is then performed with the Channel-wise Feature Pyramid Network for Medicine (CFPNet-M). After segmentation, relevant features are extracted, and AD classification is performed by the ZF-QDCNN, which integrates the Zeiler and Fergus Network (ZFNet) with the Quantum Dilated Convolutional Neural Network (QDCNN). The ZF-QDCNN model demonstrated promising performance, achieving an accuracy of 91.7%, a sensitivity of 90.7%, a specificity of 92.7%, and an F-measure of 91.8% in detecting AD. The model effectively identifies and classifies Alzheimer's disease in MRI images, highlighting its potential as a valuable tool for early diagnosis and management of the condition.
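The Gaussian-filter pre-processing step above is standard denoising: a normalized Gaussian kernel convolved with the image (applied row-wise and then column-wise, since the 2-D kernel is separable). A minimal 1-D sketch, with the kernel radius and edge handling chosen here for illustration rather than taken from the paper:

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel. Applied along rows and then columns,
    it realizes the separable 2-D Gaussian smoothing used for denoising."""
    if radius is None:
        radius = max(1, int(3 * sigma))   # common 3-sigma cutoff (an assumption)
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth_row(row, kernel):
    """Convolve one row of pixel values with the kernel, replicating edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(row) - 1)  # clamp at the borders
            acc += w * row[idx]
        out.append(acc)
    return out
```

Because the kernel is normalized, flat regions are left unchanged while isolated noisy spikes are attenuated.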
Pub Date : 2025-02-01 | Epub Date: 2024-09-25 | DOI: 10.1080/0954898X.2024.2404915
A novel approach for heart disease prediction using hybridized AITH2O algorithm and SANFIS classifier
Jayachitra Sekar, Prasanth Aruchamy
Network-Computation in Neural Systems, pp. 109-147
In today's world, heart disease threatens human life owing to high mortality and morbidity across the globe. Early prediction of heart disease supports timely treatment and better diagnostic recommendations from medical professionals. However, existing machine learning classifiers suffer from computational complexity and overfitting, which reduce the classification accuracy of the diagnostic system. To address these constraints, this work proposes a new hybrid optimization algorithm to improve classification accuracy and optimize computation time in smart healthcare applications. First, the optimal features are selected through the hybrid Arithmetic Optimization and Inter-Twinned Mutation-Based Harris Hawk Optimization (AITH2O) algorithm, which combines exploration and exploitation abilities and converges faster. It is then employed to tune the parameters of the Stabilized Adaptive Neuro-Fuzzy Inference System (SANFIS) classifier for accurate heart disease prediction. The Cleveland heart disease dataset is used to validate the efficacy of the proposed algorithm, with simulations carried out in the MATLAB 2020a environment. The results show that the proposed hybrid SANFIS classifier attains a superior accuracy of 99.28% and a true positive rate of 99.46% compared to existing state-of-the-art techniques.
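The two figures reported above, accuracy and true positive rate (sensitivity), come straight from the confusion counts of a binary classifier. A small sketch of how they are computed (function name and toy labels are illustrative):

```python
def accuracy_and_tpr(y_true, y_pred):
    """Accuracy and true positive rate (sensitivity) for binary labels,
    the two metrics the abstract reports for the SANFIS classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, tpr
```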
Pub Date : 2025-02-01 | DOI: 10.1080/0954898X.2025.2452280
Hybrid deep learning based stroke detection using CT images with routing in an IoT environment
Anchana Balakrishnannair Sreekumari, Arul Teen Yesudasan Paulsy
Network-Computation in Neural Systems, pp. 1-40
Stroke remains a leading global health concern, and early diagnosis with accurate identification of stroke lesions is essential for improving treatment outcomes and reducing long-term disability. Computed Tomography (CT) imaging is widely used in clinical settings for diagnosing stroke, assessing lesion size, and determining severity. However, accurate segmentation and early detection of stroke lesions in CT images remain challenging. Thus, a Jaccard_Residual SqueezeNet, which integrates the Jaccard index into Residual SqueezeNet, is proposed for predicting stroke from CT images with the integration of the Internet of Things (IoT). First, the brain CT image is routed to the Base Station (BS) using the Fractional Jellyfish Search Pelican Optimization Algorithm (FJSPOA), and preprocessing is performed with a median filter. Skull segmentation is then accomplished by ENet, followed by feature extraction. Finally, stroke is detected using the Jaccard_Residual SqueezeNet. For routing, the throughput, energy, distance, trust, and delay are 72.172 Mbps, 0.580 J, 22.243 m, 0.915, and 0.083 s, respectively, while the accuracy, sensitivity, precision, and F1-score for stroke detection are 0.902, 0.896, 0.916, and 0.906. These findings suggest that the Jaccard_Residual SqueezeNet offers a robust and efficient platform for stroke detection.
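The Jaccard index folded into the network above is the standard set-overlap measure (intersection over union) between a predicted mask and the ground truth. A minimal sketch over flattened binary masks (how the paper embeds it in the loss is not detailed in the abstract):

```python
def jaccard_index(pred_mask, true_mask):
    """Jaccard index (intersection over union) between two binary masks,
    given as flat sequences of 0/1 pixel values."""
    inter = sum(1 for p, t in zip(pred_mask, true_mask) if p and t)
    union = sum(1 for p, t in zip(pred_mask, true_mask) if p or t)
    return inter / union if union else 1.0  # two empty masks agree perfectly
```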
Pub Date : 2025-02-01 | Epub Date: 2024-09-20 | DOI: 10.1080/0954898X.2024.2392772
Optimized deep maxout for crowd anomaly detection: A hybrid optimization-based model
Rashmi Chaudhary, Manoj Kumar
Network-Computation in Neural Systems, pp. 148-173
Monitoring surveillance video is time-consuming, and the complexity of typical crowd behaviour in crowded scenes makes it even more challenging. This has sparked interest in computer vision-based anomaly detection. This study introduces a new crowd anomaly detection method with two main steps: visual attention detection and anomaly detection. The visual attention detection phase uses an enhanced bilateral texture-based methodology to pinpoint crucial areas in crowded scenes, improving anomaly detection precision. The anomaly detection phase then employs an optimized Deep Maxout Network to robustly identify unusual behaviours; the network's deep learning capabilities are essential for detecting complex patterns in diverse crowd scenarios. To enhance accuracy, the model is trained using the Battle Royale Coalesced Atom Search Optimization (BRCASO) algorithm, which fine-tunes the weights for superior performance, ensuring heightened detection accuracy and reliability. Finally, the effectiveness of the proposed work, implemented in Python, is contrasted with that of other traditional approaches using various performance metrics. The results show that the suggested model attains a detection accuracy of 97.28% at a learning rate of 90%, far superior to the detection accuracy of other models, including ASO (90.56%), BMO (91.39%), BES (88.63%), BRO (86.98%), and FFLY (89.59%).
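The maxout unit at the core of a Deep Maxout Network replaces a fixed activation with the maximum over several learned affine pieces of the input. A minimal single-unit sketch (weights and inputs here are toy values, not the paper's):

```python
def maxout(x, weight_groups, biases):
    """One maxout unit: the max over k affine transformations of input x.
    Stacking layers of such units yields a deep maxout network."""
    return max(
        sum(w * xi for w, xi in zip(ws, x)) + b
        for ws, b in zip(weight_groups, biases)
    )
```

With two pieces that each pass through one coordinate, the unit reduces to a plain max over the inputs, which illustrates why maxout can represent piecewise-linear activations like ReLU as special cases.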
Pub Date : 2025-02-01 | Epub Date: 2024-11-17 | DOI: 10.1080/0954898X.2024.2424248
HCAR-AM ground nut leaf net: Hybrid convolution-based adaptive ResNet with attention mechanism for detecting ground nut leaf diseases with adaptive segmentation
Annamalai Thiruvengadam Madhavi, Kamal Basha Rahimunnisa
Network-Computation in Neural Systems, pp. 38-78
Estimating the optimal answer is expensive for huge data resources, which decreases the functionality of the system. To solve these issues, a groundnut leaf disease identification model based on deep learning techniques is implemented. Images are collected from traditional databases and passed to a pre-processing stage, after which relevant features are drawn from the preprocessed images in two stages. In the first stage, the preprocessed image is segmented using an adaptive TransResunet++, whose variables are tuned with the designed Hybrid Position of Beluga Whale and Cuttle Fish (HP-BWCF) algorithm, and feature set 1 is obtained using KAZE feature points and binary descriptors. In the second stage, the same KAZE feature points and binary descriptors are extracted from the preprocessed image directly, yielding feature set 2. Feature sets 1 and 2 are then concatenated and given to the Hybrid Convolution-based Adaptive ResNet with Attention Mechanism (HCAR-AM) to detect groundnut leaf diseases effectively. The parameters of the HCAR-AM are tuned via the same HP-BWCF. The experimental outcome is analysed against various recently developed groundnut leaf disease detection approaches in accordance with various performance measures.
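The two-branch pipeline above ends by concatenating the per-image descriptors from the segmented and unsegmented branches before classification. A trivial but representative sketch of that fusion step (the descriptor contents and dimensions are placeholders):

```python
def fuse_feature_sets(branch1, branch2):
    """Concatenate per-image descriptor vectors from the two extraction
    branches (segmented and direct) before the classifier sees them."""
    if len(branch1) != len(branch2):
        raise ValueError("both branches must describe the same images")
    return [f1 + f2 for f1, f2 in zip(branch1, branch2)]
```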
Pub Date : 2025-02-01 | DOI: 10.1080/0954898X.2024.2391395
Deep learning and optimization enabled multi-objective for task scheduling in cloud computing
Dinesh Komarasamy, Siva Malar Ramaganthan, Dharani Molapalayam Kandaswamy, Gokuldhev Mony
Network-Computation in Neural Systems, pp. 79-108
In cloud computing (CC), task scheduling allocates each task to the most suitable resource for execution. This article proposes a task scheduling model utilizing multi-objective optimization and a deep learning (DL) model. Initially, multi-objective task scheduling is carried out for incoming user tasks utilizing the proposed hybrid fractional flamingo beetle optimization (FFBO), formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecast by a deep residual network (DRN). Thereafter, task scheduling is accomplished with DL using the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), a combination of DFNN and LSTM. When scheduling the workflow, both the task parameters and the virtual machine's (VM) live parameters are taken into consideration: task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU). The proposed DFNN-LSTM+FFBO model achieves a superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.
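The fitness function above combines reliability, cost, predicted energy, and makespan, but the abstract does not give the exact combination. A common choice, sketched here purely as an assumption, is a weighted scalarization in which lower makespan/cost/energy and higher reliability are better; the weights are placeholders:

```python
def scheduling_fitness(makespan, cost, energy, reliability,
                       weights=(0.25, 0.25, 0.25, 0.25)):
    """Scalarized multi-objective fitness over the four objectives named in
    the abstract. Lower is better; reliability enters as (1 - reliability)
    so that more reliable schedules score lower. The weighted-sum form and
    the equal weights are assumptions, not taken from the paper."""
    w_m, w_c, w_e, w_r = weights
    return w_m * makespan + w_c * cost + w_e * energy + w_r * (1 - reliability)
```

An optimizer like FFBO would evaluate this for each candidate schedule and keep the candidates with the smallest values.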
Pub Date : 2025-02-01 | Epub Date: 2024-02-12 | DOI: 10.1080/0954898X.2024.2310687
Haemorrhage diagnosis in colour fundus images using a fast-convolutional neural network based on a modified U-Net
Rathinavelu Sathiyaseelan, Krishnamoorthy Ranganathan, Ramesh Ramamoorthy, M Pedda Chennaiah
Network-Computation in Neural Systems, pp. 198-219
Retinal haemorrhage is an early indicator of diabetic retinopathy, making accurate detection essential for timely diagnosis. Addressing this need, this study proposes an enhanced machine-based diagnostic test for diabetic retinopathy through an updated U-Net framework adept at scrutinizing fundus images for signs of retinal haemorrhages. The customized U-Net underwent GPU training on the IDRiD database and was validated against the publicly available DIARETDB1 and IDRiD datasets. Given the complexity of the segmentation task, the study employed preprocessing techniques to improve image quality and data integrity. The trained network showed a remarkable performance boost, identifying haemorrhage regions with 80% sensitivity, 99.6% specificity, and 98.6% accuracy. Achieving an Intersection over Union (IoU) of 76.61% and a Dice coefficient of 86.51% underscores the system's competence, and the findings suggest the network could significantly alleviate ophthalmologists' workload. The study's outcomes signify substantial enhancements in diagnosing critical diabetic retinal conditions, marking a significant advancement in automated retinal haemorrhage detection for diabetic retinopathy.
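The IoU and Dice figures reported above are two views of the same mask overlap, related by IoU = Dice / (2 - Dice). A small sketch of both over flattened binary masks (function names are illustrative):

```python
def dice_coefficient(pred_mask, true_mask):
    """Dice coefficient between two binary masks: twice the intersection
    divided by the total number of foreground pixels in both masks."""
    inter = sum(1 for p, t in zip(pred_mask, true_mask) if p and t)
    total = sum(pred_mask) + sum(true_mask)
    return 2 * inter / total if total else 1.0

def iou_from_dice(dice):
    """Convert Dice to Intersection over Union: IoU = Dice / (2 - Dice)."""
    return dice / (2 - dice)
```

As a sanity check on the relation, the paper's Dice of 86.51% converts to an IoU of 0.8651 / (2 - 0.8651) ≈ 76.2%, close to the reported 76.61% (the two are averaged over images separately, so they need not match exactly).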
Pub Date : 2025-02-01 | Epub Date: 2024-11-27 | DOI: 10.1080/0954898X.2024.2429721
A review on real time implementation of soft computing techniques in thermal power plant
Love Kumar Thawait, Mukesh Kumar Singh
Network-Computation in Neural Systems, pp. 1-37
A thermal power plant generates electricity by burning fuel. As a significant component of the energy sector, thermal power plants face several issues that reduce productivity, and researchers have tried different mechanisms for improving thermal power plant production along varied dimensions. Given the diversity of dimensions considered by existing works, the present review endeavours to offer a comprehensive summary of them. To achieve this, the study reviews articles from 2019-2023 concerned with the use of soft computing (SC) methodologies, encompassing Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL), for enhancing the productivity of thermal power plants. Conventional AI-based approaches are comparatively evaluated for their contribution to improving thermal power plant production. A critical assessment then covers the year-wise distribution of studies and the dimensions on which they focus, helping future researchers determine which dimensions have received limited or high attention and plan appropriate research accordingly. Finally, future suggestions and research gaps are included to offer new stimulus for further investigation of AI in thermal power plants.
Pub Date : 2025-02-01Epub Date: 2024-03-06DOI: 10.1080/0954898X.2024.2309947
Sangeetha Alagumani, Uma Maheswari Natarajan
The 5th generation (5G) network is required to meet the growing demand for fast data speeds and the expanding number of customers. Apart from offering higher speeds, 5G will be employed in other industries such as the Internet of Things, broadcast services, and so on. Energy efficiency, scalability, resiliency, interoperability, and high data rate/low delay are the primary requirements of, and obstacles for, 5G cellular networks. Due to IEEE 802.11p's constraints, such as limited coverage, inability to handle dense vehicle networks, signal congestion, and connectivity outages, efficient data distribution is a big challenge (the MAC contention problem). In this research, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P) services are used to overcome bandwidth constraints in very dense cellular vehicle-to-everything (C-V2X) network communications. Clustering is done through multi-layered multi-access edge clustering, which helps reduce vehicle contention. Fuzzy logic and Q-learning are used for the multi-hop route selection system. The proposed protocol adjusts the number of cluster-head nodes using a Q-learning algorithm, allowing it to quickly adapt to a range of scenarios with varying bandwidths and vehicle densities.
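As an illustration of the last step, here is a minimal tabular Q-learning sketch for tuning a cluster-head count against observed vehicle density. The state discretization, reward shape, and the `ClusterHeadQAgent`/`simulate` names are assumptions for this toy example, not the paper's actual protocol:

```python
import random


class ClusterHeadQAgent:
    """Hypothetical tabular Q-learning agent for cluster-head tuning.

    States are discretized vehicle-density levels; actions adjust the
    cluster-head count by -1, 0, or +1 (an assumption for illustration).
    """

    ACTIONS = (-1, 0, +1)

    def __init__(self, n_density_levels=5, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q-table: one row per density level, one column per action.
        self.q = [[0.0] * len(self.ACTIONS) for _ in range(n_density_levels)]

    def choose(self, state):
        if random.random() < self.epsilon:        # epsilon-greedy exploration
            return random.randrange(len(self.ACTIONS))
        row = self.q[state]                       # greedy exploitation
        return row.index(max(row))

    def update(self, state, action_idx, reward, next_state):
        # Standard Q-learning temporal-difference update.
        td_target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action_idx] += self.alpha * (
            td_target - self.q[state][action_idx]
        )


def simulate(agent, steps=2000, heads=3, seed=0):
    """Toy environment: reward is highest when the cluster-head count
    tracks the density level (denser traffic -> more heads)."""
    rng = random.Random(seed)
    density = rng.randrange(len(agent.q))
    for _ in range(steps):
        a = agent.choose(density)
        heads = max(1, heads + agent.ACTIONS[a])
        reward = -abs(heads - (density + 1))      # ideal: heads ~ density + 1
        next_density = rng.randrange(len(agent.q))
        agent.update(density, a, reward, next_density)
        density = next_density
    return heads
```

In a real C-V2X deployment the reward would instead be derived from measured contention and delivery ratios, and the fuzzy-logic component (omitted here) would rank candidate next hops along each route.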
{"title":"Q-learning and fuzzy logic multi-tier multi-access edge clustering for 5g v2x communication.","authors":"Sangeetha Alagumani, Uma Maheswari Natarajan","doi":"10.1080/0954898X.2024.2309947","DOIUrl":"10.1080/0954898X.2024.2309947","url":null,"abstract":"<p><p>The 5th generation (5 G) network is required to meet the growing demand for fast data speeds and the expanding number of customers. Apart from offering higher speeds, 5 G will be employed in other industries such as the Internet of Things, broadcast services, and so on. Energy efficiency, scalability, resiliency, interoperability, and high data rate/low delay are the primary requirements and obstacles of 5 G cellular networks. Due to IEEE 802.11p's constraints, such as limited coverage, inability to handle dense vehicle networks, signal congestion, and connectivity outages, efficient data distribution is a big challenge (MAC contention problem). In this research, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I) and vehicle-to-pedestrian (V2P) services are used to overcome bandwidth constraints in very dense network communications from cellular tool to everything (C-V2X). Clustering is done through multi-layered multi-access edge clustering, which helps reduce vehicle contention. Fuzzy logic and Q-learning and intelligence are used for a multi-hop route selection system. 
The proposed protocol adjusts the number of cluster-head nodes using a Q-learning algorithm, allowing it to quickly adapt to a range of scenarios with varying bandwidths and vehicle densities.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"174-197"},"PeriodicalIF":1.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}