Pub Date: 2025-02-01 | Epub Date: 2024-09-25 | DOI: 10.1080/0954898X.2024.2404915
A novel approach for heart disease prediction using hybridized AITH2O algorithm and SANFIS classifier.
Jayachitra Sekar, Prasanth Aruchamy
Heart disease threatens human life across the globe owing to its high mortality and morbidity. Early prediction of heart disease enables timely treatment of patients and better diagnostic recommendations from medical professionals. However, existing machine learning classifiers suffer from computational complexity and overfitting, which reduce the classification accuracy of the diagnostic system. To address these constraints, this work proposes a new hybrid optimization algorithm that improves classification accuracy and reduces computation time in smart healthcare applications. First, optimal features are selected through the hybrid Arithmetic Optimization and Inter-Twinned Mutation-Based Harris Hawk Optimization (AITH2O) algorithm, which combines the exploration and exploitation strengths of both constituent methods and converges faster. The algorithm is then employed to tune the parameters of the Stabilized Adaptive Neuro-Fuzzy Inference System (SANFIS) classifier for accurate heart disease prediction. The Cleveland heart disease dataset is used to validate the efficacy of the proposed algorithm, with simulations carried out in the MATLAB 2020a environment. The results show that the proposed hybrid SANFIS classifier attains a superior accuracy of 99.28% and a true positive rate of 99.46% compared with existing state-of-the-art techniques.
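The feature-selection step pairs two metaheuristics over binary feature masks. As a rough illustration of that idea, here is a minimal numpy sketch of population-based binary feature selection; the fitness function, the bit-flip "exploration" move, and the pull-toward-best "mutation" are simplified placeholders, not the authors' AITH2O update equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Toy fitness: mean absolute feature-target correlation of the selected
    features, penalized by subset size (a placeholder for the paper's
    classifier-accuracy-based objective)."""
    if mask.sum() == 0:
        return -np.inf
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in np.flatnonzero(mask)])
    return corr.mean() - 0.01 * mask.sum()

def select_features(X, y, pop=20, iters=50):
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5                    # random binary masks
    best = max(P, key=lambda m: fitness(m, X, y)).copy()
    for _ in range(iters):
        for i in range(pop):
            trial = P[i].copy()
            trial[rng.integers(n)] ^= True            # exploration: flip one bit
            j = rng.integers(n)
            trial[j] = best[j]                        # pull one bit toward the best
            if fitness(trial, X, y) > fitness(P[i], X, y):
                P[i] = trial
        cand = max(P, key=lambda m: fitness(m, X, y))
        if fitness(cand, X, y) > fitness(best, X, y):
            best = cand.copy()
    return best

X = rng.random((100, 13))                  # 13 features, as in Cleveland
y = (X[:, 0] + X[:, 3] > 1).astype(float)  # synthetic target
print(np.flatnonzero(select_features(X, y)))
```

In the paper the same optimizer is reused to tune the SANFIS parameters; this sketch stops at feature selection.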
{"title":"A novel approach for heart disease prediction using hybridized AITH<sup>2</sup>O algorithm and SANFIS classifier.","authors":"Jayachitra Sekar, Prasanth Aruchamy","doi":"10.1080/0954898X.2024.2404915","DOIUrl":"10.1080/0954898X.2024.2404915","url":null,"abstract":"<p><p>In today's world, heart disease threatens human life owing to higher mortality and morbidity across the globe. The earlier prediction of heart disease engenders interoperability for the treatment of patients and offers better diagnostic recommendations from medical professionals. However, the existing machine learning classifiers suffer from computational complexity and overfitting problems, which reduces the classification accuracy of the diagnostic system. To address these constraints, this work proposes a new hybrid optimization algorithm to improve the classification accuracy and optimize computation time in smart healthcare applications. Primarily, the optimal features are selected through the hybrid Arithmetic Optimization and Inter-Twinned Mutation-Based Harris Hawk Optimization (AITH<sup>2</sup>O) algorithm. The proposed hybrid AITH<sup>2</sup>O algorithm entails advantages of both exploration and exploitation abilities and acquires faster convergence. It is further employed to tune the parameters of the Stabilized Adaptive Neuro-Fuzzy Inference System (SANFIS) classifier for predicting heart disease accurately. The Cleveland heart disease dataset is utilized to validate the efficacy of the proposed algorithm. The simulation is carried out using MATLAB 2020a environment. The simulation results show that the proposed hybrid SANFIS classifier attains a superior accuracy of 99.28% and true positive rate of 99.46% compared to existing state-of-the-art techniques.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"109-147"},"PeriodicalIF":1.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142332540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | DOI: 10.1080/0954898X.2025.2452280
Hybrid deep learning based stroke detection using CT images with routing in an IoT environment.
Anchana Balakrishnannair Sreekumari, Arul Teen Yesudasan Paulsy
Stroke remains a leading global health concern, and early diagnosis and accurate identification of stroke lesions are essential for improving treatment outcomes and reducing long-term disability. Computed Tomography (CT) imaging is widely used in clinical settings for diagnosing stroke, assessing lesion size, and determining severity. However, accurate segmentation and early detection of stroke lesions in CT images remain challenging. This work therefore proposes a Jaccard_Residual SqueezeNet, which integrates the Jaccard index into Residual SqueezeNet, for predicting stroke from CT images with Internet of Things (IoT) integration. First, the brain CT image is routed to the Base Station (BS) using the Fractional Jellyfish Search Pelican Optimization Algorithm (FJSPOA), and preprocessing is performed with a median filter. Skull segmentation is then accomplished by ENet, followed by feature extraction. Finally, stroke is detected using the Jaccard_Residual SqueezeNet. For routing, the throughput, energy, distance, trust, and delay obtained are 72.172 Mbps, 0.580 J, 22.243 m, 0.915, and 0.083 s, respectively. For stroke detection, the accuracy, sensitivity, precision, and F1-score are 0.902, 0.896, 0.916, and 0.906. These findings suggest that the Jaccard_Residual SqueezeNet offers a robust and efficient platform for stroke detection.
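The model's distinctive element is folding the Jaccard index into Residual SqueezeNet. The sketch below shows the index itself and a soft-Jaccard loss, one common way such an index enters a network objective; the integration details are assumptions, not the authors' code.

```python
import numpy as np

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard index (IoU) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def jaccard_loss(prob, target, eps=1e-7):
    """Soft Jaccard loss on predicted probabilities -- one common way such
    an index is folded into a CNN objective (an assumption here)."""
    inter = (prob * target).sum()
    union = prob.sum() + target.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(jaccard_index(pred, gt), 3))   # 0.5
```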
{"title":"Hybrid deep learning based stroke detection using CT images with routing in an IoT environment.","authors":"Anchana Balakrishnannair Sreekumari, Arul Teen Yesudasan Paulsy","doi":"10.1080/0954898X.2025.2452280","DOIUrl":"https://doi.org/10.1080/0954898X.2025.2452280","url":null,"abstract":"<p><p>Stroke remains a leading global health concern and early diagnosis and accurate identification of stroke lesions are essential for improving treatment outcomes and reducing long-term disabilities. Computed Tomography (CT) imaging is widely used in clinical settings for diagnosing stroke, assessing lesion size, and determining the severity. However, the accurate segmentation and early detection of stroke lesions in CT images remain challenging. Thus, a Jaccard_Residual SqueezeNet is proposed for predicting stroke from CT images with the integration of the Internet of Things (IoT). The Jaccard_Residual SqueezeNet is the integration of the Jaccard index in Residual SqueezeNet. Firstly, the brain CT image is routed to the Base Station (BS) using the Fractional Jellyfish Search Pelican Optimization Algorithm (FJSPOA) and preprocessing is accomplished by median filter. Then, the skull segmentation is accomplished by ENet and then feature extraction is done. Lastly, Stroke is detected using the Jaccard_Residual SqueezeNet. The values of throughput, energy, distance, trust, and delay determined in terms of routing are 72.172 Mbps, 0.580J, 22.243 m, 0.915, and 0.083S. Also, the accuracy, sensitivity, precision, and F1-score for stroke detection are 0.902, 0.896, 0.916, and 0.906. These findings suggest that Jaccard_Residual SqueezeNet offers a robust and efficient platform for stroke detection.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1-40"},"PeriodicalIF":1.1,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143076490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | Epub Date: 2024-09-20 | DOI: 10.1080/0954898X.2024.2392772
Optimized deep maxout for crowd anomaly detection: A hybrid optimization-based model.
Rashmi Chaudhary, Manoj Kumar
Monitoring surveillance video is time-consuming, and the complexity of typical crowd behaviour in crowded scenes makes it more challenging still, which has spurred interest in computer vision-based anomaly detection. This study introduces a new crowd anomaly detection method with two main steps: visual attention detection and anomaly detection. The visual attention detection phase uses an enhanced bilateral texture-based methodology to pinpoint crucial areas in crowded scenes, improving anomaly detection precision. The anomaly detection phase then employs an optimized deep maxout network to robustly identify unusual behaviours; the network's deep learning capacity is essential for detecting complex patterns across diverse crowd scenarios. To enhance accuracy, the model is trained using the proposed Battle Royale Coalesced Atom Search Optimization (BRCASO) algorithm, which fine-tunes the network weights for heightened detection accuracy and reliability. The effectiveness of the proposed method, implemented in Python, is contrasted with traditional approaches using various performance metrics. The results show that the proposed model attains a detection accuracy of 97.28% at a learning rate of 90%, substantially higher than that of other models: ASO = 90.56%, BMO = 91.39%, BES = 88.63%, BRO = 86.98%, and FFLY = 89.59%.
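A maxout unit outputs the maximum over k affine pieces, which is what a deep maxout network stacks. A minimal numpy sketch of one maxout layer (the layer sizes and k are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

class Maxout:
    """Maxout layer: k linear pieces per output unit, max taken over pieces."""
    def __init__(self, d_in, d_out, k=3):
        self.W = rng.standard_normal((k, d_in, d_out)) * 0.1
        self.b = np.zeros((k, d_out))

    def __call__(self, x):
        # z has shape (k, batch, d_out); the activation is the max over pieces
        z = np.einsum('bi,kio->kbo', x, self.W) + self.b[:, None, :]
        return z.max(axis=0)

layer = Maxout(d_in=64, d_out=32, k=3)
print(layer(rng.standard_normal((8, 64))).shape)   # (8, 32)
```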
{"title":"Optimized deep maxout for crowd anomaly detection: A hybrid optimization-based model.","authors":"Rashmi Chaudhary, Manoj Kumar","doi":"10.1080/0954898X.2024.2392772","DOIUrl":"10.1080/0954898X.2024.2392772","url":null,"abstract":"<p><p>Monitoring Surveillance video is really time-consuming, and the complexity of typical crowd behaviour in crowded situations makes this even more challenging. This has sparked a curiosity about computer vision-based anomaly detection. This study introduces a new crowd anomaly detection method with two main steps: Visual Attention Detection and Anomaly Detection. The Visual Attention Detection phase uses an Enhanced Bilateral Texture-Based Methodology to pinpoint crucial areas in crowded scenes, improving anomaly detection precision. Next, the Anomaly Detection phase employs Optimized Deep Maxout Network to robustly identify unusual behaviours. This network's deep learning capabilities are essential for detecting complex patterns in diverse crowd scenarios. To enhance accuracy, the model is trained using the innovative Battle Royale Coalesced Atom Search Optimization (BRCASO) algorithm, which fine-tunes optimal weights for superior performance, ensuring heightened detection accuracy and reliability. Lastly, using various performance metrics, the suggested work's effectiveness will be contrasted with that of the other traditional approaches. The proposed crowd anomaly detection is implemented in Python. On observing the result showed that the suggested model attains a detection accuracy of 97.28% at a learning rate of 90%, which is much superior than the detection accuracy of other models, including ASO = 90.56%, BMO = 91.39%, BES = 88.63%, BRO = 86.98%, and FFLY = 89.59%.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"148-173"},"PeriodicalIF":1.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142301214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | Epub Date: 2024-02-12 | DOI: 10.1080/0954898X.2024.2310687
Haemorrhage diagnosis in colour fundus images using a fast-convolutional neural network based on a modified U-Net.
Rathinavelu Sathiyaseelan, Krishnamoorthy Ranganathan, Ramesh Ramamoorthy, M Pedda Chennaiah
Retinal haemorrhage is an early indicator of diabetic retinopathy, so accurate detection is essential for timely diagnosis. Addressing this need, this study proposes an enhanced machine-based diagnostic test for diabetic retinopathy through an updated UNet framework that scrutinizes fundus images for signs of retinal haemorrhage. The customized UNet was trained on GPU using the IDRiD database and validated against the publicly available DIARETDB1 and IDRiD datasets. Given the complexity of the segmentation task, the study employed preprocessing techniques to improve image quality and data integrity. The trained network accurately identifies haemorrhage regions with 80% sensitivity, 99.6% specificity, and 98.6% accuracy, and achieves an Intersection over Union (IoU) of 76.61% and a Dice coefficient of 86.51%. These results confirm the network's reliability and its potential to significantly reduce ophthalmologists' workload, marking a notable advance in automated retinal haemorrhage detection for diabetic retinopathy.
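All the reported figures are standard confusion-matrix and overlap statistics. For reference, here is how sensitivity, specificity, accuracy, IoU, and Dice follow from pixel counts; the counts below are illustrative, not the paper's.

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Standard pixel-wise metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                    # sensitivity (recall)
    spec = tn / (tn + fp)                    # specificity
    acc  = (tp + tn) / (tp + fp + tn + fn)   # accuracy
    iou  = tp / (tp + fp + fn)               # Intersection over Union
    dice = 2 * tp / (2 * tp + fp + fn)       # Dice coefficient
    return sens, spec, acc, iou, dice

# Illustrative pixel counts only:
print(segmentation_metrics(tp=800, fp=120, tn=98000, fn=200))
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why both rise together in the reported results.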
{"title":"Haemorrhage diagnosis in colour fundus images using a fast-convolutional neural network based on a modified U-Net.","authors":"Rathinavelu Sathiyaseelan, Krishnamoorthy Ranganathan, Ramesh Ramamoorthy, M Pedda Chennaiah","doi":"10.1080/0954898X.2024.2310687","DOIUrl":"10.1080/0954898X.2024.2310687","url":null,"abstract":"<p><p>Retinal haemorrhage stands as an early indicator of diabetic retinopathy, necessitating accurate detection for timely diagnosis. Addressing this need, this study proposes an enhanced machine-based diagnostic test for diabetic retinopathy through an updated UNet framework, adept at scrutinizing fundus images for signs of retinal haemorrhages. The customized UNet underwent GPU training using the IDRiD database, validated against the publicly available DIARETDB1 and IDRiD datasets. Emphasizing the complexity of segmentation, the study employed preprocessing techniques, augmenting image quality and data integrity. Subsequently, the trained neural network showcased a remarkable performance boost, accurately identifying haemorrhage regions with 80% sensitivity, 99.6% specificity, and 98.6% accuracy. The experimental findings solidify the network's reliability, showcasing potential to alleviate ophthalmologists' workload significantly. Notably, achieving an Intersection over Union (IoU) of 76.61% and a Dice coefficient of 86.51% underscores the system's competence. The study's outcomes signify substantial enhancements in diagnosing critical diabetic retinal conditions, promising profound improvements in diagnostic accuracy and efficiency, thereby marking a significant advancement in automated retinal haemorrhage detection for diabetic retinopathy.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"198-219"},"PeriodicalIF":1.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139725025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | DOI: 10.1080/0954898X.2024.2391395
Deep learning and optimization enabled multi-objective for task scheduling in cloud computing.
Dinesh Komarasamy, Siva Malar Ramaganthan, Dharani Molapalayam Kandaswamy, Gokuldhev Mony
In cloud computing (CC), task scheduling allocates each task to the most suitable resource for execution. This article proposes a task scheduling model that combines multi-objective optimization with a deep learning (DL) model. Initially, multi-objective scheduling of incoming user tasks is carried out using the proposed hybrid fractional flamingo beetle optimization (FFBO), formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). The fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecast by a deep residual network (DRN). Thereafter, task scheduling is accomplished with the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), a combination of DFNN and LSTM. When scheduling the workflow, both task parameters and the virtual machine's (VM) live parameters are taken into consideration: task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU) usage. The proposed DFNN-LSTM+FFBO model achieves superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.
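A hedged sketch of the scalarized multi-objective fitness described above, rewarding reliability and penalizing cost, predicted energy, and makespan; the equal weights and the assumption of pre-normalized inputs are illustrative, since the paper's exact formulation is not reproduced here.

```python
def scheduling_fitness(reliability, cost, predicted_energy, makespan,
                       weights=(0.25, 0.25, 0.25, 0.25)):
    """Scalarized fitness: higher reliability is better; lower cost,
    energy, and makespan are better. Inputs assumed normalized to [0, 1]."""
    w1, w2, w3, w4 = weights
    return w1 * reliability - w2 * cost - w3 * predicted_energy - w4 * makespan

# Pick the candidate schedule with the best fitness (toy candidates):
candidates = [
    dict(reliability=0.9, cost=0.4, predicted_energy=0.5, makespan=0.3),
    dict(reliability=0.8, cost=0.2, predicted_energy=0.4, makespan=0.2),
]
print(max(candidates, key=lambda c: scheduling_fitness(**c)))
```

In the paper this score would drive the FFBO population updates, with the predicted-energy term supplied by the DRN rather than given directly.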
{"title":"Deep learning and optimization enabled multi-objective for task scheduling in cloud computing.","authors":"Dinesh Komarasamy, Siva Malar Ramaganthan, Dharani Molapalayam Kandaswamy, Gokuldhev Mony","doi":"10.1080/0954898X.2024.2391395","DOIUrl":"10.1080/0954898X.2024.2391395","url":null,"abstract":"<p><p>In cloud computing (CC), task scheduling allocates the task to best suitable resource for execution. This article proposes a model for task scheduling utilizing the multi-objective optimization and deep learning (DL) model. Initially, the multi-objective task scheduling is carried out by the incoming user utilizing the proposed hybrid fractional flamingo beetle optimization (FFBO) which is formed by integrating dung beetle optimization (DBO), flamingo search algorithm (FSA) and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan, the predicted energy is forecasted by a deep residual network (DRN). Thereafter, task scheduling is accomplished based on DL using the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), which is the combination of DFNN and LSTM. Moreover, when scheduling the workflow, the task parameters and the virtual machine's (VM) live parameters are taken into consideration. Task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU). The proposed model DFNN-LSTM+FFBO has achieved superior makespan, energy, and resource utilization of 0.188, 0.950J, and 0.238, respectively.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"79-108"},"PeriodicalIF":1.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142009908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | Epub Date: 2024-11-17 | DOI: 10.1080/0954898X.2024.2424248
HCAR-AM ground nut leaf net: Hybrid convolution-based adaptive ResNet with attention mechanism for detecting ground nut leaf diseases with adaptive segmentation.
Annamalai Thiruvengadam Madhavi, Kamal Basha Rahimunnisa
Estimating the optimal solution over huge data resources is expensive and degrades system performance. To address this, a groundnut leaf disease identification model based on deep learning techniques is implemented. Images are collected from standard databases and passed to a preprocessing stage, after which relevant features are drawn from the preprocessed images in two stages. In the first stage, the preprocessed image is segmented using an adaptive TransResunet++, whose variables are tuned by the designed Hybrid Position of Beluga Whale and Cuttle Fish (HP-BWCF) algorithm, and feature set 1 is obtained using KAZE feature points and binary descriptors. In the second stage, the same KAZE feature points and binary descriptors are extracted from the preprocessed image directly, yielding feature set 2. Feature sets 1 and 2 are then concatenated and given to the Hybrid Convolution-based Adaptive ResNet with Attention Mechanism (HCAR-AM) to detect groundnut leaf diseases effectively; the HCAR-AM parameters are tuned via the same HP-BWCF. The experimental outcomes are compared against various recently developed groundnut leaf disease detection approaches over various performance measures.
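KAZE keypoints with float descriptors and AKAZE's binary (MLDB) descriptors are both available in OpenCV, so the two-stream extraction can be sketched as below; the synthetic image and the mean-pool-then-concatenate fusion are assumptions standing in for the paper's pipeline, not its exact fusion rule.

```python
# pip install opencv-python numpy
import cv2
import numpy as np

# Synthetic grayscale image standing in for a preprocessed leaf image
img = np.zeros((256, 256), np.uint8)
cv2.circle(img, (128, 128), 60, 255, -1)
cv2.rectangle(img, (30, 30), (90, 90), 180, -1)

kaze = cv2.KAZE_create()        # KAZE keypoints, float descriptors
akaze = cv2.AKAZE_create()      # AKAZE keypoints, binary (MLDB) descriptors
kp1, desc_kaze = kaze.detectAndCompute(img, None)
kp2, desc_bin = akaze.detectAndCompute(img, None)

# One simple fusion into a fixed-length vector: mean-pool each descriptor
# matrix and concatenate (an illustrative assumption).
feat_parts = []
for desc in (desc_kaze, desc_bin):
    if desc is not None and len(desc):
        feat_parts.append(desc.mean(axis=0))
feat = np.concatenate(feat_parts)
print(len(kp1), len(kp2), feat.shape)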
{"title":"HCAR-AM ground nut leaf net: Hybrid convolution-based adaptive ResNet with attention mechanism for detecting ground nut leaf diseases with adaptive segmentation.","authors":"Annamalai Thiruvengadam Madhavi, Kamal Basha Rahimunnisa","doi":"10.1080/0954898X.2024.2424248","DOIUrl":"10.1080/0954898X.2024.2424248","url":null,"abstract":"<p><p>Estimating the optimal answer is expensive for huge data resources that decrease the functionality of the system. To solve these issues, the latest groundnut leaf disorder identification model by deep learning techniques is implemented. The images are collected from traditional databases, and then they are given to the pre-processing stage. Then, relevant features are drawn out from the preprocessed images in two stages. In the first stage, the preprocessed image is segmented using adaptive TransResunet++, where the variables are tuned with the help of designed Hybrid Position of Beluga Whale and Cuttle Fish (HP-BWCF) and finally get the feature set 1 using Kaze Feature Points and Binary Descriptors. In the second stage, the same Kaze feature points and the binary descriptors are extracted from the preprocessed image separately, and then obtain feature set 2. Then, the extracted feature sets 1 and 2 are concatenated and given to the Hybrid Convolution-based Adaptive Resnet with Attention Mechanism (HCAR-AM) to detect the ground nut leaf diseases very effectively. The parameters from this HCAR-AM are tuned via the same HP-BWCF. The experimental outcome is analysed over various recently developed ground nut leaf disease detection approaches in accordance with various performance measures.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"38-78"},"PeriodicalIF":1.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142649538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | Epub Date: 2024-11-27 | DOI: 10.1080/0954898X.2024.2429721
A review on real time implementation of soft computing techniques in thermal power plant.
Love Kumar Thawait, Mukesh Kumar Singh
A thermal power plant generates electricity by burning fuel. As a significant component of the energy sector, thermal power plants face several issues that reduce productivity, and researchers have applied different mechanisms to improve thermal power plant production along varied dimensions. Given the diversity of dimensions considered by existing works, the present review provides a comprehensive summary of them. To this end, the study reviews articles published from 2019 to 2023 on the use of soft computing (SC) methodologies, encompassing artificial intelligence, machine learning (ML), and deep learning (DL), for enhancing the productivity of thermal power plants along various dimensions. Conventional AI-based approaches are comparatively evaluated for their contribution to improving thermal power plant production. A critical assessment then covers the year-wise distribution of these studies and the dimensions they focus on, helping future researchers identify which dimensions have received limited or extensive attention so that appropriate research can be undertaken. Finally, suggestions and research gaps are included to offer new stimulus for further investigation of AI in thermal power plants.
{"title":"A review on real time implementation of soft computing techniques in thermal power plant.","authors":"Love Kumar Thawait, Mukesh Kumar Singh","doi":"10.1080/0954898X.2024.2429721","DOIUrl":"10.1080/0954898X.2024.2429721","url":null,"abstract":"<p><p>Thermal Power Plant is a common power plant that generates power by fuel-burning to produce electricity. Being a significant component of the energy sector, the Thermal Power Plant faces several issues that lead to reduced productivity. Conventional researchers have tried using different mechanisms for improvising the production of Thermal Power Plants in varied dimensions. Due to the diverse dimensions considered by existing works, the present review endeavours to afford a comprehensive summary of these works. To achieve this, the study reviews articles in the range (2019-2023) that are allied with the utility of SC methodologies (encompassing AI-ML (Machine Learning) and DL (Deep Learning) in enhancing the productivity of Thermal Power Plants by various dimensions. The conventional AI-based approaches are comparatively evaluated for effective contribution in improvising Thermal Power Plant production. Following this, a critical assessment encompasses the year-wise distribution and varied dimensions focussed by traditional studies in this area. This would support future researchers in determining the dimensions that have attained limited and high focus based on which appropriate research works can be performed. Finally, future suggestions and research gaps are included to offer new stimulus for further investigation of AI in Thermal Power Plants.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1-37"},"PeriodicalIF":1.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142734747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | Epub Date: 2024-03-06 | DOI: 10.1080/0954898X.2024.2309947
Q-learning and fuzzy logic multi-tier multi-access edge clustering for 5G V2X communication.
Sangeetha Alagumani, Uma Maheswari Natarajan
The fifth-generation (5G) network is required to meet the growing demand for fast data speeds and the expanding number of customers. Apart from offering higher speeds, 5G will be employed in other domains such as the Internet of Things and broadcast services. Energy efficiency, scalability, resiliency, interoperability, and high data rate with low delay are the primary requirements of, and obstacles for, 5G cellular networks. Owing to IEEE 802.11p's constraints, such as limited coverage, inability to handle dense vehicle networks, signal congestion, and connectivity outages, efficient data distribution is a major challenge (the MAC contention problem). In this research, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P) services are used to overcome bandwidth constraints in very dense cellular vehicle-to-everything (C-V2X) network communications. Clustering is done through multi-layered multi-access edge clustering, which helps reduce vehicle contention, and fuzzy logic and Q-learning are used for multi-hop route selection. The proposed protocol adjusts the number of cluster-head nodes using a Q-learning algorithm, allowing it to adapt quickly to a range of scenarios with varying bandwidths and vehicle densities.
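The cluster-head adjustment can be viewed as tabular Q-learning over coarse density states. A minimal sketch under that reading; the state/action granularity, random density drift, and toy reward are assumptions, not the protocol's measured feedback.

```python
import numpy as np

rng = np.random.default_rng(2)

# States: coarse vehicle-density levels 0..4; actions: cluster-head count 1..5.
N_STATES, N_ACTIONS = 5, 5
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(density, n_heads):
    # Toy reward: denser traffic wants more cluster heads; penalize the
    # mismatch (a placeholder for measured contention/overhead).
    return -abs(n_heads - (density + 1))

state = rng.integers(N_STATES)
for _ in range(20000):
    # epsilon-greedy choice of cluster-head count
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[state].argmax())
    r = reward(state, n_heads=a + 1)
    nxt = rng.integers(N_STATES)   # density drift, modelled as random here
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

print(Q.argmax(axis=1) + 1)   # learned head count per density level
```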
{"title":"Q-learning and fuzzy logic multi-tier multi-access edge clustering for 5g v2x communication.","authors":"Sangeetha Alagumani, Uma Maheswari Natarajan","doi":"10.1080/0954898X.2024.2309947","DOIUrl":"10.1080/0954898X.2024.2309947","url":null,"abstract":"<p><p>The 5th generation (5 G) network is required to meet the growing demand for fast data speeds and the expanding number of customers. Apart from offering higher speeds, 5 G will be employed in other industries such as the Internet of Things, broadcast services, and so on. Energy efficiency, scalability, resiliency, interoperability, and high data rate/low delay are the primary requirements and obstacles of 5 G cellular networks. Due to IEEE 802.11p's constraints, such as limited coverage, inability to handle dense vehicle networks, signal congestion, and connectivity outages, efficient data distribution is a big challenge (MAC contention problem). In this research, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I) and vehicle-to-pedestrian (V2P) services are used to overcome bandwidth constraints in very dense network communications from cellular tool to everything (C-V2X). Clustering is done through multi-layered multi-access edge clustering, which helps reduce vehicle contention. Fuzzy logic and Q-learning and intelligence are used for a multi-hop route selection system. The proposed protocol adjusts the number of cluster-head nodes using a Q-learning algorithm, allowing it to quickly adapt to a range of scenarios with varying bandwidths and vehicle densities.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"174-197"},"PeriodicalIF":1.6,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-20 | DOI: 10.1080/0954898X.2025.2451388
Performance analysis of image retrieval system using deep learning techniques.
Selvalakshmi B, Hemalatha K, Kumarganesh S, Vijayalakshmi P
Image retrieval is the process of retrieving images relevant to a query image from the internet with minimal search time. Conventional Content-Based Image Retrieval (CBIR) systems produce retrieval results for either colour images or greyscale images alone; moreover, they are complex and consume considerable time to produce significant retrieval results. These problems are overcome by the methodologies proposed in this work. In this paper, General Images (GI) and Medical Images (MI) are retrieved using a deep learning architecture. The proposed system comprises a feature computation module, a Retrieval Convolutional Neural Network (RETCNN) module, and a distance computation algorithm. The distance computation algorithm computes the distances between the query image and the images in the datasets and produces the retrieval results. The average precision and recall of the proposed RETCNN-based CBIR system are 98.98% and 99.15% for the GI category, and 99.04% and 98.89% for the MI category, respectively. These experimental results demonstrate the higher image retrieval rate achieved by the proposed system.
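The distance computation step ranks database feature vectors by their distance to the query vector. A minimal sketch with Euclidean and cosine options; the metric choice, feature dimensionality, and feature source are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def retrieve(query_vec, index_vecs, k=5, metric="euclidean"):
    """Rank database images by distance to the query feature vector and
    return the indices of the k nearest."""
    if metric == "euclidean":
        d = np.linalg.norm(index_vecs - query_vec, axis=1)
    else:  # cosine distance
        num = index_vecs @ query_vec
        den = np.linalg.norm(index_vecs, axis=1) * np.linalg.norm(query_vec)
        d = 1.0 - num / den
    return np.argsort(d)[:k]

rng = np.random.default_rng(3)
db = rng.random((1000, 128))   # e.g., 128-D deep features (illustrative)
print(retrieve(db[42] + 0.01 * rng.random(128), db))  # index 42 ranks first
```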
{"title":"Performance analysis of image retrieval system using deep learning techniques.","authors":"Selvalakshmi B, Hemalatha K, Kumarganesh S, Vijayalakshmi P","doi":"10.1080/0954898X.2025.2451388","DOIUrl":"https://doi.org/10.1080/0954898X.2025.2451388","url":null,"abstract":"<p><p>The image retrieval is the process of retrieving the relevant images to the query image with minimal searching time in internet. The problem of the conventional Content-Based Image Retrieval (CBIR) system is that they produce retrieval results for either colour images or grey scale images alone. Moreover, the CBIR system is more complex which consumes more time period for producing the significant retrieval results. These problems are overcome through the proposed methodologies stated in this work. In this paper, the General Image (GI) and Medical Image (MI) are retrieved using deep learning architecture. The proposed system is designed with feature computation module, Retrieval Convolutional Neural Network (RETCNN) module, and Distance computation algorithm. The distance computation algorithm is used to compute the distances between the query image and the images in the datasets and produces the retrieval results. The average precision and recall for the proposed RETCNN-based CBIRS is 98.98% and 99.15% respectively for GI category, and the average precision and recall for the proposed RETCNN-based CBIRS are 99.04% and 98.89% respectively for MI category. The significance of these experimental results is used to produce the higher image retrieval rate of the proposed system.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1-21"},"PeriodicalIF":1.1,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143016857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-17 | DOI: 10.1080/0954898X.2024.2443622
A novel efficient data storage and data auditing in cloud environment using enhanced child drawing development optimization strategy.
Aruna Kari Balakrishnan, Arunachalaperumal Chellaperumal, Sudha Lakshmanan, Sureka Vijayakumar
Optimization of the cloud-based data structures is carried out using the Adaptive Level and Skill Rate-based Child Drawing Development Optimization (ALSR-CDDO) algorithm, which also reduces the overall computation and communication cost by selecting these data structures optimally. Data is stored in the cloud platform using the Divide and Conquer Table (D&CT) method, which generates a location table and an information table. Details such as the file information, file ID, version number, and user ID are held in the information table, and every time data is deleted or updated, its version number is modified. Whenever an update takes place using D&CT, the location table is also updated; it records the location of each file at the Cloud Service Provider (CSP). Once the data is stored in the CSP, auditing is performed on the stored data: both dynamic and batch auditing are carried out, even when the data is updated dynamically in the CSP. The security offered by the proposed scheme is verified by comparing it with other existing auditing schemes.
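The D&CT bookkeeping described above can be pictured as two keyed tables whose version numbers advance on every update. A minimal Python sketch; the field names and layout are assumptions, not the paper's exact table schema.

```python
# Information table keyed by file ID; location table maps files to CSP blocks.
info_table = {}      # file_id -> {file_info, version, user_id}
location_table = {}  # file_id -> CSP block location

def store(file_id, file_info, user_id, location):
    """Insert a new file record at version 1."""
    info_table[file_id] = {"file_info": file_info, "version": 1,
                           "user_id": user_id}
    location_table[file_id] = location

def update(file_id, new_info, new_location=None):
    """Every update bumps the version; a move also refreshes the location."""
    info_table[file_id]["file_info"] = new_info
    info_table[file_id]["version"] += 1
    if new_location is not None:
        location_table[file_id] = new_location

store("f1", "report.pdf metadata", user_id="u7", location="csp-node-3/blk-12")
update("f1", "report.pdf metadata v2", new_location="csp-node-5/blk-02")
print(info_table["f1"]["version"], location_table["f1"])  # 2 csp-node-5/blk-02
```

An auditor can then check a stored block against the recorded version and location before running dynamic or batch verification.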
{"title":"A novel efficient data storage and data auditing in cloud environment using enhanced child drawing development optimization strategy.","authors":"Aruna Kari Balakrishnan, Arunachalaperumal Chellaperumal, Sudha Lakshmanan, Sureka Vijayakumar","doi":"10.1080/0954898X.2024.2443622","DOIUrl":"https://doi.org/10.1080/0954898X.2024.2443622","url":null,"abstract":"<p><p>The optimization on the cloud-based data structures is carried out using Adaptive Level and Skill Rate-based Child Drawing Development Optimization algorithm (ALSR-CDDO). Also, the overall cost required in computing and communicating is reduced by optimally selecting these data structures by the ALSR-CDDO algorithm. The storage of the data in the cloud platform is performed using the Divide and Conquer Table (D&CT). The location table and the information table are generated using the D&CT method. The details, such as the file information, file ID, version number, and user ID, are all present in the information table. Every time data is deleted or updated, and its version number is modified. Whenever an update takes place using D&CT, the location table also gets upgraded. The information regarding the location of a file in the Cloud Service Provider (CSP) is given in the location table. Once the data is stored in the CSP, the auditing of the data is then performed on the stored data. Both dynamic and batch auditing are carried out on the stored data, even if it gets updated dynamically in the CSP. The security offered by the executed scheme is verified by contrasting it with other existing auditing schemes.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1-41"},"PeriodicalIF":1.1,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143016854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}