The degree to which customers express satisfaction with a product on Twitter and other social media platforms is increasingly used to evaluate product quality. However, the volume and variety of textual data challenge traditional sentiment analysis methods, and the nuanced, context-dependent nature of product-related opinions is difficult for existing tools to capture. This research addresses this gap by utilizing graph-based modelling strategies to capture the intricacies of real-world data. The Graph-based Quickprop method for Product Quality Enhancement (GQP-PQE) constructs a graph model from the Sentiment140 dataset of 1.6 million tweets, where individuals are nodes and interactions are edges. Experimental results show a significant increase in sentiment classification accuracy, demonstrating the method's efficacy. This contribution underscores the importance of relational structures in sentiment analysis and provides a robust framework for extracting actionable insights from user-generated content, leading to improved product quality evaluations. GQP-PQE advances sentiment analysis and offers practical implications for businesses seeking to enhance product quality through a better understanding of consumer feedback on social media.
{"title":"Sentiment analysis using graph-based Quickprop method for product quality enhancement.","authors":"Raj Kumar Veerasamy Subramani, Thirumoorthy Kumaresan","doi":"10.1080/0954898X.2024.2410777","DOIUrl":"10.1080/0954898X.2024.2410777","url":null,"abstract":"<p><p>The degree to which customers express satisfaction with a product on Twitter and other social media platforms is increasingly used to evaluate product quality. However, the volume and variety of textual data make traditional sentiment analysis methods challenging. The nuanced and context-dependent nature of product-related opinions presents a challenge for existing tools. This research addresses this gap by utilizing complex graph-based modelling strategies to capture the intricacies of real-world data. The Graph-based Quickprop Method constructs a graph model using the Sentiment140 dataset with 1.6 million tweets, where individuals are nodes and interactions are edges. Experimental results show a significant increase in sentiment classification accuracy, demonstrating the method's efficacy. This contribution underscores the importance of relational structures in sentiment analysis and provides a robust framework for extracting actionable insights from user-generated content, leading to improved product quality evaluations. The GQP-PQE method advances sentiment analysis and offers practical implications for businesses seeking to enhance product quality through a better understanding of consumer feedback on social media.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1996-2018"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142481200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Natural language is frequently employed for information exchange between humans and computers in modern digital environments. Natural Language Processing (NLP) is a basic requirement for technological advancement in the field of speech recognition. Language identification (LID) is a prerequisite for further NLP tasks such as speech-to-text translation, speech-to-speech translation, speaker recognition, and speech information retrieval. In this paper, we developed a LID model for Ethio-Semitic languages using a hybrid convolutional recurrent neural network (CRNN) together with a combined feature set of Mel frequency cepstral coefficients (MFCC) and mel-spectrograms. The study focused on four Ethio-Semitic languages: Amharic, Ge'ez, Guragigna, and Tigrinya. By using data augmentation for the selected languages, we expanded our original dataset from 8 hours of audio to 24 hours and 40 minutes. The proposed features achieved average accuracies of 98.1%, 98.6%, and 99.9% for testing, validation, and training, respectively. The results show that the CRNN model with the combined mel-spectrogram + MFCC features achieved the best results compared to other existing models.
{"title":"Speaker-based language identification for Ethio-Semitic languages using CRNN and hybrid features.","authors":"Malefia Demilie Melese, Amlakie Aschale Alemu, Ayodeji Olalekan Salau, Ibrahim Gashaw Kasa","doi":"10.1080/0954898X.2024.2359610","DOIUrl":"10.1080/0954898X.2024.2359610","url":null,"abstract":"<p><p>Natural language is frequently employed for information exchange between humans and computers in modern digital environments. Natural Language Processing (NLP) is a basic requirement for technological advancement in the field of speech recognition. For additional NLP activities like speech-to-text translation, speech-to-speech translation, speaker recognition, and speech information retrieval, language identification (LID) is a prerequisite. In this paper, we developed a Language Identification (LID) model for Ethio-Semitic languages. We used a hybrid approach (a convolutional recurrent neural network (CRNN)), in addition to a mixed (Mel frequency cepstral coefficient (MFCC) and mel-spectrogram) approach, to build our LID model. The study focused on four Ethio-Semitic languages: Amharic, Ge'ez, Guragigna, and Tigrinya. By using data augmentation for the selected languages, we were able to expand our original dataset of 8 h of audio data to 24 h and 40 min. The proposed selected features, when evaluated, achieved an average performance accuracy of 98.1%, 98.6%, and 99.9% for testing, validation, and training, respectively. The results show that the CRNN model with (Mel-Spectrogram + MFCC) combination feature achieved the best results when compared to other existing models.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1313-1335"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141238784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing (CC) is a transformative development in the Information Technology (IT) and communication fields. Security and internet connectivity are the major factors slowing the proliferation of CC. Recently, a new kind of distributed denial of service (DDoS) attack, known as the Economic Denial of Sustainability (EDoS) attack, has been emerging. Though EDoS attacks are currently small in scale, they can be expected to grow in tandem with the expansion of cloud usage. Here, an EfficientNet-B3-Attn-2 fused Deep Quantum Neural Network (EfficientNet-DQNN) is presented for EDoS detection. Initially, the cloud is simulated, and the input log file is then pre-processed using Z-Score Normalization (ZSN). Afterwards, feature fusion (FF) is accomplished using a Deep Neural Network (DNN) with Kulczynski similarity. Then, data augmentation (DA) is performed by oversampling with the Synthetic Minority Over-sampling Technique (SMOTE). Finally, attack detection is conducted using EfficientNet-DQNN, which is formed by incorporating EfficientNet-B3-Attn-2 into a DQNN. EfficientNet-DQNN attained an F1-score of 89.8%, accuracy of 90.4%, precision of 91.1%, and recall of 91.2% on the BoT-IoT dataset with 9-fold cross-validation.
{"title":"EfficientNet-deep quantum neural network-based economic denial of sustainability attack detection to enhance network security in cloud.","authors":"Mariappan Navaneethakrishnan, Maharajan Robinson Joel, Sriram Kalavai Palani, Gandhi Jabakumar Gnanaprakasam","doi":"10.1080/0954898X.2024.2361093","DOIUrl":"10.1080/0954898X.2024.2361093","url":null,"abstract":"<p><p>Cloud computing (CC) is a future revolution in the Information technology (IT) and Communication field. Security and internet connectivity are the common major factors to slow down the proliferation of CC. Recently, a new kind of denial of service (DDoS) attacks, known as Economic Denial of Sustainability (EDoS) attack, has been emerging. Though EDoS attacks are smaller at a moment, it can be expected to develop in nearer prospective in tandem with progression in the cloud usage. Here, EfficientNet-B3-Attn-2 fused Deep Quantum Neural Network (EfficientNet-DQNN) is presented for EDoS detection. Initially, cloud is simulated and thereafter, considered input log file is fed to perform data pre-processing. Z-Score Normalization ;(ZSN) is employed to carry out pre-processing of data. Afterwards, feature fusion (FF) is accomplished based on Deep Neural Network (DNN) with Kulczynski similarity. Then, data augmentation (DA) is executed by oversampling based upon Synthetic Minority Over-sampling Technique (SMOTE). At last, attack detection is conducted utilizing EfficientNet-DQNN. Furthermore, EfficientNet-DQNN is formed by incorporation of EfficientNet-B3-Attn-2 with DQNN. In addition, EfficientNet-DQNN attained 89.8% of F1-score, 90.4% of accuracy, 91.1% of precision and 91.2% of recall using BOT-IOT dataset at K-Fold is 9.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1360-1384"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141433400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01. Epub Date: 2024-06-03. DOI: 10.1080/0954898X.2024.2360157
Yunfei Yin, Caihao Huang, Xianjian Bao
The imputation of missing values in multivariate time-series data is a fundamental and widely used data-processing technique. Recently, some studies have exploited Recurrent Neural Networks (RNNs) and Generative Adversarial Networks (GANs) to impute the missing values in multivariate time-series data. However, when faced with datasets with high missing rates, the imputation error of these methods increases dramatically. To this end, we propose a neural network model based on dynamic contribution and attention, denoted ContrAttNet. ContrAttNet consists of three novel modules: a feature attention module, an iLSTM (imputation Long Short-Term Memory) module, and a 1D-CNN (1-Dimensional Convolutional Neural Network) module. ContrAttNet exploits temporal and spatial feature information to predict missing values, where the iLSTM attenuates the memory of the LSTM according to the characteristics of the missing values in order to learn the contributions of different features. Moreover, the feature attention module introduces a contribution-based attention mechanism to calculate supervised weights. Under the influence of these supervised weights, the 1D-CNN processes the time-series data by treating them as spatial features. Experimental results show that ContrAttNet outperforms other state-of-the-art models in missing value imputation for multivariate time-series data, with an average 6% MAPE and 9% MAE on the benchmark datasets.
{"title":"ContrAttNet: Contribution and attention approach to multivariate time-series data imputation.","authors":"Yunfei Yin, Caihao Huang, Xianjian Bao","doi":"10.1080/0954898X.2024.2360157","DOIUrl":"10.1080/0954898X.2024.2360157","url":null,"abstract":"<p><p>The imputation of missing values in multivariate time-series data is a basic and popular data processing technology. Recently, some studies have exploited Recurrent Neural Networks (RNNs) and Generative Adversarial Networks (GANs) to impute/fill the missing values in multivariate time-series data. However, when faced with datasets with high missing rates, the imputation error of these methods increases dramatically. To this end, we propose a neural network model based on dynamic contribution and attention, denoted as <b>ContrAttNet</b>. <b>ContrAttNet</b> consists of three novel modules: feature attention module, iLSTM (imputation Long Short-Term Memory) module, and 1D-CNN (1-Dimensional Convolutional Neural Network) module. <b>ContrAttNet</b> exploits temporal information and spatial feature information to predict missing values, where iLSTM attenuates the memory of LSTM according to the characteristics of the missing values, to learn the contributions of different features. Moreover, the feature attention module introduces an attention mechanism based on contributions, to calculate supervised weights. Furthermore, under the influence of these supervised weights, 1D-CNN processes the time-series data by treating them as spatial features. Experimental results show that <b>ContrAttNet</b> outperforms other state-of-the-art models in the missing value imputation of multivariate time-series data, with average 6% MAPE and 9% MAE on the benchmark datasets.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1336-1359"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141201474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01. Epub Date: 2024-11-12. DOI: 10.1080/0954898X.2024.2421196
Maxime Carriere, Rosario Tomasello, Friedemann Pulvermüller
The ability of humans to store spoken words in verbal working memory and build extensive vocabularies is believed to stem from evolutionary changes in cortical connectivity across primate species. However, the underlying neurobiological mechanisms remain unclear. Why can humans acquire vast vocabularies, while non-human primates cannot? This study addresses this question using brain-constrained neural networks that realize between-species differences in cortical connectivity. It investigates how these structural differences support the formation of neural representations for spoken words and the emergence of verbal working memory, crucial for human vocabulary building. We develop comparative models of frontotemporal and occipital cortices, reflecting human and non-human primate neuroanatomy. Using mean-field and spiking neural networks, we simulate auditory word recognition and examine verbal working memory function. The "human models", characterized by denser inter-area connectivity in core language areas, produced larger cell assemblies than the "monkey models", with specific topographies reflecting semantic properties of the represented words. Crucially, longer-lasting reverberant neural activity was observed in human versus monkey architectures, compatible with robust verbal working memory, a necessary condition for vocabulary building. Our findings offer insights into the structural basis of human-specific symbol learning and verbal working memory, shedding light on humans' unique capacity for large vocabulary acquisition.
{"title":"Can human brain connectivity explain verbal working memory?","authors":"Maxime Carriere, Rosario Tomasello, Friedemann Pulvermüller","doi":"10.1080/0954898X.2024.2421196","DOIUrl":"10.1080/0954898X.2024.2421196","url":null,"abstract":"<p><p>The ability of humans to store spoken words in verbal working memory and build extensive vocabularies is believed to stem from evolutionary changes in cortical connectivity across primate species. However, the underlying neurobiological mechanisms remain unclear. Why can humans acquire vast vocabularies, while non-human primates cannot? This study addresses this question using brain-constrained neural networks that realize between-species differences in cortical connectivity. It investigates how these structural differences support the formation of neural representations for spoken words and the emergence of verbal working memory, crucial for human vocabulary building. We develop comparative models of frontotemporal and occipital cortices, reflecting human and non-human primate neuroanatomy. Using meanfield and spiking neural networks, we simulate auditory word recognition and examine verbal working memory function. The \"human models\", characterized by denser inter-area connectivity in core language areas, produced larger cell assemblies than the \"monkey models\", with specific topographies reflecting semantic properties of the represented words. Crucially, longer-lasting reverberant neural activity was observed in human versus monkey architectures, compatible with robust verbal working memory, a necessary condition for vocabulary building. Our findings offer insights into the structural basis of human-specific symbol learning and verbal working memory, shedding light on humans' unique capacity for large vocabulary acquisition.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"2106-2147"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142632807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01. Epub Date: 2024-10-13. DOI: 10.1080/0954898X.2024.2392770
Thimmakkondu Babuji Sivakumar, Shahul Hameed Hasan Hussain, R Balamanigandan
The integration of IoT and cloud services enhances communication and quality of life, while predictive analytics powered by AI and deep learning enables proactive healthcare. Deep learning, a subset of machine learning, efficiently analyzes vast datasets, offering rapid disease prediction. Leveraging recurrent neural networks on electronic health records improves accuracy for timely intervention and preventative care. In this manuscript, Internet of Things and Cloud Computing-based Disease Diagnosis using an Optimized Improved Generative Adversarial Network in a Smart Healthcare System (IOT-CC-DD-OICAN-SHS) is proposed. Initially, Internet of Things (IoT) devices collect diabetes, chronic kidney disease, and heart disease data from patients via wearables and intelligent sensors, and the patients' data are then stored in the cloud. These cloud data are pre-processed into a suitable format. The pre-processed dataset is fed into the Improved Generative Adversarial Network (IGAN), which reliably classifies the data as disease-free or diseased. The IGAN is optimized using the Flamingo Search Optimization Algorithm (FSOA). The proposed technique is implemented in Java using CloudSim and examined using several performance metrics. The proposed method attains greater accuracy and specificity with lower execution time compared to the existing methodologies IoT-C-SHMS-HDP-DL, PPEDL-MDTC, and CSO-CLSTM-DD-SHS.
{"title":"Internet of Things and Cloud Computing-based Disease Diagnosis using Optimized Improved Generative Adversarial Network in Smart Healthcare System.","authors":"Thimmakkondu Babuji Sivakumar, Shahul Hameed Hasan Hussain, R Balamanigandan","doi":"10.1080/0954898X.2024.2392770","DOIUrl":"10.1080/0954898X.2024.2392770","url":null,"abstract":"<p><p>The integration of IoT and cloud services enhances communication and quality of life, while predictive analytics powered by AI and deep learning enables proactive healthcare. Deep learning, a subset of machine learning, efficiently analyzes vast datasets, offering rapid disease prediction. Leveraging recurrent neural networks on electronic health records improves accuracy for timely intervention and preventative care. In this manuscript, Internet of Things and Cloud Computing-based Disease Diagnosis using Optimized Improved Generative Adversarial Network in Smart Healthcare System (IOT-CC-DD-OICAN-SHS) is proposed. Initially, an Internet of Things (IoT) device collects diabetes, chronic kidney disease, and heart disease data from patients via wearable devices and intelligent sensors and then saves the patient's large data in the cloud. These cloud data are pre-processed to turn them into a suitable format. The pre-processed dataset is sent into the Improved Generative Adversarial Network (IGAN), which reliably classifies the data as disease-free or diseased. Then, IGAN was optimized using the Flamingo Search optimization algorithm (FSOA). The proposed technique is implemented in Java using Cloud Sim and examined utilizing several performance metrics. The proposed method attains greater accuracy and specificity with lower execution time compared to existing methodologies, IoT-C-SHMS-HDP-DL, PPEDL-MDTC and CSO-CLSTM-DD-SHS respectively.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1752-1775"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142481198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01. Epub Date: 2024-07-08. DOI: 10.1080/0954898X.2024.2367480
Ashutosh Kumar, Garima Verma
Cloud computing is an on-demand, virtualization-based technology for developing, configuring, and modifying applications online through the internet. It enables users to handle operations such as storage, back-up, and recovery of data, data analysis, delivery of software applications, implementation of new services and applications, hosting of websites and blogs, and streaming of audio and video files. While it offers many benefits, it also suffers from cloud security problems such as data leakage, data loss, and cyber attacks. To address these security concerns, researchers have developed a variety of authentication mechanisms; the authentication procedure in the suggested method is multi-levelled. Accordingly, an improved Quantum Key Distribution (QKD) method is offered to strengthen cloud security against different types of security risks. Key generation for the enhanced QKD is based on attribute-based encryption (ABE), a public-key cryptography approach; specifically, Ciphertext-Policy ABE (CPABE) is used in the improved QKD. The improved QKD scored a reduced KCA attack rating of 0.3193, which is superior to CMMLA (0.7915), CPABE (0.8916), AES (0.5277), Blowfish (0.6144), and ECC (0.4287). Finally, this multi-level authentication using the improved QKD approach is analysed under various measures and validates the enhancement over state-of-the-art models.
{"title":"Multi-level authentication for security in cloud using improved quantum key distribution.","authors":"Ashutosh Kumar, Garima Verma","doi":"10.1080/0954898X.2024.2367480","DOIUrl":"10.1080/0954898X.2024.2367480","url":null,"abstract":"<p><p>Cloud computing is an on-demand virtual-based technology to develop, configure, and modify applications online through the internet. It enables the users to handle various operations such as storage, back-up, and recovery of data, data analysis, delivery of software applications, implementation of new services and applications, hosting websites and blogs, and streaming of audio and video files. Thereby, it provides us many benefits although it is backlashed due to problems related to cloud security like data leakage, data loss, cyber attacks, etc. To address the security concerns, researchers have developed a variety of authentication mechanisms. This means that the authentication procedure used in the suggested method is multi-levelled. As a result, a better QKD method is offered to strengthen cloud security against different types of security risks. Key generation for enhanced QKD is based on the ABE public key cryptography approach. Here, an approach named CPABE is used in improved QKD. The Improved QKD scored the reduced KCA attack ratings of 0.3193, this is superior to CMMLA (0.7915), CPABE (0.8916), AES (0.5277), Blowfish (0.6144), and ECC (0.4287), accordingly. Finally, this multi-level authentication using an improved QKD approach is analysed under various measures and validates the enhancement over the state-of-the-art models.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1443-1463"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141555959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-01. Epub Date: 2024-09-16. DOI: 10.1080/0954898X.2024.2389248
Loganayagi T, Pooja Panapana, Ganji Ramanjaiah, Smritilekha Das
This research presents a novel deep learning framework for MRI-based brain tumour (BT) detection. The input brain MRI image is first acquired from the dataset. The images are then passed to an image pre-processing step, where a median filter eliminates noise and artefacts from the input image. The tumour region segmentation module receives the denoised image and uses RP-Net to segment the BT region. Following that, to prevent overfitting, image augmentation is carried out using methods including rotation, flipping, shifting, and colour augmentation. The augmented image is then forwarded to the feature extraction phase, where features such as GLCM and the proposed EGDP, formulated by including entropy with GDP, are extracted. Finally, based on the extracted features, BT detection is accomplished with the proposed deep convolutional belief network (DCvB-Net), which is formulated using a deep convolutional neural network and a deep belief network. The devised DCvB-Net for BT detection achieves a true negative rate, accuracy, and true positive rate of 93%, 92.3%, and 93.1%, respectively.
{"title":"EGDP based feature extraction and deep convolutional belief network for brain tumor detection using MRI image.","authors":"Loganayagi T, Pooja Panapana, Ganji Ramanjaiah, Smritilekha Das","doi":"10.1080/0954898X.2024.2389248","DOIUrl":"10.1080/0954898X.2024.2389248","url":null,"abstract":"<p><p>This research presents a novel deep learning framework for MRI-based brain tumour (BT) detection. The input brain MRI image is first acquired from the dataset. Once the images have been obtained, they are passed to an image preprocessing step where a median filter is used to eliminate noise and artefacts from the input image. The tumour-tumour region segmentation module receives the denoised image and it uses RP-Net to segment the BT region. Following that, in order to prevent overfitting, image augmentation is carried out utilizing methods including rotating, flipping, shifting, and colour augmentation. Later, the augmented image is forwarded to the feature extraction phase, wherein features like GLCM and proposed EGDP formulated by including entropy with GDP are extracted. Finally, based on the extracted features, BT detection is accomplished based on the proposed deep convolutional belief network (DCvB-Net), which is formulated using the deep convolutional neural network and deep belief network.The devised DCvB-Net for BT detection is investigated for its performance concerning true negative rate, accuracy, and true positive rate is established to have acquired values of 93%, 92.3%, and 93.1% correspondingly.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1721-1751"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142301213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless Sensor Networks (WSNs) are mainly used for data monitoring and collection. They are typically made up of numerous small, inexpensive sensor nodes used to gather data remotely. Due to the increasing intelligence, frequency, and complexity of malicious attacks, traditional attack detection is becoming less effective. In this manuscript, Optimized Memory Augmented Graph Neural Network-based DoS Attack Detection in Wireless Sensor Networks (DoS-AD-MAGNN-WSN) is proposed. The input data is drawn from the WSN-DS dataset and pre-processed by a secure adaptive event-triggered filter for handling negation and stemming. The output is fed to nested patch-based feature extraction to extract the optimal features, which are given to the MAGNN for effective classification of blackhole, flooding, grayhole, scheduling, and normal traffic. The weight parameters of the MAGNN are optimized with gradient-based optimizers for better accuracy. The proposed method is implemented in Python, and it attains 31.20%, 23.30%, and 26.43% higher accuracy compared with existing techniques: the CNN-LSTM-based method for Denial of Service attack detection in WSNs (CNN-DoS-AD-WSN), trust-based DoS attack detection in WSNs for reliable data transmission (TB-DoS-AD-WSN-RDT), and FBDR-fuzzy-based DoS attack detection with a recovery mechanism for WSNs (FBDR-DoS-AD-RM-WSN), respectively.
{"title":"Optimized memory augmented graph neural network-based DoS attacks detection in wireless sensor network.","authors":"Ayyasamy Pushpalatha, Sunkari Pradeep, Matta Venkata Pullarao, Shanmuganathan Sankar","doi":"10.1080/0954898X.2024.2392786","DOIUrl":"10.1080/0954898X.2024.2392786","url":null,"abstract":"<p><p>Wireless Sensor Networks (WSNs) are mainly used for data monitoring and collection purposes. Usually, they are made up of numerous sensor nodes that are utilized to gather data remotely. Each sensor node is small and inexpensive. Due to the increasing intelligence, frequency, and complexity of these malicious attacks, traditional attack detection is less effective. In this manuscript, Optimized Memory Augmented Graph Neural Network-based DoS Attacks Detection in Wireless Sensor Network (DoS-AD-MAGNN-WSN) is proposed. Here, the input data is amassed from WSN-DS dataset. The input data is pre-processing by secure adaptive event-triggered filter for handling negation and stemming. Then, the output is fed to nested patch-based feature extraction to extract the optimal features. The extracted features are given to MAGNN for the effective classification of blackhole, flooding, grayhole, scheduling, and normal. The weight parameter of MAGNN is optimized by gradient-based optimizers for better accuracy. The proposed method is activated in Python, and it attains 31.20%, 23.30%, and 26.43% higher accuracy analyzed with existing techniques, such as CNN-LSTM-based method for Denial of Service attacks detection in WSNs (CNN-DoS-AD-WSN), Trust-based DoS attack detection in WSNs for reliable data transmission (TB-DoS-AD-WSN-RDT), and FBDR-Fuzzy-based DoS attack detection with recovery mechanism for WSNs (FBDR-DoS-AD-RM-WSN), respectively.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1810-1836"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142513157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient resource utilization method can greatly reduce expenses and wasted resources. Typical cloud resource planning approaches lack support for the emerging paradigm in terms of asset management speed and optimization. The use of cloud computing relies heavily on task planning and the allocation of resources. The task scheduling problem concerns arranging and allotting application jobs supplied by customers to Virtual Machines (VMs) in a specific manner, and it needs to be stated precisely to increase scheduling efficiency. In this work, task scheduling in the cloud environment is developed using optimization techniques, with a model that jointly optimizes task scheduling and VM placement over the cloud environment. A new hybrid meta-heuristic optimization algorithm is developed, named the Hybrid Lemurs-based Gannet Optimization Algorithm (HL-GOA). The multi-objective function considers constraints such as cost, time, resource utilization, makespan, and throughput. The proposed model is validated and compared against existing methodologies. The total time required for scheduling and VM placement is reduced by 30.23%, 6.25%, 11.76%, and 10.44% relative to ESO, RSO, LO, and GOA with 2 VMs. The simulation outcomes reveal that the developed model effectively resolves the scheduling and VM placement issues.
{"title":"Designing an optimal task scheduling and VM placement in the cloud environment with multi-objective constraints using Hybrid Lemurs and Gannet Optimization Algorithm.","authors":"Kapil Vhatkar, Atul Baliram Kathole, Savita Lonare, Jayashree Katti, Vinod Vijaykumar Kimbahune","doi":"10.1080/0954898X.2024.2412678","DOIUrl":"10.1080/0954898X.2024.2412678","url":null,"abstract":"<p><p>An efficient resource utilization method can greatly reduce expenses and unwanted resources. Typical cloud resource planning approaches lack support for the emerging paradigm regarding asset management speed and optimization. The use of cloud computing relies heavily on task planning and allocation of resources. The task scheduling issue is more crucial in arranging and allotting application jobs supplied by customers on Virtual Machines (VM) in a specific manner. The task planning issue needs to be specifically stated to increase scheduling efficiency. The task scheduling in the cloud environment model is developed using optimization techniques. This model intends to optimize both the task scheduling and VM placement over the cloud environment. In this model, a new hybrid-meta-heuristic optimization algorithm is developed named the Hybrid Lemurs-based Gannet Optimization Algorithm (HL-GOA). The multi-objective function is considered with constraints like cost, time, resource utilization, makespan, and throughput. The proposed model is further validated and compared against existing methodologies. The total time required for scheduling and VM placement is 30.23%, 6.25%, 11.76%, and 10.44% reduced than ESO, RSO, LO, and GOA with 2 VMs. The simulation outcomes revealed that the developed model effectively resolved the scheduling and VL placement issues.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"2075-2105"},"PeriodicalIF":1.6,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142395378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}