Pub Date: 2024-06-11  DOI: 10.1080/0954898X.2024.2354477
J Sulthan Alikhan, S Miruna Joe Amali, R Karthick
In this paper, a Quaternion Fractional Order Meixner Moments-based Deep Siamese Domain Adaptation Convolutional Neural Network big data analytical technique is proposed for improving cloud data security (DSDA-CNN-QFOMM-BD-CDS). The proposed methodology comprises six phases: data collection, transmission, pre-processing, storage, analysis, and security. Big data analysis begins with the data collection phase. A Deep Siamese Domain Adaptation Convolutional Neural Network (DSDA-CNN) is applied to categorize the types of attacks in the cloud database during the data analysis phase. During the data security phase, Quaternion Fractional Order Meixner Moments (QFOMM) are employed to protect the cloud data through encryption and decryption. The proposed method is implemented in Java and assessed using performance metrics including precision, sensitivity, accuracy, recall, specificity, F-measure, computational complexity, information loss, compression ratio, throughput, encryption time, and decryption time. Compared with existing methods, namely a fractional-order discrete Tchebyshev encryption-based big data analytical model for cloud data safety using an enhanced Elman spike neural network (EESNN-FrDTM-BD-CDS) and a scheme architecture for secure authentication and data sharing in a cloud-enabled big data environment (LZMA-DBSCAN-BD-CDS), the proposed method offers 23.31%, 15.64%, and 18.89% better accuracy and 36.69%, 17.25%, and 19.96% less information loss.
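The attack-categorization step of a Siamese network can be illustrated with a toy sketch: a query record's embedding (produced here by a hypothetical upstream embedding stage, not the paper's DSDA-CNN) is assigned to the attack class whose prototype embedding lies nearest in Euclidean distance.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def siamese_classify(query_emb, prototypes):
    """Assign the attack class whose prototype embedding is nearest to the
    query embedding: the decision rule a Siamese matcher reduces to."""
    return min(prototypes, key=lambda label: euclidean(query_emb, prototypes[label]))
```

The class labels and prototype vectors are illustrative; in the paper the embeddings would come from the trained DSDA-CNN.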
Title: Deep Siamese domain adaptation convolutional neural network-based quaternion fractional order Meixner moments fostered big data analytical method for enhancing cloud data security. (Network-Computation in Neural Systems, pp. 1-28)
Pub Date: 2024-06-10  DOI: 10.1080/0954898X.2024.2358957
J Karthick Myilvahanan, N Mohana Sundaram
Predicting the stock market is a significant task, and successful prediction of stock rates helps in making correct decisions. Stock market prediction is challenging because the data are noisy, chaotic, and non-stationary. In this research, a support vector machine (SVM) is devised to perform effective stock market prediction. At first, the input time-series data are pre-processed using a standard scaler. Then, time-intrinsic features are extracted, and suitable features are selected in the feature selection stage by eliminating other features using recursive feature elimination. Afterwards, Long Short-Term Memory (LSTM)-based prediction is performed, wherein the LSTM is trained using Aquila circle-inspired optimization (ACIO), newly introduced by merging the Aquila optimizer (AO) with the circle-inspired optimization algorithm (CIOA). In parallel, delay-based matrix formation is conducted on the input time-series data, after which convolutional neural network (CNN)-based prediction is performed, with the CNN tuned by the same ACIO. Finally, stock market prediction is executed by an SVM that fuses the predicted outputs obtained from the LSTM-based and CNN-based predictions. The SVM attains better outcomes, with a minimum mean absolute percentage error (MAPE) of about 0.378 and a normalized root-mean-square error (RMSE) of about 0.294.
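The delay-based matrix formation can be sketched as a standard delay embedding; the paper does not spell out its exact construction, so the sliding-window layout below is an assumption.

```python
def delay_matrix(series, window):
    """Delay-based matrix: row i holds the `window` consecutive samples
    starting at position i, turning a 1-D series into a 2-D input."""
    return [series[i:i + window] for i in range(len(series) - window + 1)]
```

Each row then serves as one 2-D input sample for the CNN branch.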
Title: Support vector machine-based stock market prediction using long short-term memory and convolutional neural network with aquila circle inspired optimization. (Network-Computation in Neural Systems, pp. 1-36)
Pub Date: 2024-06-10  DOI: 10.1080/0954898X.2024.2363353
Mandeep Kumar, Jahid Ali
Wireless Sensor Networks (WSNs) are susceptible to two kinds of attacks: active and passive. In an active attack, the attacker directly communicates with the target system or network; in a passive attack, the attacker is in indirect contact with the network. To preserve the functionality and dependability of wireless sensor networks, this research detects and mitigates black hole attacks. A Deep Learning (DL)-based black hole attack detection model is designed. WSN simulation is the beginning stage of this process. Routing is the key process, in which data are passed to the base station (BS) via the shortest and best route; the proposed Worst Elite Sailfish Optimization (WESFO) is utilized for routing. Black hole attack detection is then performed at the BS using an Auto Encoder (AE), which is trained with the proposed WESFO algorithm. The proposed model is validated in terms of delay, Packet Delivery Rate (PDR), throughput, False-Negative Rate (FNR), and False-Positive Rate (FPR), obtaining outcomes of 25.64 s, 94.83%, 119.3, 0.084, and 0.135, respectively.
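The detection step can be illustrated with a minimal reconstruction-error sketch: an autoencoder trained on normal traffic (here just a stand-in callable, not the paper's WESFO-trained AE) flags a node when it reconstructs that node's traffic features poorly.

```python
def reconstruction_error(x, x_hat):
    # Mean squared error between the input and its reconstruction.
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def flag_black_hole(features, autoencoder, threshold):
    """Flag a node as a suspected black hole when an autoencoder trained on
    normal traffic reconstructs its traffic features poorly."""
    return reconstruction_error(features, autoencoder(features)) > threshold
```

The threshold would be calibrated on normal traffic; the value used in the paper is not stated.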
Title: A secure worst elite sailfish optimizer based routing and deep learning for black hole attack detection. (Network-Computation in Neural Systems, pp. 1-26)
Pub Date: 2024-06-04  DOI: 10.1080/0954898X.2024.2359610
Malefia Demilie Melese, Amlakie Aschale Alemu, Ayodeji Olalekan Salau, Ibrahim Gashaw Kasa
Natural language is frequently employed for information exchange between humans and computers in modern digital environments. Natural Language Processing (NLP) is a basic requirement for technological advancement in the field of speech recognition, and language identification (LID) is a prerequisite for further NLP activities such as speech-to-text translation, speech-to-speech translation, speaker recognition, and speech information retrieval. In this paper, we developed an LID model for Ethio-Semitic languages. We used a hybrid convolutional recurrent neural network (CRNN) together with mixed features (Mel frequency cepstral coefficients (MFCC) and mel-spectrograms) to build our LID model. The study focused on four Ethio-Semitic languages: Amharic, Ge'ez, Guragigna, and Tigrinya. By applying data augmentation to the selected languages, we expanded our original dataset of 8 h of audio to 24 h and 40 min. The selected features achieved average accuracies of 98.1%, 98.6%, and 99.9% for testing, validation, and training, respectively. The results show that the CRNN model with the combined Mel-spectrogram + MFCC features achieved the best results compared to other existing models.
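The augmentation step can be sketched roughly as follows; the specific transforms here (a circular time shift and additive noise, yielding three variants per clip) are illustrative assumptions, not the paper's stated recipe for growing 8 h of audio to 24 h 40 min.

```python
import random

def time_shift(samples, shift):
    # Circularly shift the waveform by `shift` samples.
    shift %= len(samples)
    return samples[-shift:] + samples[:-shift]

def add_noise(samples, scale, rng):
    # Add small uniform noise to each sample.
    return [s + rng.uniform(-scale, scale) for s in samples]

def augment(samples, seed=0):
    """Produce the original clip plus two perturbed variants, roughly
    tripling the amount of audio available for training."""
    rng = random.Random(seed)
    return [samples,
            time_shift(samples, len(samples) // 4),
            add_noise(samples, 0.01, rng)]
```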
Title: Speaker-based language identification for Ethio-Semitic languages using CRNN and hybrid features. (Network-Computation in Neural Systems, pp. 1-23)
Pub Date: 2024-06-03  DOI: 10.1080/0954898X.2024.2360157
Yunfei Yin, Caihao Huang, Xianjian Bao
The imputation of missing values in multivariate time-series data is a basic and popular data processing technology. Recently, some studies have exploited Recurrent Neural Networks (RNNs) and Generative Adversarial Networks (GANs) to impute/fill the missing values in multivariate time-series data. However, when faced with datasets with high missing rates, the imputation error of these methods increases dramatically. To this end, we propose a neural network model based on dynamic contribution and attention, denoted as ContrAttNet. ContrAttNet consists of three novel modules: feature attention module, iLSTM (imputation Long Short-Term Memory) module, and 1D-CNN (1-Dimensional Convolutional Neural Network) module. ContrAttNet exploits temporal information and spatial feature information to predict missing values, where iLSTM attenuates the memory of LSTM according to the characteristics of the missing values, to learn the contributions of different features. Moreover, the feature attention module introduces an attention mechanism based on contributions, to calculate supervised weights. Furthermore, under the influence of these supervised weights, 1D-CNN processes the time-series data by treating them as spatial features. Experimental results show that ContrAttNet outperforms other state-of-the-art models in the missing value imputation of multivariate time-series data, with average 6% MAPE and 9% MAE on the benchmark datasets.
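The memory-attenuation idea behind iLSTM can be illustrated with a GRU-D-style decay sketch: the weight on a feature's last observed value shrinks with the time elapsed since it was observed. The exact form used by ContrAttNet is not given here, so this parameterization is an assumption.

```python
import math

def decay_weight(delta, w=0.5, b=0.0):
    """Attenuation factor in (0, 1]: the longer a feature has been missing
    (`delta` time steps since the last observation), the smaller the weight."""
    return math.exp(-max(0.0, w * delta + b))

def impute_step(last_obs, mean, delta, w=0.5, b=0.0):
    # Blend the last observed value toward the feature mean as the gap grows.
    g = decay_weight(delta, w, b)
    return g * last_obs + (1.0 - g) * mean
```

In the full model these decayed values would feed the LSTM cell rather than being emitted directly.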
Title: ContrAttNet: Contribution and attention approach to multivariate time-series data imputation. (Network-Computation in Neural Systems, pp. 1-24)
Pub Date: 2024-05-29  DOI: 10.1080/0954898X.2024.2349275
Arjun Kuruva, C Nagaraju Chiluka
Sentiment Analysis (SA) is a technique for categorizing texts based on the sentimental polarity of people's opinions. This paper introduces an SA model that handles both text and emojis. Two preprocessed datasets are used: data containing text and emojis, and text without emojis. Feature extraction covers text features and text-with-emoji features; the text features include N-grams, modified Term Frequency-Inverse Document Frequency (TF-IDF), and Bag-of-Words (BoW) features extracted from the text. In classification, a Convolutional Neural Network (CNN) and a Multi-Layer Perceptron (MLP) perform the emoji-based SA, with the CNN weights optimized by a new Electric fish Customized Shark Smell Optimization (ECSSO) algorithm. Similarly, the text-based SA is carried out by hybrid Long Short-Term Memory (LSTM) and Recurrent Neural Network (RNN) classifiers. The bagged data are given as input to the classification process via the RNN and LSTM, where the LSTM weights are optimized by the suggested ECSSO algorithm. The mean of the LSTM and RNN outputs then determines the final result. The specificity of the developed scheme is 29.01%, 42.75%, 23.88%, 22.07%, 25.31%, 18.42%, 5.68%, 10.34%, 6.20%, 6.64%, and 6.84% better than other models at the 70% training split. The efficiency of the proposed scheme is computed and evaluated.
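The TF-IDF feature can be sketched in its standard form; the paper's "modified" variant is not specified, so this is plain TF-IDF over tokenized documents.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Standard TF-IDF over tokenized documents: term frequency within a
    document, scaled by the log inverse document frequency of the term."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [
        {term: (count / len(doc)) * math.log(n / df[term])
         for term, count in Counter(doc).items()}
        for doc in docs
    ]
```

Terms appearing in every document get weight zero, so only discriminative tokens contribute to the feature vector.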
Title: Hybrid deep learning approach for sentiment analysis using text and emojis. (Network-Computation in Neural Systems, pp. 1-30)
Pub Date: 2024-05-28  DOI: 10.1080/0954898X.2024.2358961
Jakkuluri Vijaya Kumar, S Maflin Shaby
Recent wireless communication systems require high-gain, lightweight, low-profile, and simple antenna structures to ensure high efficiency and reliability. Existing microstrip patch antenna (MPA) design approaches attain low gain and high return loss. To solve this issue, the geometric dimensions of the antenna should be optimized. An improved Particle Swarm Optimization (PSO) algorithm, combining PSO with a simulated annealing (SA) approach (PSO-SA), is employed in this paper to optimize the width and length of inset-fed rectangular microstrip patch antennas for Ku-band and C-band applications. The inputs to the proposed algorithm are the substrate height, dielectric constant, and resonant frequency; the outputs are the optimized width and length. The return loss and gain of the antenna are considered in the fitness function, and a Feedforward Neural Network (FNN) is employed in the PSO-SA approach to calculate fitness values. The design and optimization of the proposed MPA are implemented in MATLAB. The performance of the optimally designed antenna is evaluated in terms of radiation pattern, return loss, Voltage Standing Wave Ratio (VSWR), gain, computation time, directivity, and convergence speed.
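A minimal sketch of the PSO-SA idea on a toy 1-D objective, assuming a simulated-annealing acceptance test on personal-best updates; the paper's exact hybridization, coefficients, and FNN-based fitness evaluation are not reproduced here.

```python
import math
import random

def pso_sa(objective, bounds, n=12, iters=60, temp=1.0, cool=0.95, seed=0):
    """Minimise `objective` over a 1-D interval with particle swarm updates
    plus a simulated-annealing acceptance rule on personal bests."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n)]   # particle positions
    vs = [0.0] * n                                 # particle velocities
    pb = xs[:]                                     # personal bests
    gb = min(pb, key=objective)                    # global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vs[i] = 0.7 * vs[i] + 1.5 * r1 * (pb[i] - xs[i]) + 1.5 * r2 * (gb - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            delta = objective(xs[i]) - objective(pb[i])
            # SA acceptance: keep improvements, occasionally accept worse moves.
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
                pb[i] = xs[i]
        gb = min(pb + [gb], key=objective)
        temp *= cool
    return gb
```

The SA acceptance lets personal bests escape local minima early on, while the cooling schedule makes the search increasingly greedy.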
Title: Optimizing inset-fed rectangular micro strip patch antenna by improved particle swarm optimization and simulated annealing. (Network-Computation in Neural Systems, pp. 1-31)
Pub Date: 2024-05-28  DOI: 10.1080/0954898X.2024.2346608
David Neels Ponkumar Devadhas, Hephzi Punithavathi Isaac Sugirtharaj, Mary Harin Fernandez, Duraipandy Periyasamy
Automated diagnosis of cancer from skin lesion data has been the focus of numerous research studies. However, these images can be challenging to interpret because of factors such as colour and illumination changes and variation in the sizes and shapes of lesions. To tackle these problems, the proposed model develops an ensemble of deep learning techniques for skin cancer diagnosis. Initially, skin imaging data are collected and preprocessed using resizing and anisotropic diffusion to enhance image quality. Preprocessed images are fed into the Fuzzy C-Means clustering technique to segment the diseased regions. A stacking-based ensemble deep learning approach is used for classification, with an LSTM acting as the meta-classifier and a Deep Neural Network (DNN) and a Convolutional Neural Network (CNN) as its inputs. The segmented images are input to the CNN, and the local binary pattern (LBP) technique is employed to extract features from the image segments for the DNN. The outputs of these two classifiers are fed into the LSTM meta-classifier, which classifies the input data and predicts skin cancer. The proposed approach achieved a higher accuracy of 97%.
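The LBP feature feeding the DNN branch can be sketched at a single pixel; this is the basic 8-neighbour LBP, with the neighbour ordering chosen here purely for illustration.

```python
def lbp_code(img, r, c):
    """8-bit local binary pattern at pixel (r, c): each of the 8 neighbours
    contributes one bit, set when the neighbour >= the centre pixel."""
    centre = img[r][c]
    nbrs = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
            img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    return sum(1 << i for i, v in enumerate(nbrs) if v >= centre)
```

A histogram of these codes over an image segment is the usual LBP texture descriptor.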
Title: Effective prediction of human skin cancer using stacking based ensemble deep learning algorithm. (Network-Computation in Neural Systems, pp. 1-37)
Pub Date: 2024-05-22  DOI: 10.1080/0954898X.2024.2326493
David Femi, Manapakkam Anandan Mukunthan
Nowadays, Deep Learning (DL) techniques are used to automate the identification and diagnosis of plant diseases, thereby enhancing global food security and enabling non-experts to detect these diseases. Among many DL techniques, a Deep Encoder-Decoder Cascaded Network (DEDCNet) model can precisely segment diseased areas from leaf images to differentiate and classify multiple diseases. However, training the model depends on the appropriate selection of hyperparameters, and the network structure is not robust across different parameter settings. Hence, in this manuscript, an Optimized DEDCNet (ODEDCNet) model is proposed for improved leaf disease image segmentation. To choose the best DEDCNet hyperparameters, a novel Dingo Optimization Algorithm (DOA) is included in the model. The DOA is based on the foraging behaviour of dingoes, which comprises exploration and exploitation phases: exploration samples many candidate decisions across the search area, whereas exploitation refines the best decisions within a given region. The segmentation accuracy is used as the fitness value of each dingo for hyperparameter selection. With the chosen hyperparameters, the DEDCNet is trained to segment the leaf disease regions. The segmented images are then passed to pre-trained Convolutional Neural Networks (CNNs) followed by a Support Vector Machine (SVM) for classifying leaf diseases. ODEDCNet performs well on the PlantVillage and Betel Leaf Image datasets, attaining 97.33% accuracy on the former and 97.42% on the latter. Both datasets yield strong recall, F-score, Dice coefficient, and precision values: 97.4%, 97.29%, 97.35%, and 0.9897 for the Betel Leaf Image dataset, and 97.5%, 97.42%, 97.46%, and 0.9901 for PlantVillage, with processing times of 0.07 and 0.06 seconds, respectively.
The achieved outcomes are compared with contemporary optimization algorithms on the considered datasets to assess the efficiency of the DOA.
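The exploration/exploitation search with segmentation accuracy as the dingo fitness can be sketched as follows. This is a minimal illustration only: the hyperparameter names, bounds, population size, and the `toy_accuracy` stand-in (which replaces actual DEDCNet training) are all assumptions, not details taken from the paper.

```python
import random

# Hypothetical search space for two DEDCNet hyperparameters; the paper does
# not list the exact ranges, so these bounds are illustrative only.
BOUNDS = {"learning_rate": (1e-4, 1e-1), "dropout": (0.0, 0.5)}

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def random_dingo():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def dingo_optimize(fitness, pop_size=10, iters=30, seed=0):
    """Minimal Dingo-Optimization-style search: exploration moves a candidate
    toward a randomly chosen peer, exploitation moves it toward the best
    candidate found so far. `fitness` plays the role of segmentation accuracy."""
    random.seed(seed)
    pop = [random_dingo() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for t in range(iters):
        explore_rate = 1.0 - t / iters  # shift from exploration to exploitation
        for i, d in enumerate(pop):
            target = random.choice(pop) if random.random() < explore_rate else best
            step = {k: d[k] + random.uniform(0, 1) * (target[k] - d[k]) for k in d}
            cand = {k: clamp(v, *BOUNDS[k]) for k, v in step.items()}
            if fitness(cand) > fitness(d):  # greedy acceptance
                pop[i] = cand
        best = max(pop + [best], key=fitness)
    return best

# Stand-in fitness: a synthetic "segmentation accuracy" peaking at
# learning_rate=0.01, dropout=0.2 (in practice this would train DEDCNet).
def toy_accuracy(h):
    return 1.0 - abs(h["learning_rate"] - 0.01) - abs(h["dropout"] - 0.2)

best = dingo_optimize(toy_accuracy)
```

In the actual method, each fitness evaluation would train and validate the segmentation network with the candidate hyperparameters, which is why the population and iteration budget matter.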
{"title":"Optimized encoder-decoder cascaded deep convolutional network for leaf disease image segmentation.","authors":"David Femi, Manapakkam Anandan Mukunthan","doi":"10.1080/0954898X.2024.2326493","DOIUrl":"https://doi.org/10.1080/0954898X.2024.2326493","url":null,"abstract":"<p><p>Nowadays, Deep Learning (DL) techniques are being used to automate the identification and diagnosis of plant diseases, thereby enhancing global food security and enabling non-experts to detect these diseases. Among many DL techniques, a Deep Encoder-Decoder Cascaded Network (DEDCNet) model can precisely segment diseased areas from the leaf images to differentiate and classify multiple diseases. On the other hand, the model training depends on the appropriate selection of hyperparameters. Also, this network structure has weak robustness with different parameters. Hence, in this manuscript, an Optimized DEDCNet (ODEDCNet) model is proposed for improved leaf disease image segmentation. To choose the best DEDCNet hyperparameters, a brand-new Dingo Optimization Algorithm (DOA) is included in this model. The DOA depends on the foraging nature of dingoes, which comprises exploration and exploitation phases. In exploration, it attains many predictable decisions in the search area, whereas exploitation enables exploring the best decisions in a provided area. The segmentation accuracy is used as the fitness value of each dingo for hyperparameter selection. By configuring the chosen hyperparameters, the DEDCNet is trained to segment the leaf disease regions. The segmented images are further given to the pre-trained Convolutional Neural Networks (CNNs) followed by the Support Vector Machine (SVM) for classifying leaf diseases. ODEDCNet performs exceptionally well on the PlantVillage and Betel Leaf Image datasets, attaining an astounding 97.33% accuracy on the former and 97.42% accuracy on the latter. 
Both datasets achieve noteworthy recall, F-score, Dice coefficient, and precision values: the Betel Leaf Image dataset shows values of 97.4%, 97.29%, 97.35%, and 0.9897; the PlantVillage dataset shows values of 97.5%, 97.42%, 97.46%, and 0.9901, all completed in remarkably short processing times of 0.07 and 0.06 seconds, respectively. The achieved outcomes are evaluated with the contemporary optimization algorithms using the considered datasets to comprehend the efficiency of DOA.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1-27"},"PeriodicalIF":7.8,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141077390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-21DOI: 10.1080/0954898X.2024.2353665
Manjula Hulagappa Nebagiri, Latha Pillappa Hnumanthappa
Effective management of data is a major issue in a Distributed File System (DFS), such as the cloud. This issue is handled by replicating files effectively, which minimizes data-access time and improves data availability. This paper devises a Fractional Social Optimization Algorithm (FSOA) for replica management along with load balancing in a DFS in the cloud environment. Balancing the DFS workload is the main objective. Here, chunks are created by partitioning each file using Deep Fuzzy Clustering (DFC), and the chunks are then assigned to Virtual Machines (VMs) in a round-robin manner. Load balancing is then performed with the proposed FSOA, considering objectives such as resource use, energy consumption, and migration cost. The FSOA is formulated by uniting the Social Optimization Algorithm (SOA) and Fractional Calculus (FC). Replica management in the DFS is likewise carried out with the proposed FSOA under the same objectives. The FSOA has the smallest load of 0.299, smallest cost of 0.395, smallest energy consumption of 0.510, smallest overhead of 0.358, and smallest throughput of 0.537.
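The chunk-creation and round-robin placement steps, together with a weighted cost of the kind the FSOA would minimize, can be sketched as below. This is a simplified illustration: DFC-driven partitioning is replaced by fixed-size splitting, and the objective weights are assumed, not taken from the paper.

```python
from itertools import cycle

def make_chunks(file_bytes, chunk_size=4):
    """Partition a file into chunks (stand-in for DFC-driven partitioning)."""
    return [file_bytes[i:i + chunk_size]
            for i in range(0, len(file_bytes), chunk_size)]

def round_robin_assign(chunks, vms):
    """Assign chunks to VMs in round-robin order, as described above."""
    placement = {vm: [] for vm in vms}
    for chunk, vm in zip(chunks, cycle(vms)):
        placement[vm].append(chunk)
    return placement

def balance_objective(resource_use, energy, migration_cost, w=(0.4, 0.3, 0.3)):
    """Weighted multi-objective cost combining resource use, energy
    consumption, and migration cost (all normalized to [0, 1]). The
    weights are illustrative assumptions."""
    return w[0] * resource_use + w[1] * energy + w[2] * migration_cost

chunks = make_chunks(b"abcdefghij", chunk_size=4)       # 3 chunks
placement = round_robin_assign(chunks, ["vm1", "vm2"])  # alternating VMs
cost = balance_objective(0.3, 0.5, 0.2)
```

In the full method, the FSOA would search over candidate placements to minimize such a cost, with the fractional-calculus term modifying how each candidate's position is updated between iterations.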
{"title":"Fractional social optimization-based migration and replica management algorithm for load balancing in distributed file system for cloud computing.","authors":"Manjula Hulagappa Nebagiri, Latha Pillappa Hnumanthappa","doi":"10.1080/0954898X.2024.2353665","DOIUrl":"https://doi.org/10.1080/0954898X.2024.2353665","url":null,"abstract":"<p><p>Effective management of data is a major issue in Distributed File System (DFS), like the cloud. This issue is handled by replicating files in an effective manner, which can minimize the time of data access and elevate the data availability. This paper devises a Fractional Social Optimization Algorithm (FSOA) for replica management along with balancing load in DFS in the cloud stage. Balancing the workload for DFS is the main objective. Here, the chunk creation is done by partitioning the file into a different number of chunks considering Deep Fuzzy Clustering (DFC) and then in the round-robin manner the Virtual machine (VM) is assigned. In that case for balancing the load considering certain objectives like resource use, energy consumption and migration cost thereby the load balancing is performed with the proposed FSOA. Here, the FSOA is formulated by uniting the Social optimization algorithm (SOA) and Fractional Calculus (FC). The replica management is done in DFS using the proposed FSOA by considering the various objectives. 
The FSOA has the smallest load of 0.299, smallest cost of 0.395, smallest energy consumption of 0.510, smallest overhead of 0.358, and smallest throughput of 0.537.</p>","PeriodicalId":54735,"journal":{"name":"Network-Computation in Neural Systems","volume":" ","pages":"1-28"},"PeriodicalIF":7.8,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141072363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}