Fault detection, classification, and location prediction are crucial for maintaining the stability and reliability of modern power systems, reducing economic losses, and enhancing system protection sensitivity. This paper presents a novel Hierarchical Deep Learning Approach (HDLA) for accurate and efficient fault diagnosis in transmission lines. HDLA leverages two-stage transformer-based classification and regression models to perform Fault Detection (FD), Fault Type Classification (FTC), and Fault Location Prediction (FLP) directly from synchronized raw three-phase current and voltage samples. By bypassing the need for feature extraction, HDLA significantly reduces computational complexity while achieving superior performance compared to existing deep learning methods. The efficacy of HDLA is validated on a comprehensive dataset encompassing various fault scenarios with diverse types, locations, resistances, inception angles, and noise levels. The results demonstrate significant improvements in accuracy, recall, precision, and F1-score metrics for classification, and Mean Absolute Errors (MAEs) and Root Mean Square Errors (RMSEs) for prediction, showcasing the effectiveness of HDLA for real-time fault diagnosis in power systems.
"Transformer-based deep learning networks for fault detection, classification, and location prediction in transmission lines." Bousaadia Baadji, Soufiane Belagoune, Sif Eddine Boudjellal. Network-Computation in Neural Systems, pp. 1-21. Published 2024-09-03. DOI: 10.1080/0954898X.2024.2393746.
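As an illustration of the two-stage hierarchy described in this abstract (fault detection first, then type classification and location regression on the same raw window), the dispatch logic can be sketched in Python. The stub models, threshold, and labels below are invented for the example and stand in for the paper's trained transformer networks:

```python
# Hierarchical dispatch sketch of an HDLA-style pipeline. The detector,
# type_classifier, and locator are hypothetical stand-ins, not the
# authors' trained transformer models.

def diagnose(window, detector, type_classifier, locator):
    """Stage 1: fault detection (FD); Stage 2: type (FTC) + location (FLP),
    invoked only when a fault is detected."""
    if not detector(window):               # FD: binary decision on raw samples
        return {"fault": False}
    return {
        "fault": True,
        "type": type_classifier(window),   # FTC: e.g. "AG", "BC", "ABCG"
        "location_km": locator(window),    # FLP: regression output
    }

# Toy stand-ins: flag a fault when any phase current exceeds a threshold.
window = [[1.0, 1.1, 0.9], [9.5, 1.0, 1.0]]  # two samples, three phases
result = diagnose(
    window,
    detector=lambda w: any(abs(x) > 5.0 for row in w for x in row),
    type_classifier=lambda w: "AG",
    locator=lambda w: 42.7,
)
print(result)
```

The hierarchical structure means the heavier classification and regression stages run only on windows the detector flags, which is one way such a design keeps inference cheap.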
In cloud computing (CC), task scheduling allocates tasks to the most suitable resources for execution. This article proposes a task-scheduling model utilizing multi-objective optimization and a deep learning (DL) model. Initially, multi-objective task scheduling is carried out for the incoming user tasks utilizing the proposed hybrid fractional flamingo beetle optimization (FFBO), which is formed by integrating dung beetle optimization (DBO), the flamingo search algorithm (FSA), and fractional calculus (FC). Here, the fitness function depends on reliability, cost, predicted energy, and makespan; the predicted energy is forecasted by a deep residual network (DRN). Thereafter, task scheduling is accomplished via DL using the proposed deep feedforward neural network fused long short-term memory (DFNN-LSTM), a combination of DFNN and LSTM. Moreover, when scheduling the workflow, both the task parameters and the virtual machine's (VM) live parameters are taken into consideration. The task parameters are earliest finish time (EFT), earliest start time (EST), task length, task priority, and actual task running time, whereas the VM parameters include memory utilization, bandwidth utilization, capacity, and central processing unit (CPU). The proposed DFNN-LSTM+FFBO model achieved superior makespan, energy, and resource utilization of 0.188, 0.950 J, and 0.238, respectively.
"Deep learning and optimization enabled multi-objective for task scheduling in cloud computing." Dinesh Komarasamy, Siva Malar Ramaganthan, Dharani Molapalayam Kandaswamy, Gokuldhev Mony. Network-Computation in Neural Systems, pp. 1-30. Published 2024-08-20. DOI: 10.1080/0954898X.2024.2391395.
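The abstract lists four objectives (reliability, cost, predicted energy, makespan) feeding one fitness function. One common way to combine them, sketched here purely as an assumption since the paper's exact aggregation and weights are not given in the abstract, is a weighted sum in which the maximized objective enters inverted:

```python
# Hypothetical weighted-sum form of the multi-objective fitness described
# above; the weights and the normalization are illustrative assumptions.

def fitness(reliability, cost, energy, makespan, w=(0.25, 0.25, 0.25, 0.25)):
    # Reliability is to be maximized, so it enters as (1 - reliability);
    # cost, energy, and makespan are minimized directly.
    return w[0] * (1 - reliability) + w[1] * cost + w[2] * energy + w[3] * makespan

# An optimizer such as FFBO would prefer the candidate with the lower score.
a = fitness(reliability=0.99, cost=0.2, energy=0.95, makespan=0.188)
b = fitness(reliability=0.80, cost=0.6, energy=1.20, makespan=0.400)
assert a < b  # the more reliable, cheaper, faster schedule wins
```

In practice each term would be normalized to a common scale before weighting, otherwise the objective with the largest raw magnitude dominates the search.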
Due to the massive growth in Internet of Things (IoT) devices, it is necessary to properly identify and authorize the devices connected to a given network and to protect them against attacks. In this manuscript, IoT Device Type Identification based on a Variational Autoencoder Wasserstein Generative Adversarial Network optimized with the Pelican Optimization Algorithm (IoT-DTI-VAWGAN-POA) is proposed for bolstering IoT security. The proposed technique comprises three phases: data collection, feature extraction, and IoT device type detection. Initially, a real network traffic dataset is gathered from distinct IoT device types, such as baby monitors and security cameras. In the feature extraction phase, the network traffic feature vector comprises packet sizes, mean, variance, and kurtosis, derived by adaptive and concise empirical wavelet transforms. The extracted features are then supplied to the VAWGAN, which identifies the IoT devices as known or unknown. The Pelican Optimization Algorithm (POA) is then used to optimize the weight factors of the VAWGAN for better IoT device type identification. The proposed IoT-DTI-VAWGAN-POA method is implemented in Python, and its proficiency is examined under performance metrics such as accuracy, precision, F-measure, sensitivity, error rate, computational complexity, and ROC. It provides 33.41%, 32.01%, and 31.65% higher accuracy, and 44.78%, 43.24%, and 48.98% lower error rates compared to the existing methods.
"Bolstering IoT security with IoT device type Identification using optimized Variational Autoencoder Wasserstein Generative Adversarial Network." Jothi Shri Sankar, Saravanan Dhatchnamurthy, Anitha Mary X, Keerat Kumar Gupta. Network-Computation in Neural Systems, pp. 278-299. Published 2024-08-01. DOI: 10.1080/0954898X.2024.2304214.
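The statistical part of the feature vector named above (mean, variance, kurtosis over packet sizes) can be computed with the standard library alone; this is a simplified stand-in for the paper's wavelet-derived features, shown only to make the feature definitions concrete:

```python
import statistics

def traffic_features(packet_sizes):
    """Mean, variance, and kurtosis of a packet-size sequence.
    A simplified stand-in for the adaptive empirical-wavelet features
    described above, not the paper's exact extraction."""
    m = statistics.fmean(packet_sizes)
    var = statistics.pvariance(packet_sizes, mu=m)
    # Fourth standardized moment (equals 3.0 for a Gaussian).
    kurt = sum((x - m) ** 4 for x in packet_sizes) / (len(packet_sizes) * var ** 2)
    return m, var, kurt

# A bursty device: mostly small packets with one large transfer.
m, var, kurt = traffic_features([60, 60, 1500, 60, 60, 60])
```

The heavy-tailed packet-size distribution of such a device shows up directly in the kurtosis, which is part of why moment features help separate device types.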
Pub Date: 2024-08-01. Epub Date: 2024-01-15. DOI: 10.1080/0954898X.2023.2299851
Mukesh Kumar Tripathi, Shivendra
This research introduces a machine learning algorithm-based quality estimation and grading system for mangoes. The suggested work is divided into four main parts: pre-processing, neutrosophic model transformation, feature extraction, and grading. The raw images are first pre-processed through five major stages: reading, resizing, noise removal, contrast enhancement via CLAHE, and smoothing via filtering. The pre-processed images are then converted into the neutrosophic domain for more effective mango grading, using a new geometric-mean-based neutrosophic approach for the transformation. Finally, the TSS for the different chilling conditions is predicted by an Improved Deep Belief Network (IDBN), and based on this the grading of the mango is performed automatically, as the model has already been trained for it. Here, the prediction of TSS is carried out under consideration of SSC, firmness, and TAC. A comparison between the proposed and traditional methods across various metrics confirms the efficacy of the approach.
"Improved deep belief network for estimating mango quality indices and grading: A computer vision-based neutrosophic approach." Mukesh Kumar Tripathi, Shivendra. Network-Computation in Neural Systems, pp. 249-277. Published 2024-08-01. DOI: 10.1080/0954898X.2023.2299851.
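The contrast-enhancement stage above uses CLAHE. As a dependency-free sketch of the underlying idea, the following implements plain global histogram equalization; CLAHE adds tiling and clip-limited histograms on top of this, so treat the code as a simplified stand-in rather than the paper's pre-processing:

```python
def equalize(gray, levels=256):
    """Global histogram equalization over a flat list of pixel values.
    A simplified stand-in for the CLAHE stage described above (CLAHE
    additionally tiles the image and clips each local histogram)."""
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    cdf, total = [], 0
    for c in hist:                       # cumulative distribution function
        total += c
        cdf.append(total)
    n = len(gray)
    cdf_min = next(c for c in cdf if c > 0)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 1
    return [round((cdf[v] - cdf_min) * scale) for v in gray]

# A low-contrast strip of pixels is stretched across the full 0-255 range.
out = equalize([100, 100, 101, 102, 103, 103, 104, 105])
```

Stretching the intensity range before feature extraction makes surface defects and ripeness gradients on the fruit more separable downstream.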
Cardiovascular diseases (CVD) represent a significant global health challenge, often remaining undetected until severe cardiac events, such as heart attacks or strokes, occur. In regions like Qatar, research focused on non-invasive CVD identification methods, such as retinal imaging and dual-energy X-ray absorptiometry (DXA), is limited. This study presents a groundbreaking system known as Multi-Modal Artificial Intelligence for Cardiovascular Disease (M2AI-CVD), designed to provide highly accurate predictions of CVD. The M2AI-CVD framework employs a four-fold methodology: First, it rigorously evaluates image quality and processes lower-quality images for further analysis. Subsequently, it uses the Entropy-based Fuzzy C Means (EnFCM) algorithm for precise image segmentation. The Multi-Modal Boltzmann Machine (MMBM) is then employed to extract relevant features from various data modalities, while the Genetic Algorithm (GA) selects the most informative features. Finally, a ZFNet Convolutional Neural Network (ZFNetCNN) classifies images, effectively distinguishing between CVD and Non-CVD cases. The research's culmination, tested across five distinct datasets, yields outstanding results, with an accuracy of 95.89%, sensitivity of 96.89%, and specificity of 98.7%. This multi-modal AI approach offers a promising solution for the accurate and early detection of cardiovascular diseases, significantly improving the prospects of timely intervention and improved patient outcomes in the realm of cardiovascular health.
"M2AI-CVD: Multi-modal AI approach cardiovascular risk prediction system using fundus images." Premalatha Gurumurthy, Manjunathan Alagarsamy, Sangeetha Kuppusamy, Niranjana Chitra Ponnusamy. Network-Computation in Neural Systems, pp. 319-346. Published 2024-08-01. DOI: 10.1080/0954898X.2024.2306988.
Pub Date: 2024-08-01. Epub Date: 2024-01-11. DOI: 10.1080/0954898X.2023.2296115
Isaac Chairez, Alejandro Garcia-Gonzalez, Alberto Luviano-Juarez
This paper presents a non-parametric identification scheme for a class of uncertain switched nonlinear systems based on continuous-time neural networks. This scheme is based on a continuous neural network identifier. This adaptive identifier guaranteed the convergence of the identification errors to a small vicinity of the origin. The convergence of the identification error was determined by the Lyapunov theory supported by a practical stability variation for switched systems. The same stability analysis generated the learning laws that adjust the identifier structure. The upper bound of the convergence region was characterized in terms of uncertainties and noises affecting the switched system. A second finite-time convergence learning law was also developed to describe an alternative way of forcing the identification error's stability. The study presented in this paper described a formal technique for analysing the application of adaptive identifiers based on continuous neural networks for uncertain switched systems. The identifier was tested for two basic problems: a simple mechanical system and a switched representation of the human gait model. In both cases, accurate results for the identification problem were achieved.
"State identification for a class of uncertain switched systems by differential neural networks." Isaac Chairez, Alejandro Garcia-Gonzalez, Alberto Luviano-Juarez. Network-Computation in Neural Systems, pp. 213-248. Published 2024-08-01. DOI: 10.1080/0954898X.2023.2296115.
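A generic continuous-time (differential) neural identifier of the kind discussed in this abstract can be written as follows; the structure and learning law below are a standard textbook form given as an illustration, not the paper's exact construction (which additionally handles the switching):

$$\dot{\hat{x}}(t) = A\,\hat{x}(t) + W_t\,\sigma\!\big(\hat{x}(t)\big) + u(t), \qquad \Delta(t) = \hat{x}(t) - x(t),$$

with a Lyapunov-derived weight-adaptation law of the form

$$\dot{W}_t = -k\,P\,\Delta(t)\,\sigma^{\top}\!\big(\hat{x}(t)\big),$$

where $\sigma(\cdot)$ is the sigmoidal activation vector, $P = P^{\top} > 0$ solves an associated Riccati-type condition, and $k > 0$ sets the adaptation rate. The Lyapunov analysis then bounds $\|\Delta(t)\|$ by a region whose size depends on the uncertainties and noise, matching the "small vicinity of the origin" convergence claimed above.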
This research introduces an innovative solution addressing the challenge of user authentication in cloud-based systems, emphasizing heightened security and privacy. The proposed system integrates multimodal biometrics, deep learning (Instance-based learning-based DetectNet-(IL-DN), privacy-preserving techniques, and blockchain technology. Motivated by the escalating need for robust authentication methods in the face of evolving cyber threats, the research aims to overcome the struggle between accuracy and user privacy inherent in current authentication methods. The proposed system swiftly and accurately identifies users using multimodal biometric data through IL-DN. To address privacy concerns, advanced techniques are employed to encode biometric data, ensuring user privacy. Additionally, the system utilizes blockchain technology to establish a decentralized, tamper-proof, and transparent authentication system. This is reinforced by smart contracts and an enhanced Proof of Work (PoW) mechanism. The research rigorously evaluates performance metrics, encompassing authentication accuracy, privacy preservation, security, and resource utilization, offering a comprehensive solution for secure and privacy-enhanced user authentication in cloud-based environments. This work significantly contributes to filling the existing research gap in this critical domain.
"Secure and privacy improved cloud user authentication in biometric multimodal multi fusion using blockchain-based lightweight deep instance-based DetectNet." Selvarani Poomalai, Keerthika Venkatesan, Surendran Subbaraj, Sundar Radha. Network-Computation in Neural Systems, pp. 300-318. Published 2024-08-01. DOI: 10.1080/0954898X.2024.2304707.
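The system above rests on a Proof of Work (PoW) mechanism. The basic PoW loop that any such scheme builds on can be sketched in a few lines; the difficulty scheme and record format here are illustrative, and the paper's "enhanced" PoW is not reproduced:

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 2) -> int:
    """Find a nonce whose SHA-256 digest of (data + nonce) starts with
    `difficulty` zero hex digits -- the basic PoW loop underlying
    blockchain consensus (the paper's enhancement is not detailed here)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Sealing a hypothetical authentication record into a block.
nonce = proof_of_work("auth-record:user42", difficulty=2)
```

Each extra zero of difficulty multiplies the expected search work by 16, which is what makes tampering with an already-sealed authentication record computationally expensive.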
Pub Date: 2024-07-31. DOI: 10.1080/0954898X.2024.2383893
Smita Sandeep Mane, Vaibhav E Narawade
The rapid advancements in Agriculture 4.0 have led to the development of continuous monitoring of soil parameters and of crop recommendation based on soil fertility to improve crop yield. Accordingly, soil parameters such as pH, nitrogen, phosphorus, potassium, and soil moisture are exploited for irrigation control, followed by crop recommendation for the agricultural field. Smart irrigation control is performed utilizing the Interactive guide optimizer-Deep Convolutional Neural Network (Interactive guide optimizer-DCNN), which supports decision-making regarding the soil nutrients. Specifically, the Interactive guide optimizer-DCNN classifier replaces the standard ADAM algorithm with the modeled interactive guide optimizer, which exhibits the alertness and guiding characteristics of the nature-inspired dog and cat populations. In addition, the data is down-sampled to reduce redundancy while preserving important information, improving computing performance. The designed model attains an accuracy of 93.11% in predicting the minerals, pH value, and soil moisture, thereby exhibiting a higher recommendation accuracy of 97.12% when the model training is fixed at 90%.
Further, the developed model attained F-score, specificity, sensitivity, and accuracy values of 90.30%, 92.12%, 89.56%, and 86.36% with k-fold 10 in predicting the minerals, revealing the efficacy of the model.
"Internet-of-Things for smart irrigation control and crop recommendation using interactive guide-deep model in Agriculture 4.0 applications." Smita Sandeep Mane, Vaibhav E Narawade. Network-Computation in Neural Systems, pp. 1-33. Published 2024-07-31. DOI: 10.1080/0954898X.2024.2383893.
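The down-sampling step mentioned above is not specified in the abstract; one plausible reading, shown here purely as an assumption, is block-average downsampling of the sensor streams before they reach the model:

```python
def downsample(readings, factor):
    """Block-average downsampling of a sensor stream.
    One plausible interpretation of the 'down-sampled to reduce redundancy'
    step described above; the paper's exact scheme is not specified here."""
    return [
        sum(readings[i:i + factor]) / len(readings[i:i + factor])
        for i in range(0, len(readings), factor)
    ]

# Hypothetical hourly soil-moisture readings reduced 3x before inference.
out = downsample([0.30, 0.31, 0.29, 0.40, 0.42, 0.41], 3)
```

Averaging within each block suppresses sensor noise while keeping the slow-moving trend that matters for irrigation decisions, which matches the stated goal of reducing redundancy without losing important information.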