Vision-based gait analysis to detect Parkinson’s disease using hybrid Harris hawks and Arithmetic optimization algorithm with Random Forest classifier
Sankara Rao Palla, Priyadarsan Parida, Gupteswar Sahu
Pub Date: 2024-09-17, DOI: 10.1007/s13198-024-02508-3
Parkinson’s disease (PD) is the second most prevalent long-term progressive neurodegenerative disease after Alzheimer’s. Individuals with PD experience tremors, rigidity, and difficulty maintaining balance and coordinating movement. Symptoms typically manifest gradually and worsen over time, and as the condition progresses individuals may have difficulty with both movement and verbal communication. Gait analysis is regarded as one of the most important approaches for identifying and evaluating PD and for selecting the most effective treatment, yet choosing the optimal gait features for detecting PD is a challenging task. Metaheuristic algorithms are a class of methods that can offer practical solutions to such problems across many fields. In this study, we present a robust hybrid Harris Hawks and Arithmetic Optimization algorithm (Hybrid HH-AO Algorithm) combined with a Random Forest (RF) classifier to select the optimal gait features and classify normal and abnormal individuals. The proposed approach is evaluated on the benchmark INIT Gait database and achieves an accuracy of 98.12%, sensitivity of 99.26%, specificity of 92.00%, precision of 98.53%, and F1-score of 98.89% using the RF classifier on the Gradient Gait Energy Image (GGEI) template. The experimental results show that the proposed method can accurately distinguish the gait patterns of PD patients from those of healthy individuals with a high classification rate.
{"title":"Vision-based gait analysis to detect Parkinson’s disease using hybrid Harris hawks and Arithmetic optimization algorithm with Random Forest classifier","authors":"Sankara Rao Palla, Priyadarsan Parida, Gupteswar Sahu","doi":"10.1007/s13198-024-02508-3","DOIUrl":"https://doi.org/10.1007/s13198-024-02508-3","url":null,"abstract":"<p>Parkinson’s disease (PD) is the second most prevalent long-term progressive neurodegenerative disease after Alzheimer’s. Individuals with PD experience tremors, rigidity, difficulty maintaining balance, and coordination of motion. Typically, the symptoms manifest gradually and worsen over time. As the condition progresses, individuals may experience difficulty in both movement and verbal communication. In order to employ the most effective treatment, gait analysis is regarded as one of the most important approaches to identifying and evaluating the presence of PD. Therefore, selecting the most optimal gait features for the purpose of detecting PD is a challenging endeavor. In today’s computing environment, several strategies are required to solve various challenges. Metaheuristic algorithms represent a category of methodologies that possess the ability to offer pragmatic resolutions to such challenges in various fields. In this study, we present a robust hybrid Harris Hawks and Arithmetic optimization algorithm (Hybrid HH-AO Algorithm) with a Random Forest (RF) classifier to choose the optimal gait features and classify normal and abnormal individuals. The proposed approach has been evaluated on the benchmark INIT Gait database. The proposed approach achieves a better accuracy of 98.12%, sensitivity of 99.26%, specificity of 92.00%, precision of 98.53%, and F1-score of 98.89% using an RF classifier on the Gradient Gait Energy Image (GGEI) template. The experimental results show that our proposed method can accurately distinguish PD patients’ gait patterns from healthy people with a high classification rate.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"21 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zero crossing point detection in a distorted sinusoidal signal using random forest classifier
Venkataramana Veeramsetty, Pravallika Jadhav, Eslavath Ramesh, Srividya Srinivasula
Pub Date: 2024-09-16, DOI: 10.1007/s13198-024-02484-8
The identification of zero-crossing points in a sinusoidal signal is critical in a variety of electrical applications, including the protection of power system components and the design of controllers. In this article, 96 datasets are generated from distorted sinusoidal waveforms using MATLAB, which produces waves with varying amounts of noise and harmonics. A random forest model is then used to estimate the zero-crossing point in a distorted waveform from input characteristics such as the slope, intercept, correlation, and RMSE. The random forest model was developed and evaluated on the Google Colab platform. According to the simulation data, the random forest model predicts the zero-crossing point more accurately than other models such as logistic regression and the decision tree classifier.
{"title":"Zero crossing point detection in a distorted sinusoidal signal using random forest classifier","authors":"Venkataramana Veeramsetty, Pravallika Jadhav, Eslavath Ramesh, Srividya Srinivasula","doi":"10.1007/s13198-024-02484-8","DOIUrl":"https://doi.org/10.1007/s13198-024-02484-8","url":null,"abstract":"<p>The identification of zero-crossing points in a sinusoidal signal is critical in a variety of electrical applications, including protection of power system components and designing of controllers. In this article, 96 datasets are generated from a deformed sinusoidal waveforms using MATLAB. MATLAB generates deformed sinusoidal waves with varying amounts of noise and harmonics. In this study, a random forest model is utilized to estimate the zero crossing point in a deformed waveform using input characteristics such as the slope, intercept, correlation, and RMSE. The random forest model was developed and evaluated in the Google Colab platform. According to simulation data, the model based on random forest predicts the zero-crossing point more accurately than other models such as logistic regression and decision tree classifier.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"6 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142247298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FL-XGBTC: federated learning inspired with XG-boost tuned classifier for YouTube spam content detection
Vandana Sharma, Anurag Sinha, Ahmed Alkhayyat, Ankit Agarwal, Peddi Nikitha, Sable Ramkumar, Tripti Rathee, Mopuru Bhargavi, Nitish Kumar
Pub Date: 2024-09-14, DOI: 10.1007/s13198-024-02502-9
The problem of spam content in YouTube comments is an ongoing issue, and detecting such content is critical to maintaining the quality of the user experience on the platform. In this study, we propose a Federated Learning Inspired XG-Boost Tuned Classifier (FL-XGBTC) for YouTube spam content detection. The proposed model leverages the advantages of federated learning, which enables a model to be trained collaboratively across multiple devices without sharing raw data. The FL-XGBTC model is based on the XGBoost algorithm, a powerful and widely used ensemble learning algorithm for classification tasks. The model was trained on a large and diverse dataset of YouTube comments that includes both spam and non-spam comments. The results demonstrate that the FL-XGBTC model achieves a high level of accuracy in detecting spam content in YouTube comments, outperforming several baseline models. Additionally, the proposed model preserves user privacy, a critical consideration in modern machine-learning applications. Overall, the proposed Federated Learning Inspired XG-Boost Tuned Classifier provides a promising solution for YouTube spam content detection that leverages the benefits of federated learning and ensemble learning algorithms. The major contribution of this work is a framework for a distributed federated classifier for the multiscale classification of YouTube spam comments using ensemble learning.
{"title":"FL-XGBTC: federated learning inspired with XG-boost tuned classifier for YouTube spam content detection","authors":"Vandana Sharma, Anurag Sinha, Ahmed Alkhayyat, Ankit Agarwal, Peddi Nikitha, Sable Ramkumar, Tripti Rathee, Mopuru Bhargavi, Nitish Kumar","doi":"10.1007/s13198-024-02502-9","DOIUrl":"https://doi.org/10.1007/s13198-024-02502-9","url":null,"abstract":"<p>The problem of spam content in YouTube comments is an ongoing issue, and detecting such content is a critical task to maintain the quality of user experience on the platform. In this study, we propose a Federated Learning Inspired XG-Boost Tuned Classifier, FL-XGBTC, for YouTube spam content detection. The proposed model leverages the advantages of federated learning, which enables the training of a model collaboratively across multiple devices without sharing raw data. The FL-XGBTC model is based on the XGBoost algorithm, which is a powerful and widely used ensemble learning algorithm for classification tasks. The proposed model was trained on a large and diverse dataset of YouTube comments, which includes both spam and non-spam comments. The results demonstrate that the FL-XGBTC model achieved a high level of accuracy in detecting spam content in YouTube comments, outperforming several baseline models. Additionally, the proposed model provides the benefit of preserving user privacy, which is a critical consideration in modern machine-learning applications. Overall, the proposed Federated Learning Inspired XG-Boost Tuned Classifier provides a promising solution for YouTube spam content detection that leverages the benefits of federated learning and ensemble learning algorithms. The major contribution of this work is to demonstrate and propose a framework for showing a distributed federated classifier for the multiscale classification of youtube spam comments using the Ensemble learning method.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"18 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142247370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A generalized product adoption model under random marketing conditions
Pub Date: 2024-09-11, DOI: 10.1007/s13198-024-02499-1
Shiva, Neetu Gupta, Anu G. Aggarwal
In marketing research, diffusion models are extensively utilized to predict the trend of new product adoption over time. These models are categorized according to their deterministic or stochastic characteristics. Deterministic models disregard the stochasticity of the adoption rate arising from environmental and internal factors; we address this limitation by proposing a generalized innovation diffusion model that accounts for such uncertainties. We validate our approach using the particle swarm optimization (PSO) technique on actual sales data from technological products. Our findings suggest that the proposed model outperforms existing diffusion models in forecasting accuracy.
{"title":"A generalized product adoption model under random marketing conditions","authors":"Shiva, Neetu Gupta, Anu G. Aggarwal","doi":"10.1007/s13198-024-02499-1","DOIUrl":"https://doi.org/10.1007/s13198-024-02499-1","url":null,"abstract":"<p>In marketing research, diffusion models are extensively utilized to predict the trend of new product adoption over time. These models are categorized based on their deterministic or stochastic characteristics. While deterministic models disregard the stochasticity of the adoption rate influenced by environmental and internal factors, we aim to address this limitation by proposing a generalized innovation diffusion model that accounts for such uncertainties. We validate our approach using the particle swarm optimization (PSO) technique on actual sales data from technological products. Our findings suggest that the proposed model outperforms existing diffusion models in forecasting accuracy.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"2 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing e-learning platforms in higher education with reference to student satisfaction: a PLS-SEM approach
Pub Date: 2024-09-10, DOI: 10.1007/s13198-024-02497-3
Harendra Singh, Vikrant Vikram Singh, Aditya Kumar Gupta, P. K. Kapur
In the wake of the digital revolution transforming the landscape of higher education, e-learning has emerged as a pivotal model for knowledge dissemination, reshaping traditional pedagogical methodologies and fostering an unprecedented transition to virtual learning environments. This transformative shift, necessitated by global crises and the rapid evolution of technology, has spotlighted the urgent need to evaluate and enhance the effectiveness and user satisfaction of online learning platforms. Particularly in the context of Indian higher education, where the demographic expanse and diverse educational needs present unique challenges and opportunities, understanding the drivers of student satisfaction in e-learning is paramount. This empirical investigation explores the factors influencing students’ satisfaction with online education in Indian universities and higher education institutions. Data were collected from 460 postgraduates and undergraduates across 30 institutions offering programs in management, engineering, and commerce. Utilizing Structural Equation Modeling, the study identified key variables impacting learner satisfaction: learner inspiration and motivation, potential obstacles to e-learning, group and professor interaction, and the use of technology (including AI and other tools) in e-learning. Results indicate that potential obstacles to e-learning and the integration of technology had the most significant impact on student satisfaction, emphasizing the importance of overcoming barriers and leveraging technology effectively in e-learning environments. This study offers insights for higher education institutions seeking to enhance virtual learning experiences and underscores the imperative of addressing technological challenges to ensure sustained student satisfaction.
{"title":"Assessing e-learning platforms in higher education with reference to student satisfaction: a PLS-SEM approach","authors":"Harendra Singh, Vikrant Vikram Singh, Aditya Kumar Gupta, P. K. Kapur","doi":"10.1007/s13198-024-02497-3","DOIUrl":"https://doi.org/10.1007/s13198-024-02497-3","url":null,"abstract":"<p>In the wake of the digital revolution transforming the landscape of higher education, e-learning has emerged as a pivotal model for knowledge dissemination, reshaping traditional pedagogical methodologies and fostering an unprecedented transition to virtual learning environments. This transformative shift, necessitated by global crises and the rapid evolution of technology, has spotlighted the urgency to evaluate and enhance the effectiveness and user satisfaction of online learning platforms. Particularly in the context of Indian higher education, where the demographic expanse and diverse educational needs present unique challenges and opportunities, understanding the drivers of student satisfaction in e-learning is paramount. This empirical investigation explores the factors influencing students’ satisfaction with online education in Indian universities and higher education institutions. Data were collected from 460 postgraduates and undergraduates across 30 institutions offering programs in management, engineering, and commerce. Utilizing Structural Equation Modeling, the study identified key variables impacting learner satisfaction: learner inspiration and motivation, potential obstacles to e-learning, group and professor interaction, and the use of technology (including AI and other tools) in e-learning. Results indicate that potential obstacles to e-learning and the integration of technology had the most significant impact on student satisfaction, emphasizing the importance of overcoming barriers and leveraging technology effectively in e-learning environments. This study offers insights for higher education institutions seeking to enhance virtual learning experiences and underscores the imperative of addressing technological challenges to ensure sustained student satisfaction.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"30 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WON: A hypothetical multi-hop ad-hoc wireless ultra-large scale worldwide one network
Pub Date: 2024-09-06, DOI: 10.1007/s13198-024-02442-4
Raghuram Shivram, B. G. Prasad, S. Vishwa Kiran
This research explores the concept of the Worldwide One Network (WON), a hypothetical ultra-large-scale ad-hoc wireless network characterized by its non-hierarchical, open, scalable, homogeneous, and autopoietic nature. The primary objectives are to address challenges in network formation, unique addressing of individual nodes, and network management. This paper proposes a novel addressing mechanism named ‘Cubid’, which uses geo-coordinates at 1 m resolution as the primary identifier for network nodes and aims for at least 512 unique node addresses per cubic meter of space on Earth. A unique three-dimensional address space, received-signal-strength-based trilateration for network formation, address negotiation, and the use of Cubid as a MAC address to bypass traditional Layer 2–3 Internet Protocol activities are a few of the differentiating aspects of this research work. Preliminary tests of this hypothetical network demonstrate the practical viability of identifying a node’s geographical coordinates to an accuracy of 3 m without GPS devices, and the corresponding simulations yield an average frame delivery time of 27 ms over a 100-hop network path with varying hop lengths. These findings indicate that WON could serve as a viable alternative communication network, especially when substantial infrastructure-based networks, such as the Internet, fail.
{"title":"WON: A hypothetical multi-hop ad-hoc wireless ultra-large scale worldwide one network","authors":"Raghuram Shivram, B. G. Prasad, S. Vishwa Kiran","doi":"10.1007/s13198-024-02442-4","DOIUrl":"https://doi.org/10.1007/s13198-024-02442-4","url":null,"abstract":"<p>This research explores the concept of Worldwide one network (WON), a hypothetical ultra-large scale ad-hoc wireless network characterized by its non-hierarchical, open, scalable, homogeneous, and autopoiesis nature. The primary objectives are to address challenges in network formation, individual node unique addressing, and network management. This paper proposes a novel addressing mechanism named ‘Cubid’, which utilizes geo-coordinates as the primary identifier for network nodes with 1 m resolution and aims for at least 512 unique node addresses per cubic meter space on Earth. Unique three-dimensional address space, received signal strength based trilateration for network formation, address negotiation, and the use of Cubid as a MAC address to bypass traditional Layer 2–3 Internet Protocol activities are few of the differentiator aspects involved in this research work. Preliminary tests of this hypothetical network yield in practical viability of identifying network node’s geographical coordinates with an accuracy of 3 m without GPS devices, and corresponding simulations results in an average frame delivery time of 27 ms over a 100-hop, varying hop length network path. These findings indicate that WON could serve as a viable alternative communication network, especially when substantial infrastructure-based networks, such as the Internet fails.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"9 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning for fault diagnosis of monoblock centrifugal pumps: a Hilbert–Huang transform approach
Pub Date: 2024-09-04, DOI: 10.1007/s13198-024-02447-z
C. V. Prasshanth, S. Naveen Venkatesh, Tapan K. Mahanta, N. R. Sakthivel, V. Sugumaran
Fault detection in monoblock centrifugal pumps plays an important role in ensuring the safe and efficient use of mechanical equipment. This study proposes a deep learning-based method using transfer learning for fault detection in monoblock centrifugal pumps. A MEMS sensor was used to acquire vibration signals from the experimental setup, and these signals were subsequently processed and stored as Hilbert–Huang transform images. Fault diagnosis was performed on the acquired data by leveraging 15 pretrained networks: InceptionResNetV2, DenseNet-201, GoogLeNet, ResNet-50, VGG-19, Xception, VGG-16, EfficientNetb0, ShuffleNet, InceptionV3, ResNet101, MobileNet-v2, AlexNet, NasNetmobile, and ResNet-18. To achieve high classification accuracy, various hyperparameters, including batch size, learning rate, train-test split ratio, and optimizer, were systematically varied and optimized, with the aim of identifying the most suitable configuration for the deep learning model. Leveraging transfer learning and preprocessing the acquired vibration signals into Hilbert–Huang transform images significantly improved the classification accuracy, and optimizing hyperparameters through extensive experimentation proved instrumental in elevating the model’s performance. Following thorough trials and meticulous tuning, the GoogLeNet architecture emerged as the optimal setup, attaining a peak classification accuracy of 100.00% while keeping the computation time to 80 s.
{"title":"Deep learning for fault diagnosis of monoblock centrifugal pumps: a Hilbert–Huang transform approach","authors":"C. V. Prasshanth, S. Naveen Venkatesh, Tapan K. Mahanta, N. R. Sakthivel, V. Sugumaran","doi":"10.1007/s13198-024-02447-z","DOIUrl":"https://doi.org/10.1007/s13198-024-02447-z","url":null,"abstract":"<p>Fault detection in monoblock centrifugal pumps plays an important role in ensuring the safe and efficient use of mechanical equipment. This study proposes a deep learning-based method using transfer learning for fault detection in monoblock centrifugal pumps. A MEMS sensor was used to acquire vibration signals from the experimental setup and these signals were subsequently processed and stored as Hilbert-Huang transform images. By leveraging 15 pretrained networks such as InceptionResNetV2, DenseNet-201, GoogLeNet, ResNet-50, VGG-19, Xception, VGG-16, EfficientNetb0, ShuffleNet, InceptionV3, ResNet101, MobileNet-v2, AlexNet, NasNetmobile and ResNet-18, fault diagnosis was performed on the acquired data. To achieve high classification accuracy, various hyperparameters including, batch size, learning rate, train-test split ratio and optimizer were systematically varied and optimized. The aim was to identify the most suitable configuration for the deep learning model. By leveraging transfer learning and preprocessing the acquired vibration signals into Hilbert–Huang transform images, the classification accuracy was significantly improved. Optimizing hyperparameters through extensive experimentation proved instrumental in elevating the models performance. Following thorough trials and meticulous tuning, the GoogleNet architecture emerged as the optimal setup, attaining a peak classification accuracy of 100.00%, all while upholding computational efficiency at 80 s.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"1 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identification of rice crop diseases using gray level co-occurrence matrix (GLCM) and Neuro-GA classifier
Pub Date: 2024-08-30, DOI: 10.1007/s13198-024-02486-6
Shashank Chaudhary, Upendra kumar
The timely detection and identification of crop diseases is a crucial aspect of the agricultural sector and contributes significantly to overall plant productivity. One of the most important factors in determining a plant’s susceptibility to a particular disease is the visual characteristics of the affected plant. The growing popularity of automation and the availability of efficient techniques for disease identification have led to the development of novel methods and impactful technologies for automated disease detection, whereas traditional methods have not been able to provide researchers with sufficiently accurate results. The model proposed in this work can identify rice crop diseases without relying on subjective data and, as the results show, has several advantages over traditional approaches: it can improve the efficiency of the process and aid early detection. Machine learning methods enable real-time automated decision support systems and can help improve crop growth, productivity, and quality. This work introduces a new, enhanced method, Neuro-GA, which combines an artificial neural network (ANN) with a genetic algorithm (GA) and is claimed to be more powerful and accurate than traditional methods. In the initial stage of the analysis, the data were preprocessed; features were then extracted using the gray-level co-occurrence matrix (GLCM), and the extracted features were passed to the Neuro-GA classifier. The digital image processing (DIP) techniques used in this study for rendering visual images, together with the Neuro-GA classifier, achieved accuracy levels of 90% and above. The validated technique allows automated monitoring of various aspects of crop production and farming with promising efficiency, so this approach can be highly effective for monitoring agricultural production and thereby reducing the waste associated with crop damage.
{"title":"Identification of rice crop diseases using gray level co-occurrence matrix (GLCM) and Neuro-GA classifier","authors":"Shashank Chaudhary, Upendra kumar","doi":"10.1007/s13198-024-02486-6","DOIUrl":"https://doi.org/10.1007/s13198-024-02486-6","url":null,"abstract":"<p>The timely detection and identification of crop diseases is a crucial aspect of the agricultural sector. It contributes significantly to the by and large productivity of the plant. One of the most crucial factors that we need to consider while determining a plant’s susceptibility to a particular disease is the visual characteristics of the affected plant. The increasing popularity of automation and availability of efficient techniques for disease identification has led to the development of novel methods and engraved impactful technologies in field of automated disease detection. The traditional methods have not been able to provide the researchers with the most accurate results. The proposed model in this work can identify the rice crop disease without relying on subjective data and have many advantages over traditional approaches as evident from the results derived. It has the potential to improve the efficiency of the process and aid in early detection. Machine learning method presents real-time automated decision support systems and can help improve crop or plant growth productivity and quality. This work aims to introduce a new and enhanced method as Neuro-GA, which is a combination of both the artificial neural network (ANN) and the genetic algorithm (GA). It has been claimed that it is more powerful and accurate than the traditional methods. The pioneer and nascent stages of this analysis includes preprocessing of the data was carried out. The features were then extracted using Gray-level co-occurrence matrix (GLCM) and subsequently the finally extracted features were cascaded to the Neuro-GA classifier. The digital image processing (DIP) techniques used in this study for rendering visual images along with Neuro-GA classifier resulted in skyrocket accuracy level of 90% and above. The technique validated in this study has allowed the automated monitoring of various aspects of crop production and farming and an omnipotent promising efficiency hence this approach can be magnanimously effective in monitoring agricultural production and thereby plummeting waste allied with crop damage.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"35 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing a security enhancement for healthcare applications using blockchain-based firefly-optimized elliptic curve digital signature algorithm
Pub Date: 2024-08-29, DOI: 10.1007/s13198-024-02444-2
K. Britto Alex, K. Selvan
The growing digitalization of healthcare systems demands appropriate security measures to protect sensitive patient data and the accuracy of medical records. In this paper, a blockchain-based security enhancement scheme customized for healthcare applications is proposed; blockchain is a secure, distributed ledger technology. Initially, the healthcare dataset is gathered, and standardization is used to achieve effective data partitioning; for image de-noising and quality improvement, blur removal is first applied to the raw samples using standardization. To improve data transmission security, this study builds on the fundamentals of contemporary cryptography and introduces a blockchain-based encryption scheme that integrates the Firefly-optimized Elliptic Curve Digital Signature Algorithm (FOECDSA) with lightweight advanced decryption. FOECDSA improves digital signature efficiency by optimizing elliptic curve parameters using the firefly method. Its use in healthcare systems enhances security and computational efficiency, guaranteeing strong protection of sensitive patient data in blockchain-based environments. In this study, Microsoft SQL Server is used to manage and store structured data. The simulation results, measured in terms of encryption time (22.27), decryption time (22.76), execution time (47.35), and security level (99), demonstrate that the suggested method outperforms existing methods. The enhanced encryption methodology is assessed and tested using standard parameters, and the suggested approach is compared with current procedures.
{"title":"Developing a security enhancement for healthcare applications using blockchain-based firefly-optimized elliptic curve digital signature algorithm","authors":"K. Britto Alex, K. Selvan","doi":"10.1007/s13198-024-02444-2","DOIUrl":"https://doi.org/10.1007/s13198-024-02444-2","url":null,"abstract":"<p>Presently the growing digitalization of healthcare systems implies appropriate safety measures that are necessary to protect sensitive patient data and the accuracy of medical records. During this paper, an individual blockchain-based security upgrade plan customized for healthcare applications is proposed. The blockchain is a distributed ledger technology that is secure and distributed. Initially, we gathered the healthcare dataset from standardization was used to create effective data partitioning and for image de-noising and quality improvement, blur-removal is first accomplished in raw samples using standardization. To suggest an encryption scheme that relies on blockchain technology to improve data transmission security, this study demonstrates the fundamentals of contemporary cryptography by introducing a revolutionary technique that enhances the integration of the Firefly optimized Elliptic Curve Digital Signature Algorithm (FOECDSA) with lightweight advanced decryption. FOECDSA improves digital signature efficiency by optimizing elliptic curve parameters using the firefly method. Its use in healthcare systems enhances security and computational efficiency, guaranteeing strong protection of sensitive patient data in blockchain-based environments. In this study, Microsoft’s SQL server is used to manage and store structured data. The simulated results demonstrated that the suggested method’s enhanced identification outcomes, as measured by Encryption Time (22.27), decryption Time (22.76), Execution time (47.35), and Security Level (99) metrics, are compared to the existing methods. The enhanced encryption methodology is assessed and tested using particular standard parameters, and the suggested approach is contrasted with the current procedures.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"26 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The detection method of continuous outliers in complex network data streams based on C-LSTM
Pub Date: 2024-08-29, DOI: 10.1007/s13198-024-02475-9
Zhinian Shu, Xiaorong Li
To enhance the effective detection of abnormal points in complex network data flows, to perform multi-dimensional dynamic detection, and to establish a more stable and reliable anomaly detection method for data streams, a C-LSTM-based method for detecting continuous outliers in complex network data streams is proposed. The features of continuous outliers in complex network data streams are extracted, and a data anomaly detection model is built from these features. The input features of continuous outliers are transformed, both qualitatively and quantitatively, into multi-scale anomalies, and outlier detection based on C-LSTM is realized. The experimental results show that the maximum sensitivity of the proposed method reaches 42% and the average routing overhead is less than 24 Mb. For the data in every scenario tested, the detection accuracy is higher than 0.92, the recall is higher than 0.81, and the F1 value is higher than 0.62. Although noise may cause some misjudgments or omissions, the overall detection performance is good.
{"title":"The detection method of continuous outliers in complex network data streams based on C-LSTM","authors":"Zhinian Shu, Xiaorong Li","doi":"10.1007/s13198-024-02475-9","DOIUrl":"https://doi.org/10.1007/s13198-024-02475-9","url":null,"abstract":"<p>To enhance the effective detection of abnormal points in complex network data flow, perform multi-dimensional dynamic detection, and establish a more stable and reliable data flow abnormal detection method, a continuous abnormal point detection method for complex network data flow based on C-LSTM is proposed. The features of continuous outliers in complex network data streams are extracted, and a data anomaly detection model is established according to the features. The input features of continuous outliers in complex network data streams are qualitatively and quantitatively transformed into multi-scale anomalies, and the outlier detection based on C-LSTM is realized. The experimental results show that the maximum sensitivity of the proposed method reaches 42%, and the average routing overhead is less than 24 Mb. Regardless of the data in any scenario, the detection accuracy is higher than 0.92, the recall is higher than 0.81, and the F1 value is higher than 0.62. Although there may be some misjudgments or omissions due to noise, the overall detection performance is good.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":"25 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}