Pub Date: 2024-09-11 | DOI: 10.1007/s13198-024-02499-1
Shiva, Neetu Gupta, Anu G. Aggarwal
In marketing research, diffusion models are extensively used to predict the trend of new product adoption over time. These models are categorized by their deterministic or stochastic character. Deterministic models disregard the stochasticity of the adoption rate induced by environmental and internal factors; we address this limitation by proposing a generalized innovation diffusion model that accounts for such uncertainties. We validate the approach by calibrating the model with the particle swarm optimization (PSO) technique on actual sales data from technological products. Our findings suggest that the proposed model outperforms existing diffusion models in forecasting accuracy.
Title: A generalized product adoption model under random marketing conditions
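As an illustrative aside, calibrating a diffusion model to sales data with PSO might look like the minimal sketch below. The paper's generalized stochastic model is not specified in the abstract, so this sketch substitutes the classic deterministic Bass form; the parameter bounds and swarm settings are assumptions, and the PSO is hand-rolled rather than the authors' implementation.

```python
import numpy as np

def bass_cumulative(t, p, q, m):
    """Cumulative adopters under the classic Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

def pso_fit(t, sales, n_particles=40, n_iter=300, seed=0):
    """Minimal particle swarm search for (p, q, m) minimising squared error."""
    rng = np.random.default_rng(seed)
    lo = np.array([1e-4, 1e-3, sales.max()])       # illustrative bounds
    hi = np.array([0.1, 1.0, 5.0 * sales.max()])
    x = rng.uniform(lo, hi, size=(n_particles, 3))
    v = np.zeros_like(x)

    def cost(params):
        p, q, m = params
        return np.sum((bass_cumulative(t, p, q, m) - sales) ** 2)

    pbest = x.copy()
    pbest_f = np.array([cost(xi) for xi in x])
    g = pbest[pbest_f.argmin()].copy()              # global best position
    g_f = pbest_f.min()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 3))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(xi) for xi in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if f.min() < g_f:
            g, g_f = x[f.argmin()].copy(), f.min()
    return g, g_f
```

On synthetic Bass data the swarm recovers the generating curve closely; on real sales data one would compare the fitted curve's forecasting error against competing diffusion models, as the paper does.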
Pub Date: 2024-09-10 | DOI: 10.1007/s13198-024-02497-3
Harendra Singh, Vikrant Vikram Singh, Aditya Kumar Gupta, P. K. Kapur
In the wake of the digital revolution transforming the landscape of higher education, e-learning has emerged as a pivotal model for knowledge dissemination, reshaping traditional pedagogical methodologies and fostering an unprecedented transition to virtual learning environments. This transformative shift, necessitated by global crises and the rapid evolution of technology, has spotlighted the urgency to evaluate and enhance the effectiveness and user satisfaction of online learning platforms. Particularly in the context of Indian higher education, where the demographic expanse and diverse educational needs present unique challenges and opportunities, understanding the drivers of student satisfaction in e-learning is paramount. This empirical investigation explores the factors influencing students’ satisfaction with online education in Indian universities and higher education institutions. Data were collected from 460 postgraduates and undergraduates across 30 institutions offering programs in management, engineering, and commerce. Utilizing Structural Equation Modeling, the study identified key variables impacting learner satisfaction: learner inspiration and motivation, potential obstacles to e-learning, group and professor interaction, and the use of technology (including AI and other tools) in e-learning. Results indicate that potential obstacles to e-learning and the integration of technology had the most significant impact on student satisfaction, emphasizing the importance of overcoming barriers and leveraging technology effectively in e-learning environments. This study offers insights for higher education institutions seeking to enhance virtual learning experiences and underscores the imperative of addressing technological challenges to ensure sustained student satisfaction.
Title: Assessing e-learning platforms in higher education with reference to student satisfaction: a PLS-SEM approach
Pub Date: 2024-09-06 | DOI: 10.1007/s13198-024-02442-4
Raghuram Shivram, B. G. Prasad, S. Vishwa Kiran
This research explores the concept of the Worldwide One Network (WON), a hypothetical ultra-large-scale ad-hoc wireless network characterized by its non-hierarchical, open, scalable, homogeneous, and autopoietic nature. The primary objectives are to address challenges in network formation, unique addressing of individual nodes, and network management. The paper proposes a novel addressing mechanism named 'Cubid', which uses geo-coordinates at 1 m resolution as the primary identifier for network nodes and targets at least 512 unique node addresses per cubic meter of space on Earth. A unique three-dimensional address space, received-signal-strength-based trilateration for network formation, address negotiation, and the use of Cubid as a MAC address to bypass traditional Layer 2-3 Internet Protocol activities are some of the differentiating aspects of this work. Preliminary tests of this hypothetical network demonstrate the practical viability of identifying a node's geographical coordinates with an accuracy of 3 m without GPS devices, and the corresponding simulations yield an average frame delivery time of 27 ms over a 100-hop network path with varying hop lengths. These findings indicate that WON could serve as a viable alternative communication network, especially when substantial infrastructure-based networks, such as the Internet, fail.
Title: WON: A hypothetical multi-hop ad-hoc wireless ultra-large scale worldwide one network
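The received-signal-strength trilateration the abstract mentions can be sketched generically as below. This is a standard least-squares formulation under a log-distance path-loss assumption, not the paper's Cubid mechanism; the reference power `p0` and path-loss exponent `n` are illustrative values.

```python
import numpy as np

def rssi_to_distance(rssi, p0=-40.0, n=2.0):
    """Invert the log-distance path-loss model: p0 is the RSSI (dBm) expected
    at 1 m, n the path-loss exponent (both assumed, environment-dependent)."""
    return 10.0 ** ((p0 - rssi) / (10.0 * n))

def trilaterate(anchors, dists):
    """Least-squares 2-D position from >=3 anchor positions and range estimates.

    Subtracting the first range equation from the others linearises
    |z - a_i|^2 = d_i^2 into A z = b, solvable with one lstsq call."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With exact ranges the recovery is exact; with RSSI-derived ranges the residual of the least-squares solve gives a rough confidence measure, which is where the paper's reported 3 m accuracy would come in.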
Pub Date: 2024-09-04 | DOI: 10.1007/s13198-024-02447-z
C. V. Prasshanth, S. Naveen Venkatesh, Tapan K. Mahanta, N. R. Sakthivel, V. Sugumaran
Fault detection in monoblock centrifugal pumps plays an important role in ensuring the safe and efficient use of mechanical equipment. This study proposes a deep-learning-based method using transfer learning for fault detection in monoblock centrifugal pumps. A MEMS sensor was used to acquire vibration signals from the experimental setup, and these signals were subsequently processed and stored as Hilbert-Huang transform images. Fault diagnosis was performed on the acquired data by leveraging 15 pretrained networks: InceptionResNetV2, DenseNet-201, GoogLeNet, ResNet-50, VGG-19, Xception, VGG-16, EfficientNetb0, ShuffleNet, InceptionV3, ResNet101, MobileNet-v2, AlexNet, NasNetmobile, and ResNet-18. To achieve high classification accuracy, hyperparameters including batch size, learning rate, train-test split ratio, and optimizer were systematically varied to identify the most suitable configuration for each model. Preprocessing the vibration signals into Hilbert-Huang transform images and applying transfer learning significantly improved classification accuracy, and extensive hyperparameter tuning proved instrumental in elevating model performance. Following these trials, the GoogLeNet architecture emerged as the optimal setup, attaining a peak classification accuracy of 100.00% while keeping the computation time to 80 s.
Title: Deep learning for fault diagnosis of monoblock centrifugal pumps: a Hilbert-Huang transform approach
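The Hilbert-Huang preprocessing step can be illustrated with the envelope-extraction part alone (a full HHT would also require empirical mode decomposition, which is omitted here). The signal below is synthetic; in the paper the inputs are MEMS vibration recordings rendered into time-frequency images before being fed to the pretrained networks.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000                       # sampling rate, Hz (illustrative)
t = np.arange(fs) / fs          # one second of signal
# Amplitude-modulated "vibration": a 50 Hz machine frequency modulated at
# 3 Hz, mimicking a periodic fault impressing itself on the carrier.
carrier = np.cos(2 * np.pi * 50 * t)
modulation = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)
x = modulation * carrier

# The magnitude of the analytic signal recovers the modulation envelope;
# fault-related low-frequency content becomes visible in this envelope.
envelope = np.abs(hilbert(x))
```

Because both the carrier and modulation frequencies fall on exact FFT bins here, the recovered envelope matches the true modulation to machine precision; real vibration data would show edge effects and noise.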
Pub Date: 2024-08-30 | DOI: 10.1007/s13198-024-02486-6
Shashank Chaudhary, Upendra kumar
The timely detection and identification of crop diseases is a crucial task in the agricultural sector, contributing significantly to overall plant productivity. One of the most important cues for determining a plant's susceptibility to a particular disease is the visual appearance of the affected plant. The increasing adoption of automation and the availability of efficient identification techniques have led to novel, impactful methods for automated disease detection, whereas traditional methods have not provided researchers with sufficiently accurate results. The model proposed in this work identifies rice crop diseases without relying on subjective data, offers several advantages over traditional approaches, and has the potential to improve process efficiency and aid early detection. Machine learning methods enable real-time automated decision support systems and can help improve crop growth, productivity, and quality. This work introduces an enhanced method, Neuro-GA, which combines an artificial neural network (ANN) with a genetic algorithm (GA) and is shown to be more powerful and accurate than traditional methods. In the initial stages of the analysis, the data were preprocessed; features were then extracted using the gray-level co-occurrence matrix (GLCM) and cascaded to the Neuro-GA classifier. The digital image processing (DIP) techniques used in this study, together with the Neuro-GA classifier, achieved accuracy levels of 90% and above. The validated technique enables automated monitoring of various aspects of crop production and farming with promising efficiency, so this approach can be highly effective in monitoring agricultural production and thereby reducing the waste associated with crop damage.
Title: Identification of rice crop diseases using gray level co-occurrence matrix (GLCM) and Neuro-GA classifier
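The GLCM feature-extraction stage can be sketched in plain NumPy as below. The single pixel offset, the level count, and the three Haralick-style statistics are illustrative choices, not necessarily the configuration used in the paper; the resulting feature vector is what would be cascaded to the Neuro-GA classifier.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy),
    normalised to a joint probability distribution."""
    img = np.asarray(img)
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    s = m.sum()
    return m / s if s else m

def haralick_features(p):
    """Contrast, homogeneity and energy computed from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    energy = np.sum(p ** 2)
    return contrast, homogeneity, energy
```

In practice several offsets and angles are computed per leaf image and their statistics concatenated into the classifier's input vector.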
Pub Date: 2024-08-29 | DOI: 10.1007/s13198-024-02444-2
K. Britto Alex, K. Selvan
The growing digitalization of healthcare systems demands appropriate security measures to protect sensitive patient data and the integrity of medical records. In this paper, a blockchain-based security enhancement scheme customized for healthcare applications is proposed; blockchain is a secure, distributed ledger technology. The healthcare dataset is first standardized to enable effective data partitioning, and for image de-noising and quality improvement, blur removal is applied to the raw samples. To provide an encryption scheme that relies on blockchain technology to improve data-transmission security, the study builds on the fundamentals of contemporary cryptography and introduces a technique that integrates the firefly-optimized Elliptic Curve Digital Signature Algorithm (FOECDSA) with lightweight advanced decryption. FOECDSA improves digital-signature efficiency by optimizing elliptic-curve parameters using the firefly method; its use in healthcare systems enhances security and computational efficiency, protecting sensitive patient data in blockchain-based environments. Microsoft SQL Server is used to manage and store structured data. The simulation results, as measured by encryption time (22.27), decryption time (22.76), execution time (47.35), and security level (99), are compared against existing methods.
Title: Developing a security enhancement for healthcare applications using blockchain-based firefly-optimized elliptic curve digital signature algorithm
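The firefly metaheuristic underlying FOECDSA can be illustrated generically. The sketch below minimizes a toy objective; in the paper the fireflies would instead score candidate elliptic-curve parameter choices, an objective not reproduced here, and all swarm settings below are assumptions.

```python
import numpy as np

def firefly_minimise(cost, dim=2, n=25, iters=150,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter
    (lower-cost) ones with distance-decaying attractiveness, plus a
    decaying random step."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, size=(n, dim))
    f = np.array([cost(xi) for xi in x])
    for t in range(iters):
        a = alpha * (0.97 ** t)            # decaying random-step scale
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:            # j is brighter: i moves toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + a * rng.uniform(-0.5, 0.5, dim)
                    f[i] = cost(x[i])
    best = f.argmin()
    return x[best], f[best]
```

Swapping the toy objective for a signing-cost or key-quality measure turns this skeleton into the kind of parameter search the abstract describes.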
Pub Date: 2024-08-29 | DOI: 10.1007/s13198-024-02475-9
Zhinian Shu, Xiaorong Li
To enhance the effective detection of abnormal points in complex network data flows, perform multi-dimensional dynamic detection, and establish a more stable and reliable anomaly-detection method, a continuous outlier detection method for complex network data streams based on C-LSTM is proposed. The features of continuous outliers in complex network data streams are extracted, and a data anomaly detection model is established from these features. The input features of continuous outliers are qualitatively and quantitatively transformed into multi-scale anomalies, and outlier detection based on C-LSTM is realized. The experimental results show that the maximum sensitivity of the proposed method reaches 42%, and the average routing overhead is less than 24 Mb. Across all test scenarios, the detection accuracy is higher than 0.92, the recall is higher than 0.81, and the F1 value is higher than 0.62. Although noise may cause some misjudgments or omissions, the overall detection performance is good.
Title: The detection method of continuous outliers in complex network data streams based on C-LSTM
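The paper's C-LSTM combines convolutional and recurrent layers to learn temporal features; that model is not reproduced here. As a deliberately simple stand-in, the sketch below flags stream points by a rolling z-score, which illustrates only the detection interface (stream in, boolean flags out) that such a detector exposes.

```python
import numpy as np

def rolling_zscore_outliers(stream, window=50, threshold=3.0):
    """Flag points whose deviation from the trailing-window mean exceeds
    `threshold` standard deviations -- a crude stand-in for a learned
    C-LSTM detector, with assumed window and threshold values."""
    stream = np.asarray(stream, dtype=float)
    flags = np.zeros(len(stream), dtype=bool)
    for i in range(window, len(stream)):
        w = stream[i - window:i]
        mu, sigma = w.mean(), w.std()
        if sigma > 0 and abs(stream[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags
```

A learned detector replaces the fixed mean/deviation statistics with features extracted by the convolutional and LSTM layers, which is what lets it track multi-scale and context-dependent anomalies.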
Pub Date: 2024-08-29 | DOI: 10.1007/s13198-024-02472-y
Jyothi Padmaja Koduru, T. Vijay Kumar, Kedar Mallik Mantrala
Owing to the feasibility of economically producing large-scale metal components at very high deposition rates, considerable progress has been made in the study of the wire arc additive manufacturing (WAAM) approach and of the mechanical and microstructural features of fabricated parts. WAAM has been adopted across a wide range of materials and has become a very significant technique for producing large metal components in various manufacturing organizations. Because of its arc-assisted deposition, process cycle time, process stability, and defect monitoring and management are critical for WAAM equipment to be deployed in industry. Major improvements have been made in process development, control systems, comprehensive operation monitoring, material evaluation, path slicing, and programming, yet further improvement is still required. This article therefore gives a detailed review of WAAM systems to facilitate a quick understanding of their current status and future prospects. The stage-wise implementation of WAAM, the metals and alloys used, the effects of process parameters, and methodologies for improving the quality of WAAM components are discussed. The hardware systems and technological parameters used to understand the underlying physical mechanisms are also described. In addition, monitoring systems such as acoustic, optical, thermal, electrical, and multi-sensor sensing are analyzed, and property-characterization techniques are evaluated. Combined additive and subtractive technologies and artificial intelligence techniques for improving manufacturing are also discussed. Finally, possible future research directions are provided for further developments in WAAM.
Title: A review of wire and arc additive manufacturing using different property characterization, challenges and future trends
Pub Date : 2024-08-28DOI: 10.1007/s13198-024-02461-1
Manish Verma, Parma Nand
Android protects user privacy through its permission system and explains permission usage in its privacy disclosures. However, privacy disclosures often fail to predict app behavior accurately, leading to potential exploitation by malicious applications. To address this, we propose the VADER-RF technique, which combines VADER sentiment analysis with Random Forest machine learning to correlate privacy disclosures with app behavior. Our model analyzes privacy disclosure documents using sentiment analysis, extracts permissions from the AndroidManifest.xml file, and performs data-flow analysis of Java files. These features were evaluated with Naive Bayes, SVM, Decision Tree, and Random Forest machine learning models. The Random Forest model demonstrated superior performance, with the highest accuracy (81.6%), precision (85.3%), and recall (89.4%). A Kendall's Tau correlation coefficient of 0.54 indicates that our model is moderately to strongly effective at predicting whether an app is malicious based on the selected features. Sentiment analysis significantly enhanced the performance of all models, underscoring the effectiveness of integrating sentiment analysis with traditional feature sets for advanced malware detection.
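The 0.54 figure above is Kendall's Tau, a rank correlation that counts how often pairs of items are ordered concordantly by two variables (here, the model's output and the ground-truth label). A minimal, dependency-free sketch of the statistic (tau-a, without tie correction; the example scores are hypothetical, not data from the paper):

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / number of pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ordered the same way by x and y
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical example: model risk scores vs. a ground-truth malice measure.
scores = [0.9, 0.2, 0.7, 0.4, 0.1]
truth  = [1.0, 0.3, 0.6, 0.7, 0.2]
print(round(kendall_tau(scores, truth), 2))  # → 0.8 (9 concordant, 1 discordant pair)
```

In practice `scipy.stats.kendalltau` would be used, since it handles ties and returns a p-value as well.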
{"title":"VADER-RF: a novel scheme for protecting user privacy on android devices","authors":"Manish Verma, Parma Nand","doi":"10.1007/s13198-024-02461-1","DOIUrl":"https://doi.org/10.1007/s13198-024-02461-1","url":null,"abstract":"<p>Android protects user privacy through its permission system and explains permission usage in its privacy disclosures. However, privacy disclosures often fail to predict app behavior accurately, leading to potential exploitation by malicious applications. To address this, we propose the VADER-RF technique, which combines VADER sentiment analysis with Random Forest machine learning to correlate privacy disclosures with app behavior. Our model analyzes privacy disclosure documents using sentiment analysis, extracts permissions from the AndroidManifest.xml file, and performs data-flow analysis of Java files. These features were evaluated with Naive Bayes, SVM, Decision Tree, and Random Forest machine learning models. The Random Forest model demonstrated superior performance, with the highest accuracy (81.6%), precision (85.3%), and recall (89.4%). A Kendall's Tau correlation coefficient of 0.54 indicates that our model is moderately to strongly effective at predicting whether an app is malicious based on the selected features. Sentiment analysis significantly enhanced the performance of all models, underscoring the effectiveness of integrating sentiment analysis with traditional feature sets for advanced malware detection.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-28DOI: 10.1007/s13198-024-02485-7
Wasiur Rhmann, Amaan Ishrat
Web services are a novel method of web application development. They allow businesses to adapt to new environments and change quickly according to customer needs. Clients require high-quality web services with minimal response time, strong security, and high availability. With the increasing demand for web services, their rapid introduction into the business environment has strongly influenced web service quality. In the present work, a novel model for web service classification is proposed. Three metaheuristic techniques, the Whale optimization algorithm, Simulated annealing, and Ant colony optimization, are used to select the best subset of features. The imbalanced web service dataset is balanced using SMOTETomek (synthetic minority oversampling + Tomek links). Ensemble AdaBoost and Gradient boosting algorithms are used to build the web service prediction model. The publicly available QWS dataset is used for the experiments. The results of the proposed models are compared with standard machine learning techniques. The Ant colony algorithm performed best for relevant feature selection, and the ensemble AdaBoost and Gradient boosting algorithms outperformed all other machine learning techniques for web service classification.
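SMOTETomek (available in the imbalanced-learn package as `imblearn.combine.SMOTETomek`) combines synthetic minority oversampling with Tomek-link cleaning. As a minimal illustration of the rebalancing idea only, here is plain random duplication of minority samples in pure Python; this is a simplified stand-in, not the actual SMOTETomek algorithm, and the QWS-like data are hypothetical:

```python
import random
from collections import Counter

def oversample_minority(X, y, seed=0):
    """Duplicate randomly chosen minority samples until class counts match."""
    rng = random.Random(seed)
    counts = Counter(y)
    majority = max(counts, key=counts.get)
    minority = min(counts, key=counts.get)
    minority_idx = [i for i, label in enumerate(y) if label == minority]
    X_bal, y_bal = list(X), list(y)
    for _ in range(counts[majority] - counts[minority]):
        i = rng.choice(minority_idx)   # pick an existing minority sample to copy
        X_bal.append(X[i])
        y_bal.append(y[i])
    return X_bal, y_bal

# Hypothetical quality features: 8 "good" services (label 1), 2 "poor" (label 0).
X = [[0.1 * i, 1.0 - 0.1 * i] for i in range(10)]
y = [1] * 8 + [0] * 2
X_bal, y_bal = oversample_minority(X, y)
print(Counter(y_bal))  # both classes now have 8 samples
```

Unlike this sketch, SMOTE interpolates new synthetic points between minority neighbors rather than duplicating rows, and the Tomek-link step then removes borderline majority samples.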
{"title":"Imbalanced data preprocessing model for web service classification","authors":"Wasiur Rhmann, Amaan Ishrat","doi":"10.1007/s13198-024-02485-7","DOIUrl":"https://doi.org/10.1007/s13198-024-02485-7","url":null,"abstract":"<p>Web services are a novel method of web application development. They allow businesses to adapt to new environments and change quickly according to customer needs. Clients require high-quality web services with minimal response time, strong security, and high availability. With the increasing demand for web services, their rapid introduction into the business environment has strongly influenced web service quality. In the present work, a novel model for web service classification is proposed. Three metaheuristic techniques, the Whale optimization algorithm, Simulated annealing, and Ant colony optimization, are used to select the best subset of features. The imbalanced web service dataset is balanced using SMOTETomek (synthetic minority oversampling + Tomek links). Ensemble AdaBoost and Gradient boosting algorithms are used to build the web service prediction model. The publicly available QWS dataset is used for the experiments. The results of the proposed models are compared with standard machine learning techniques. The Ant colony algorithm performed best for relevant feature selection, and the ensemble AdaBoost and Gradient boosting algorithms outperformed all other machine learning techniques for web service classification.</p>","PeriodicalId":14463,"journal":{"name":"International Journal of System Assurance Engineering and Management","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}