Mohammad AlKhatib, Fahed Jubair, Mohammad Al Mashagbeh, Moath Khaleel, Samah Rahamneh
This paper presents a fully automated chess-playing system that integrates computer vision, artificial intelligence, and robotics to enable autonomous gameplay. The system comprises three core components: (i) YOLOv8, a fine-tuned deep learning model for chess piece recognition; (ii) Stockfish, a high-performance chess engine for strategic move selection; and (iii) the Quanser QArm, a robotic manipulator for precise move execution. The methodology involves fine-tuning YOLOv8 on a custom dataset, using FEN notation to represent board states, and executing moves via pre-calibrated robotic waypoints. The system was evaluated in real-world settings, achieving high detection accuracy and reliable robotic control, with the QArm completing 87% of moves on the first attempt and 97% by the second. Our system contributes a unified, modular architecture that enables reliable autonomous chess gameplay under real-world conditions.
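To make the engine-query step concrete, here is a minimal Python sketch (not the authors' code) of how a FEN string produced by the vision stage can be handed to Stockfish to obtain the next move. It assumes the python-chess package and a locally installed Stockfish binary; the binary path and thinking time are placeholders.

```python
# Minimal sketch (not the authors' code): query Stockfish for the best reply to a
# position given as a FEN string, as the vision stage described above would supply.
# Assumes python-chess and a locally installed Stockfish binary; STOCKFISH_PATH
# and the thinking time are placeholders.
import chess
import chess.engine

STOCKFISH_PATH = "/usr/local/bin/stockfish"  # hypothetical path, adjust to your install

def best_move_from_fen(fen: str, think_time: float = 0.5) -> str:
    """Return Stockfish's preferred move in UCI notation for the given position."""
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH)
    try:
        result = engine.play(board, chess.engine.Limit(time=think_time))
    finally:
        engine.quit()
    return result.move.uci()

if __name__ == "__main__":
    print(best_move_from_fen(chess.STARTING_FEN))  # e.g. "e2e4"
```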
{"title":"Automated Chess Gameplay With Computer Vision and A Robotic Arm","authors":"Mohammad AlKhatib, Fahed Jubair, Mohammad Al Mashagbeh, Moath Khaleel, Samah Rahamneh","doi":"10.1002/eng2.70607","DOIUrl":"10.1002/eng2.70607","url":null,"abstract":"<p>This paper presents a fully automated chess-playing system that integrates computer vision, artificial intelligence, and robotics to enable autonomous gameplay. The system comprises three core components: (i) YOLOv8, a fine-tuned deep learning model for chess piece recognition; (ii) Stockfish, a high-performance chess engine for strategic move selection; and (iii) the Quanser QArm, a robotic manipulator for precise move execution. The methodology involves fine-tuning YOLOv8 on a custom dataset, using FEN notation to represent board states, and executing moves via pre-calibrated robotic waypoints. The system was evaluated in real-world settings, achieving high detection accuracy and reliable robotic control, with the QArm completing 87% of moves on the first attempt and 97% by the second. Our system contributes a unified, modular architecture that enables reliable autonomous chess gameplay under real-world conditions.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70607","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146155115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jonas Dhom, Eric Cordes, Christoph Berger, Florian Steinlehner, Rüdiger Daub
High energy densities are vital to satisfy the increasing demand for battery storage systems for electric vehicles. One innovative next-generation battery type is the solid-state battery, which is characterized by its high expected energy density. The polymer-based solid-state battery is notable for its good machinability in production and therefore offers great potential for industrial-scale manufacturing. One of its components is the composite cathode, which poses particular challenges in the individual production processes. The calendering process is essential, as it can increase ionic conductivity by reducing the porosity of the composite cathode. For this reason, this work analyzes in depth the calendering of polymer-based composite cathodes with different compositions of active material and solid electrolyte, providing a profound understanding of the underlying cause-effect relationships.
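As a rough illustration of how porosity reduction during calendering is quantified, the short sketch below (an assumption, not taken from the paper) estimates composite-cathode porosity from areal mass loading and coating thickness before and after compaction; the material densities, composition, and example numbers are placeholders.

```python
# Illustrative sketch (assumption, not from the paper): estimating composite-cathode
# porosity from areal mass loading and coating thickness before and after calendering.
# Material densities, compositions, and the example numbers are placeholders.

def composite_density(mass_fractions: dict, densities: dict) -> float:
    """Theoretical pore-free density of the composite in g/cm^3 from mass fractions."""
    return 1.0 / sum(w / densities[name] for name, w in mass_fractions.items())

def porosity(areal_loading_mg_cm2: float, thickness_um: float, rho_theoretical: float) -> float:
    """Porosity = 1 - rho_coating / rho_theoretical."""
    rho_coating = (areal_loading_mg_cm2 * 1e-3) / (thickness_um * 1e-4)  # g/cm^3
    return 1.0 - rho_coating / rho_theoretical

if __name__ == "__main__":
    fractions = {"active_material": 0.70, "polymer_electrolyte": 0.25, "carbon": 0.05}
    densities = {"active_material": 4.7, "polymer_electrolyte": 1.2, "carbon": 2.0}  # g/cm^3
    rho_th = composite_density(fractions, densities)
    print(f"before calendering (120 um): {porosity(20.0, 120.0, rho_th):.1%}")
    print(f"after calendering  (100 um): {porosity(20.0, 100.0, rho_th):.1%}")
```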
{"title":"Influence of Calendering on the Variation in Material Compositions of the Composite Cathode of Polymer-Based Solid-State Batteries","authors":"Jonas Dhom, Eric Cordes, Christoph Berger, Florian Steinlehner, Rüdiger Daub","doi":"10.1002/eng2.70591","DOIUrl":"https://doi.org/10.1002/eng2.70591","url":null,"abstract":"<p>High energy densities are vital to satisfy the increasing demand for battery storage systems for electric vehicles. One innovative battery type of the next generation is the solid-state battery, which is characterized by the high expected energy density. The polymer-based solid-state battery is notable for its high machinability in production and, therefore, offers great potential for industrial scale. One component of the polymer-based solid-state battery is the composite cathode, which faces particular challenges in the individual production processes. The calendering process is essential, as it can increase the ionic conductivity through a reduction of the composite cathode porosity. For this reason, the calendering process for polymer-based composite cathodes with different compositions of active material and solid electrolyte has been analyzed in depth in this work. This enabled extensive analysis of the calendering process with different material compositions of polymer-based composite cathodes to provide a profound understanding of the causal-effect relationships.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70591","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146136672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid development of big data and mobile Internet technologies has significantly influenced the instructional growth of college instructors. This study investigated how big data influences college instructors' pedagogical creativity, professional development, and teaching tactics. Big data enables personalized learning experiences and data-driven decision-making for educators, giving them access to a wealth of resources, real-time feedback, and predictive analytics. The mobile Internet allows teachers and students to communicate more dynamically, improving accessibility and interactivity. However, issues such as privacy concerns, limited data literacy, and obstacles to technology adoption persist. This study goes beyond identifying challenges by proposing concrete solutions, including enhanced data protection protocols and targeted training for faculty members. Using a mixed-methods approach, the study investigated the effect of big data on curriculum design, teacher-student engagement, and instructional effectiveness. The results indicate that big data integration improves instructional quality, facilitates adaptive learning, fosters continuous professional development, and motivates instructors to pursue lifelong learning, although challenges remain in accessibility, ethical considerations, and technological infrastructure. The study concludes with recommendations for strengthening digital literacy programs, improving infrastructure, and offering focused professional development to facilitate the smooth implementation of data-driven practices in higher education.
{"title":"The Influence of Big Data-Driven Educational Technologies on College Teaching Development","authors":"Ling Yu, Wenye Li, Ying Luo","doi":"10.1002/eng2.70535","DOIUrl":"https://doi.org/10.1002/eng2.70535","url":null,"abstract":"<p>The rapid development of big data and mobile Internet technologies has significantly influenced the instructional growth of college instructors. This study investigated how big data influences college instructors' pedagogical creativity, professional development, and teaching tactics. Personalized learning experiences and data-driven decision-making are made possible for educators using big data, which gives them access to a wealth of resources, real-time feedback, and predictive analytics. Teachers and students may communicate more dynamically because of the mobile Internet, improving accessibility and interactivity. However, issues, such as privacy concerns, data literacy, and obstacles to technology adoption, continue to exist. This study goes beyond identifying challenges by proposing concrete solutions, including enhanced data protection protocols and targeted training for faculty members. The results indicate that big data integration improves instructional quality, facilitates adaptive learning, and fosters continuous professional development. However, challenges remain in terms of accessibility, ethical considerations, and technological infrastructure. This study concludes with recommendations for strengthening digital literacy programs and institutional frameworks to optimize data-driven educational practices. This study used a mixed-methods approach to investigate the effect of big data on curriculum design, teacher-student engagement, and instructional effectiveness. The results showed that incorporating big data into instruction greatly enhanced instructional quality, supported adaptive learning, and motivated instructors to pursue lifelong learning. Recommendations include improving infrastructure and offering focused professional development to facilitate the smooth implementation of significant data-driven practices in higher education.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70535","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146140167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research introduces a hybrid model that integrates long short-term memory (LSTM) and extreme gradient boosting (XGBoost) models to assess students' mental health states, particularly their levels of stress, mood, and fatigue. The physiological measures recorded were heart rate (HR), heart rate variability (HRV), electrodermal activity (EDA), and skin temperature. All measures were captured using wearable sensors and underwent processing, such as normalization, noise filtering, and feature extraction, to ensure the signal quality was fit for analysis and interpretation. While the LSTM network accurately represents the temporal dynamics present in the physiological sequences, the XGBoost model is critical for achieving high accuracy by classifying non-linear feature interactions and optimizing decision boundaries. Experimental validation using fivefold cross-validation shows that the hybrid model achieves an average accuracy of 0.98 and an F1-score of 0.98, with consistently low false-positive and false-negative rates compared with the SVM, Random Forest, and single deep learning baselines. The results confirm the framework's reliability, consistency, and interpretability across different data conditions. This method provides a strong platform for real-time, data-driven monitoring and early detection of psychological distress, allowing educators, mental-health professionals, and caregivers to intervene in a timely manner and improve students' overall well-being.
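A minimal sketch of how such an LSTM-XGBoost hybrid can be wired together follows; it is an illustrative assumption rather than the authors' pipeline, with synthetic arrays standing in for the preprocessed wearable-sensor windows and with hypothetical layer sizes and hyperparameters.

```python
# Illustrative sketch (assumption, not the authors' pipeline): an LSTM encodes windows
# of wearable signals (HR, HRV, EDA, skin temperature) and XGBoost classifies the
# resulting embeddings. Data are synthetic; shapes and hyperparameters are placeholders.
import numpy as np
import tensorflow as tf
import xgboost as xgb
from sklearn.model_selection import train_test_split

N_WINDOWS, TIMESTEPS, N_CHANNELS, N_CLASSES = 1000, 60, 4, 3

X = np.random.randn(N_WINDOWS, TIMESTEPS, N_CHANNELS).astype("float32")  # stand-in windows
y = np.random.randint(0, N_CLASSES, size=N_WINDOWS)                      # stand-in labels

# LSTM trained end-to-end with a temporary softmax head, then reused as a feature extractor.
inputs = tf.keras.Input(shape=(TIMESTEPS, N_CHANNELS))
hidden = tf.keras.layers.LSTM(64)(inputs)
outputs = tf.keras.layers.Dense(N_CLASSES, activation="softmax")(hidden)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_tr, y_tr, epochs=3, batch_size=32, verbose=0)

# The 64-dimensional LSTM state becomes the feature vector for gradient boosting.
encoder = tf.keras.Model(inputs, hidden)
clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(encoder.predict(X_tr, verbose=0), y_tr)
print("held-out accuracy:", clf.score(encoder.predict(X_te, verbose=0), y_te))
```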
{"title":"AI-Enabled Intelligent Monitoring of Mental Health Indicators During Physical Activity Among Jiangsu Vocational College Students","authors":"Yanfeng Shang, Yanxia Shang, Yutong Shang","doi":"10.1002/eng2.70612","DOIUrl":"https://doi.org/10.1002/eng2.70612","url":null,"abstract":"<p>This research has introduced a hybrid model that integrates the long short-term memory (LSTM) and extreme gradient boosting (XGBoost) models to assess students' mental health states, particularly to identify students' levels of stress, mood, and fatigue. The physiological measures measured were heart rate (HR), heart rate variability (HRV), electrodermal activity (EDA), and skin temperature. All measures were recorded using wearable sensors and underwent processing, such as normalization, noise filtering, and feature extraction, to ensure the signal quality was fit for analysis and interpretability. While the LSTM network can accurately represent the temporal dynamics present in the physiological sequences, the XGBoost model is critical in obtaining high accuracy through the classification of features' non-linear interactions and decision boundary optimization. The experimental validation through the technique of fivefold cross-validation shows that the hybrid model performs with high accuracy of 0.98 on average, F1-score of 0.98, and consistently low false-positive and false-negative rates when compared to SVM, Random Forest, and single deep learning model methods that serve as baseline methods. The results assure the framework's reliability, consistency, and clarity in reasoning over different data conditions. This novel method provides a strong platform for the real-time, data-driven monitoring and early detection of psychological distress, thus allowing educators, mental-health professionals, and caregivers to make timely interventions and improve the overall well-being of students.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70612","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146148229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parivash Khalili, Mehrdad Kargari, Mohammad Ali Rastegar, Abdollah Eshghi
The rapid growth of e-commerce and the emergence of BNPL (Buy Now, Pay Later) financial products have significantly increased loan applications. However, the accumulation of losses from customer defaults poses a serious bankruptcy risk for BNPL providers. Unlike most studies that focus on credit scoring for traditional microloans, this research specifically uses BNPL loan data. A major concern in this domain is the imbalanced nature of the data, which can adversely affect model performance. To this end, we compared ensemble learning models in combination with data balancing methods and proposed a novel combination of logistic regression, SMOTE-NC, and LightGBM, which has not been extensively explored in previous studies. Additionally, we introduced two new variables, ‘active internet banking’ and ‘active mobile banking’, to investigate whether the use of digital banking platforms can indicate creditworthiness. Regression analysis confirmed the significance of the new variables, alongside key predictors such as ‘Education’, ‘Collateral type’, ‘Long-term accounts count’, ‘Received loans count’, ‘Active loans count’, and ‘Loan amount’. The proposed method achieved an F1-score of 84.66% for the default class, a 23% improvement over models without balancing techniques. Implementing this model could reduce realized BNPL losses by 26.84%, underscoring its potential to mitigate risks in this sector.
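The following sketch shows, under stated assumptions, how the SMOTE-NC plus LightGBM stage of such a pipeline could look; the feature columns, class ratio, and hyperparameters are hypothetical and the data are synthetic, so it is not the authors' implementation.

```python
# Illustrative sketch (assumption, not the authors' implementation): rebalance a mixed
# numerical/categorical BNPL-style dataset with SMOTE-NC, train LightGBM, and report the
# F1-score of the default class. Features, class ratio, and hyperparameters are hypothetical.
import numpy as np
from imblearn.over_sampling import SMOTENC
from lightgbm import LGBMClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.lognormal(8.0, 1.0, n),   # loan amount (numerical)
    rng.integers(0, 6, n),        # active loans count (numerical)
    rng.integers(0, 4, n),        # education level (categorical, column 2)
    rng.integers(0, 3, n),        # collateral type (categorical, column 3)
]).astype(float)
y = (rng.random(n) < 0.08).astype(int)  # ~8% defaults: an imbalanced target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# SMOTE-NC synthesizes minority-class samples while treating columns 2 and 3 as categorical.
X_bal, y_bal = SMOTENC(categorical_features=[2, 3], random_state=0).fit_resample(X_tr, y_tr)

clf = LGBMClassifier(n_estimators=300, learning_rate=0.05)
clf.fit(X_bal, y_bal)
print("F1 (default class):", f1_score(y_te, clf.predict(X_te), pos_label=1))
```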
{"title":"Predicting BNPL Loan Defaults: A Comparison of Ensemble Learning Models Combined With Balancing Techniques and an Analysis of the Impact of Digital Literacy","authors":"Parivash Khalili, Mehrdad Kargari, Mohammad Ali Rastegar, Abdollah Eshghi","doi":"10.1002/eng2.70601","DOIUrl":"10.1002/eng2.70601","url":null,"abstract":"<p>The rapid growth of e-commerce and the emergence of BNPL (Buy Now, Pay Later) financial products have significantly increased loan applications. However, the accumulation of losses from customer defaults poses a serious bankruptcy risk for BNPL providers. Unlike most studies that focus on credit scoring for traditional microloans, this research specifically uses BNPL loan data. A major concern in this domain is the imbalanced nature of the data, which can adversely affect model performance. To this end, we compared ensemble learning models in combination with data balancing methods and proposed a novel combination of logistic regression, SMOTE-NC, and LightGBM, which has not been extensively explored in previous studies. Additionally, we introduced two new variables— ‘active internet banking’ and ‘active mobile banking’—to investigate whether the use of digital banking platforms can indicate creditworthiness. Regression analysis confirmed the significance of the new variables, alongside key predictors such as ‘Education’, ‘Collateral type’, ‘Long-term accounts count’, ‘Received loans count’, ‘Active loans count’, and ‘Loan amount’. The proposed method achieved an F1-score of 84.66% for the default class, a 23% improvement over models without balancing techniques. Implementing this model could reduce realized BNPL losses by 26.84%, underscoring its potential to mitigate risks in this sector.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70601","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146140168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anterior Cervical Corpectomy and Fusion (ACCF), one of the common surgeries used to treat cervical spine diseases, has been widely applied in clinical practice. The commonly used internal fixation forms in ACCF surgery include the traditional Anterior Vertebral Body Screw-Plate (AVBSP) structure and the Anterior Cervical Pedicle Screw-Plate (APSP) structure, both of which are combined with a titanium mesh to achieve support and bone fusion. The purpose of this study was to investigate the effects of different surgical plans on cervical spine biomechanics and the interplay between internal fixation instruments after surgery. A finite element model of the human lower cervical spine (C3-C7) after ACCF surgery was established. The surgical plans combined two internal fixation forms (AVBSP and APSP) and two titanium mesh forms (linear and curved) in different ways. The mechanical sensitivity of the adjacent intervertebral disc nuclei to the different surgical plans differed significantly. The stress concentration areas on the vertebral entry surface varied with the entry method, and the stress values were strongly affected by cervical movements. The instrument-related analyses showed that the choice of anterior fixation method affects the stress level and distribution in the titanium mesh. Theoretically, the combination of a curved titanium mesh and AVBSP helps reduce the overall stress level of the internal fixation instruments and the titanium mesh. The research provides a theoretical basis for the selection of clinical surgical plans and can help enhance postoperative cervical stability while reducing the risk of recurrence or other complications. Clinically, when selecting an excision-and-fusion surgical plan based on the condition of the patient's cervical lesion, consideration should be given to the matching characteristics between internal fixation methods and titanium mesh forms, as well as their effects on the biomechanics of adjacent segments.
{"title":"Biomechanical Effects of Different Approaches and Titanium Mesh in Combined Anterior Cervical Corpectomy Decompression and Fusion: A Finite Element Study","authors":"Dan Li, Ke Wang, Chao Dong, Lingyi Deng","doi":"10.1002/eng2.70621","DOIUrl":"https://doi.org/10.1002/eng2.70621","url":null,"abstract":"<p>Anterior Cervical Corpectomy and Fusion (ACCF), which is one of the common surgeries used to treat cervical spine diseases, has been widely applied in clinical practice. The commonly used internal fixation forms in ACCF surgery include the traditional Anterior Vertebral Body Screw-Plate (AVBSP) structure and the Anterior Cervical Pedicle Screw-Plate (APSP) structure, both of which are combined with titanium mesh to achieve support and bone fusion. The purpose was to investigate the effects of different surgical plans on cervical spine biomechanics and the interplay between internal fixation instruments after surgery. In this study, a finite element model of the human lower cervical spine (C3-C7) after ACCF surgery was established. The surgical plan consisted of two internal fixation forms (AVBSP and APSP) and two titanium mesh forms (linear and curved), combined in different ways. The mechanical sensitivity of adjacent intervertebral disc nuclei to different surgical plans was significantly different. The stress concentration areas on the vertebral body entry surface varied with different entry methods, and the stress values were greatly affected by cervical movements. The related instrument studies showed that the choice of anterior fixation method would affect the stress level and distribution of the titanium mesh. Theoretically, the combination of curved titanium mesh and AVBSP is beneficial to reducing the overall stress level of the internal fixation instruments and titanium mesh. The research provides a theoretical basis for the selection of clinical surgical plans. It is advantageous in enhancing postoperative stability of cervical vertebrae while reducing the risk of recurrence or other complications. Clinically, when selecting the excision fusion surgical plan based on the condition of the patient's cervical lesion, consideration should be given to the matching characteristics between internal fixation methods and titanium mesh forms, as well as their effects on the biomechanics of adjacent segments.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70621","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146140166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hajira Bashir, Waseem Ullah Khan, Safdar Nawaz Khan Marwat, Shahid Khan, Imran Baig, Yasir Mehmood, Hammad Atta
The exponential growth of the Internet has led to a dramatic rise in the use of web applications, making them integral to businesses, industries, education, financial institutions, and daily life. However, this widespread adoption has introduced significant security issues, exposing web applications to various vulnerabilities capable of compromising the confidentiality, integrity, and availability of sensitive data. Mitigating these vulnerabilities has therefore become vital to ensuring robust information security. Among them, Structured Query Language injection (SQLi) is one of the most prevalent vulnerabilities affecting web applications, making its detection essential. In practice, penetration testers use automated vulnerability assessment tools, each with its own strengths and limitations, to evaluate the security of web applications. However, these security scanners have certain flaws, such as failing to scan entire web applications and producing inaccurate test results. Although significant research has quantitatively compared the outcomes of web application security scanners to examine their limitations and efficacy, a standardized methodology or set of criteria for assessing their performance remains elusive. To overcome these challenges, this paper proposes the SQLi-ScanEval Framework, which integrates vulnerability and penetration testing scanners into a standardized SQLi detection evaluation framework. The proposed framework provides a standardized evaluation environment, thereby overcoming the drawbacks of individual scanners, including insufficient coverage and erroneous results. SQLi-ScanEval was used to test seven prominent SQLi vulnerability scanners (OWASP ZAP, Wapiti, Vega, Acunetix, Invicti, Burp Suite, and Arachni) on two deliberately vulnerable test applications, Test PHP and Bricks from the OWASP Broken Web Applications (BWA) project. The framework evaluated each scanner's performance in terms of recall, accuracy, and precision. Acunetix exhibited the highest accuracy (90.48% on Bricks and 86.96% on Test PHP), the lowest false-positive rates, and a recall of 88.89%. The results also reveal notable variations in scanner performance, with scan times on the Bricks application ranging from 00:02:13 (OWASP ZAP) to 00:43:33 (Invicti). The SQLi-ScanEval results further provide valuable insights into the strengths and shortcomings of each scanner, giving penetration testers a practical roadmap for selecting the most suitable tools. As cyber-attacks keep evolving, this study not only enhances decision-making but also extends SQLi detection techniques, paving the way to more secure web applications.
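As an illustration of how the per-scanner metrics can be derived, the sketch below computes accuracy, precision, and recall from a scanner's reported findings against a known ground truth of injectable parameters; the parameter names and counts are made up and do not reproduce the paper's measurements.

```python
# Illustrative sketch (assumption): computing per-scanner accuracy, precision, and recall
# from reported findings versus a known ground truth of injectable parameters in a
# deliberately vulnerable application. Parameter names and counts are made up.

def scanner_metrics(reported: set, ground_truth: set, all_tested: set) -> dict:
    tp = len(reported & ground_truth)          # real vulnerabilities found
    fp = len(reported - ground_truth)          # false alarms
    fn = len(ground_truth - reported)          # missed vulnerabilities
    tn = len(all_tested - reported - ground_truth)
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "accuracy": (tp + tn) / len(all_tested) if all_tested else 0.0,
    }

if __name__ == "__main__":
    all_params = {f"param_{i}" for i in range(21)}        # every injection point exercised
    truly_vulnerable = {f"param_{i}" for i in range(10)}  # known SQLi-vulnerable points
    scanner_found = {f"param_{i}" for i in range(8)} | {"param_15"}  # 8 hits, 1 false positive
    print(scanner_metrics(scanner_found, truly_vulnerable, all_params))
```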
{"title":"SQLi-ScanEval: A Framework for Design and Evaluation of SQLi Detection Using Vulnerability and Penetration Testing Scanners","authors":"Hajira Bashir, Waseem Ullah Khan, Safdar Nawaz Khan Marwat, Shahid Khan, Imran Baig, Yasir Mehmood, Hammad Atta","doi":"10.1002/eng2.70618","DOIUrl":"https://doi.org/10.1002/eng2.70618","url":null,"abstract":"<p>The exponential growth of the Internet has led to a dramatic rise in the use of web applications, making them integral to businesses, industries, education, financial institutions, and daily life. However, this widespread rise has introduced significant security issues, exposing web applications to various vulnerabilities capable of compromising the confidentiality, integrity, and availability of sensitive data. Therefore, mitigating these vulnerabilities has become vital to ensuring robust information security. Among the myriad of vulnerabilities, Structured Query Language injection (SQLi) is one of the foremost prevalent types of vulnerabilities affecting web-based apps, essential to detect Structured Query Language (SQL) injection vulnerabilities. In practice, penetration testers utilize tools for automated vulnerability assessment with varying strengths and limitations to evaluate the security of web applications. However, these security scanners have certain flaws, such as failing to scan entire web apps and producing inaccurate test results. Furthermore, significant research has been conducted to quantitatively list the outcomes of web application security scanners to examine their limitations and efficacy. Yet, a standardized methodology or criteria for assessing their performance remains elusive. To overcome these challenges, this paper proposes the SQLi-ScanEval Framework, a standardized SQLi detection system that integrates vulnerability and penetration testing scanners into a standardized framework. The proposed framework provides a standardized evaluation environment, thereby overcoming the drawbacks of individual scanners, including insufficient coverage and erroneous data. The proposed SQLi-ScanEval Framework tested seven prominent SQLi vulnerability scanners including OWASP ZAP, Wapiti, Vega, Acunetix, Invicti, Burp Suite and Arachni, on two prominent vulnerable testing applications i.e., Test PHP and Bricks from OWASP Broken Web Applications (BWA). The framework successfully evaluated the performance of each scanner on the basis of recall, accuracy, and precision. The results showed that Acunetix exhibits the highest accuracy i.e., 90.48% on Bricks and 86.96% on Test PHP, with the lowest false positive rates and a recall of 88.89%. The results also reveal notable variations in scanner performance, with scan times varying from 00:02:13 (OWASP ZAP) to 00:43:33 (Invicti) with the Bricks application. The SQLi-ScanEval results also provide valuable insights with the strengths and shortcomings for each scanner, giving penetration testers a practical roadmap for selecting the best tools. 
As cyber-attacks keep evolving, this study not only enhances decision-making but also extends SQLi techniques for detection, unlocking the way to more secure web applications.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70618","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146083277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of enterprise digitalization and the global economic environment, enterprise compliance risks and economic management issues have become increasingly complex. Accurate risk identification and efficient economic decision-making are crucial for the sustainable development of enterprises. To address this challenge, this study proposes an enterprise compliance risk identification and economic management optimization model that combines Dynamic Bayesian Network (DBN) with Artificial Intelligence (AI)-driven Reinforcement Learning (RL). The study utilizes a financial risk insight dataset and begins with data preprocessing, including missing value imputation, feature standardization, and time series alignment. Subsequently, a DBN model is constructed, with risk variable conditional probability distributions learned via the Markov Chain Monte Carlo (MCMC) method. An AI-driven RL algorithm is integrated to optimize network parameters, enabling the capture of dynamic evolution and dependencies among risk factors. Empirical results indicate that: (1) The proposed model achieves an AUC-ROC of 0.981 in risk identification tasks, representing an 8.9% improvement over the best baseline model, with an F1-score of 0.937. (2) AI-assisted auditing significantly enhances operational efficiency: the average working hours per case are reduced by 15.3%, the detection rate of high-risk cases is increased by 15.8%, and customer satisfaction is raised to 4.76 points (out of 5). (3) In terms of economic management optimization, the RL strategy increases the risk-adjusted return on capital (RAROC) to 0.236, maintains it at 0.187 under crisis scenarios, and achieves a compliance cost savings ratio of 0.237. These results verify the effectiveness of the DBN-DRL collaborative framework in balancing risk control and economic benefits. This study provides a data-driven intelligent tool enabling dynamic risk identification and economic decision optimization, offering theoretical foundations and practical references for enterprises to construct efficient and sustainable compliance risk management systems.
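For readers unfamiliar with the RAROC figure the study optimizes, the sketch below shows the standard definition (risk-adjusted net return divided by economic capital); the function and all numbers are illustrative placeholders, not the paper's model or data.

```python
# Illustrative sketch (assumption, not the paper's model): the standard risk-adjusted
# return on capital (RAROC) definition used to compare compliance policies.
# All numbers are placeholders.

def raroc(revenue: float, costs: float, expected_loss: float, economic_capital: float) -> float:
    """RAROC = risk-adjusted net return divided by the economic capital held against risk."""
    return (revenue - costs - expected_loss) / economic_capital

if __name__ == "__main__":
    # A policy that lowers expected loss through better risk control at a small extra cost.
    print("baseline policy:", round(raroc(12.0, 4.0, 3.0, 25.0), 3))
    print("RL-tuned policy:", round(raroc(12.0, 4.4, 1.8, 25.0), 3))
```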
{"title":"Enterprise Compliance Risk Identification and Economic Management Optimization Based on Dynamic Bayesian Network","authors":"Zhuhan Sun","doi":"10.1002/eng2.70592","DOIUrl":"https://doi.org/10.1002/eng2.70592","url":null,"abstract":"<p>With the rapid development of enterprise digitalization and the global economic environment, enterprise compliance risks and economic management issues have become increasingly complex. Accurate risk identification and efficient economic decision-making are crucial for the sustainable development of enterprises. To address this challenge, this study proposes an enterprise compliance risk identification and economic management optimization model that combines Dynamic Bayesian Network (DBN) with Artificial Intelligence (AI)-driven Reinforcement Learning (RL). The study utilizes a financial risk insight dataset and begins with data preprocessing, including missing value imputation, feature standardization, and time series alignment. Subsequently, a DBN model is constructed, with risk variable conditional probability distributions learned via the Markov Chain Monte Carlo (MCMC) method. An AI-driven RL algorithm is integrated to optimize network parameters, enabling the capture of dynamic evolution and dependencies among risk factors. Empirical results indicate that: (1) The proposed model achieves an AUC-ROC of 0.981 in risk identification tasks, representing an 8.9% improvement over the best baseline model, with an F1-score of 0.937. (2) AI-assisted auditing significantly enhances operational efficiency: the average working hours per case are reduced by 15.3%, the detection rate of high-risk cases is increased by 15.8%, and customer satisfaction is raised to 4.76 points (out of 5). (3) In terms of economic management optimization, the RL strategy increases the risk-adjusted return on capital (RAROC) to 0.236, maintains it at 0.187 under crisis scenarios, and achieves a compliance cost savings ratio of 0.237. These results verify the effectiveness of the DBN-DRL collaborative framework in balancing risk control and economic benefits. This study provides a data-driven intelligent tool enabling dynamic risk identification and economic decision optimization, offering theoretical foundations and practical references for enterprises to construct efficient and sustainable compliance risk management systems.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70592","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146057995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S. M. Ferdous Azam, Dipak Patel, Ripendeep Singh, Sikata Samantaray, Aravindan M. K., Nivin Joy Thykattusserry, Jasgurpreet Singh Chohan, Yashwant Singh Bisht, Abhijit Bhowmik, Yalew Tamene
This study experimentally investigates the performance, combustion, and emission characteristics of a Reactivity-Controlled Compression Ignition (RCCI) engine fueled with diesel and the high-viscosity oxygenated alcohol 2-Methyl-1-butanol. Experiments were conducted on a single-cylinder common-rail direct injection engine operating at brake mean effective pressures of 3 and 5 bar and fuel injection pressures of 400, 600, and 800 bar. Diesel was directly injected as the high-reactivity fuel, while 2-Methyl-1-butanol was port-injected to establish RCCI combustion. Fuel blends containing 10%, 20%, and 30% alcohol were evaluated and compared with neat diesel operation. Results indicate that increasing injection pressure improves fuel atomization, advances combustion phasing, and enhances heat release characteristics. At full load and 800 bar injection pressure, the D70MB30 blend achieved the highest brake thermal efficiency of 37.4%, compared to 26.0% for neat diesel. Significant emission reductions were also observed, with NOx decreasing from 4.5 ppm (diesel) to 3.1 ppm and smoke opacity showing a consistent declining trend due to improved charge homogeneity and oxygen availability. However, higher alcohol content resulted in increased CO and HC emissions at part-load conditions because of low-temperature combustion and evaporative cooling effects. These penalties were substantially mitigated at higher injection pressures. Overall, the D70MB30 blend at 800 bar provided the best trade-off between performance and emissions, demonstrating the potential of 2-Methyl-1-butanol as a sustainable alternative fuel for advanced RCCI engine operation.
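For context, brake thermal efficiency at a blend test point is conventionally computed as brake power divided by the fuel energy input; the sketch below illustrates this, with flow rates and heating values that are approximate placeholders rather than measured values from this study.

```python
# Illustrative sketch (assumption): brake thermal efficiency for a dual-fuel test point,
# BTE = brake power / (sum over fuels of mass flow x lower heating value).
# Flow rates and heating values are approximate placeholders, not measured data.

def brake_thermal_efficiency(brake_power_kw: float, fuel_flows_kg_h: dict, lhv_mj_kg: dict) -> float:
    """BTE as a fraction, from brake power [kW], fuel flows [kg/h], and LHVs [MJ/kg]."""
    energy_in_kw = sum(m / 3600.0 * lhv_mj_kg[fuel] * 1000.0 for fuel, m in fuel_flows_kg_h.items())
    return brake_power_kw / energy_in_kw

if __name__ == "__main__":
    flows = {"diesel": 0.55, "2-methyl-1-butanol": 0.24}  # kg/h, illustrative blend split
    lhv = {"diesel": 42.5, "2-methyl-1-butanol": 34.0}    # MJ/kg, approximate literature values
    print(f"BTE = {brake_thermal_efficiency(3.2, flows, lhv):.1%}")
```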
{"title":"Performance–Emission Trade-Offs in RCCI Engines Using High-Viscosity Alcohol–Diesel Blends Under Variable Injection Pressures","authors":"S. M. Ferdous Azam, Dipak Patel, Ripendeep Singh, Sikata Samantaray, Aravindan M. K., Nivin Joy Thykattusserry, Jasgurpreet Singh Chohan, Yashwant Singh Bisht, Abhijit Bhowmik, Yalew Tamene","doi":"10.1002/eng2.70619","DOIUrl":"https://doi.org/10.1002/eng2.70619","url":null,"abstract":"<p>This study experimentally investigates the performance, combustion, and emission characteristics of a Reactivity-Controlled Compression Ignition (RCCI) engine fueled with diesel and the high-viscosity oxygenated alcohol 2-Methyl-1-butanol. Experiments were conducted on a single-cylinder common-rail direct injection engine operating at brake mean effective pressures of 3 and 5 bar and fuel injection pressures of 400, 600, and 800 bar. Diesel was directly injected as the high-reactivity fuel, while 2-Methyl-1-butanol was port-injected to establish RCCI combustion. Fuel blends containing 10%, 20%, and 30% alcohol were evaluated and compared with neat diesel operation. Results indicate that increasing injection pressure improves fuel atomization, advances combustion phasing, and enhances heat release characteristics. At full load and 800 bar injection pressure, the D70MB30 blend achieved the highest brake thermal efficiency of 37.4%, compared to 26.0% for neat diesel. Significant emission reductions were also observed, with NO<sub>x</sub> decreasing from 4.5 ppm (diesel) to 3.1 ppm and smoke opacity showing a consistent declining trend due to improved charge homogeneity and oxygen availability. However, higher alcohol content resulted in increased CO and HC emissions at part-load conditions because of low-temperature combustion and evaporative cooling effects. These penalties were substantially mitigated at higher injection pressures. Overall, the D70MB30 blend at 800 bar provided the best trade-off between performance and emissions, demonstrating the potential of 2-Methyl-1-butanol as a sustainable alternative fuel for advanced RCCI engine operation.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70619","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146057990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhenghao Qian, Fengzheng Liu, Mingdong He, Bo Li, Xuewu Li, Chuangye Zhao, Gehua Fu, Yifan Hu
With the rapid development of technologies such as cloud computing and the Internet of Things, organizations face the thorny reality that network attacks are becoming increasingly diverse, covert, and intelligent. Traditional signature-based intrusion detection systems (IDSs) struggle to address zero-day attacks and advanced persistent threats (APTs), often resulting in low detection rates and high false-positive rates. To address this, this paper proposes an adaptive network intrusion detection system that integrates random forest (RF) and real-time reputation evaluation. The system first preprocesses and normalizes the original network traffic and behavior logs, and then uses a random forest to perform preliminary multi-category classification. It then introduces a historical behavior risk metric, weighting the error rate of the current detection with the device's historical risk profile using exponential decay. A comprehensive reputation score is generated using a continuously differentiable “four-stage” smoothing function: sigmoid in the low-confidence zone, cosine in the medium-low zone, inverse sigmoid in the medium-high zone, and exponential decay in the extremely high zone. Finally, RRF-IPS's reputation scoring system executes automated policies such as bandwidth throttling, warning notifications, and session isolation or blocking, forming a closed “detect-assess-respond-archive” loop. Experimental results demonstrate that, on CICIDS2017, our system improves accuracy by 0.6% and F1 score by 5.9% compared to state-of-the-art methods.
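The "four-stage" reputation curve can be pictured with a small piecewise function. The sketch below is only an illustration under assumed zone boundaries and anchor values: it stitches the four named function families together so the score is continuous and monotonically decreasing, but it does not reproduce the paper's exact, continuously differentiable formulation.

```python
# Illustrative sketch (assumption): one way to stitch together a piecewise "four-stage"
# mapping from a raw risk score in [0, 1] to a reputation score, using the function
# families named above (sigmoid / cosine / inverse sigmoid / exponential decay).
# Zone boundaries, anchor values, and scaling are made up, not the paper's formulation.
import math

def _ramp(f, r, lo, hi):
    """Rescale f over [lo, hi] so the result runs from 0 at lo to 1 at hi."""
    return (f(r) - f(lo)) / (f(hi) - f(lo))

def reputation(risk: float) -> float:
    """Map risk in [0, 1] to a reputation score; higher reputation means more trusted."""
    r = min(max(risk, 0.0), 1.0)
    if r < 0.25:   # low-confidence zone: sigmoid, reputation 0.95 -> 0.50
        t = _ramp(lambda x: 1 / (1 + math.exp(-12 * (x - 0.125))), r, 0.0, 0.25)
        return 0.95 + t * (0.50 - 0.95)
    if r < 0.50:   # medium-low zone: cosine, 0.50 -> 0.25
        t = _ramp(lambda x: -math.cos(math.pi * (x - 0.25) / 0.25), r, 0.25, 0.50)
        return 0.50 + t * (0.25 - 0.50)
    if r < 0.75:   # medium-high zone: inverse sigmoid, 0.25 -> 0.05
        t = _ramp(lambda x: 1 / (1 + math.exp(12 * (x - 0.625))), r, 0.50, 0.75)
        return 0.25 + t * (0.05 - 0.25)
    # extremely high zone: exponential decay, 0.05 -> 0.01
    t = _ramp(lambda x: math.exp(-8 * (x - 0.75)), r, 0.75, 1.00)
    return 0.05 + t * (0.01 - 0.05)

if __name__ == "__main__":
    for r in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
        print(f"risk={r:.1f} -> reputation={reputation(r):.3f}")
```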
{"title":"RRF-IPS: A Real-Time Reputation-Based Intrusion Prevention System","authors":"Zhenghao Qian, Fengzheng Liu, Mingdong He, Bo Li, Xuewu Li, Chuangye Zhao, Gehua Fu, Yifan Hu","doi":"10.1002/eng2.70605","DOIUrl":"https://doi.org/10.1002/eng2.70605","url":null,"abstract":"<p>With the rapid development of technologies such as cloud computing and the Internet of Things, organizations face the thorny reality that network attacks are becoming increasingly diverse, covert, and intelligent. Traditional signature-based intrusion detection systems (IDSs) struggle to address zero-day attacks and advanced persistent threats (APTs), often resulting in low detection rates and high false-positive rates. To address this, this paper proposes an adaptive network intrusion detection system that integrates random forest (RF) and real-time reputation evaluation. The system first preprocesses and normalizes the original network traffic and behavior logs, and then uses a random forest to perform preliminary multi-category classification. It then introduces a historical behavior risk metric, weighting the error rate of the current detection with the device's historical risk profile using exponential decay. A comprehensive reputation score is generated using a continuously differentiable “four-stage” smoothing function: sigmoid in the low-confidence zone, cosine in the medium-low zone, inverse sigmoid in the medium-high zone, and exponential decay in the extremely high zone. Finally, RRF-IPS's reputation scoring system executes automated policies such as bandwidth throttling, warning notifications, and session isolation or blocking, forming a closed “detect-assess-respond-archive” loop. Experimental results demonstrate that, on CICIDS2017, our system improves accuracy by 0.6% and F1 score by 5.9% compared to state-of-the-art methods.</p>","PeriodicalId":72922,"journal":{"name":"Engineering reports : open access","volume":"8 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/eng2.70605","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146058026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}