Efficient thermal management is essential for the reliability and performance of traction inverters. However, direct optimization via Computational Fluid Dynamics (CFD) is often impractical due to the high dimensionality of the design space and the high computational cost of each simulation. To overcome this limitation, a surrogate-based optimization framework is developed to enhance the thermal and hydraulic performance of an automotive traction inverter cooling system. The methodology integrates CFD, deep neural networks (DNNs), and a multi-objective evolutionary algorithm. A simplified representation of the ACEPACK™ DRIVE power module is employed to generate an extensive dataset through automated, GPU-accelerated CFD simulations, making data generation computationally feasible while avoiding the prohibitive cost of direct optimization. A DNN surrogate model is trained to accurately predict pressure drop and heated-wall temperature, achieving mean relative errors below 3% and 1%, respectively. This surrogate model then guides a Non-Dominated Sorting Genetic Algorithm III (NSGA-III) in the optimization of key geometric parameters, including pin-fin diameter, spacing, height, and wall clearance, as well as a physical parameter, the surface roughness of the pin-fins. CFD-based validation of the Pareto-optimal designs, performed on the full inverter geometry, indicates reductions of up to 25% in pressure drop and approximately 2% in junction temperature. These results suggest that the proposed methodology is robust and generalizable, showing good potential for further application in data-driven thermal design optimization.
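The surrogate-in-the-loop selection step can be illustrated with a minimal Pareto filter. The quadratic "surrogate" below is a hypothetical stand-in for the trained DNN (its coefficients and the two-parameter design space are invented for illustration), and exhaustive screening stands in for the NSGA-III search the paper actually uses.

```python
# Minimal sketch of surrogate-guided multi-objective design screening.
# The quadratic surrogate below is a hypothetical stand-in for a trained DNN
# predicting pressure drop and heated-wall temperature from pin-fin geometry.

def surrogate(design):
    """Hypothetical surrogate: maps (diameter, spacing) to
    (pressure_drop, wall_temperature). Coefficients are invented."""
    d, s = design
    pressure_drop = 1.0 / (s * d) + 0.1 * d   # tighter arrays cost pressure
    wall_temp = 40.0 + 5.0 * s / d            # sparser arrays run hotter
    return pressure_drop, wall_temp

def pareto_front(designs):
    """Return designs not dominated by any other (both objectives minimized)."""
    scored = [(x, surrogate(x)) for x in designs]
    front = []
    for x, fx in scored:
        dominated = any(
            all(gy <= gx for gy, gx in zip(fy, fx)) and fy != fx
            for _, fy in scored
        )
        if not dominated:
            front.append(x)
    return front

# Coarse grid over diameter and spacing (both in arbitrary units).
candidates = [(d / 10, s / 10) for d in range(5, 15) for s in range(5, 15)]
front = pareto_front(candidates)
```

In the paper's workflow the cheap surrogate calls replace full CFD runs, which is what makes evaluating thousands of candidate geometries feasible.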
"Optimization of pin-fin arrangement in traction inverter cooling systems: A framework based on CFD simulations, deep neural networks and evolutionary algorithms". Luca Donetti, Gaetano Patti, Stefano Mauro, Gaetano Sequenzia, Michele Calabretta. Engineering Science and Technology, an International Journal, vol. 72, Article 102238. DOI: 10.1016/j.jestch.2025.102238.
Pub Date: 2025-12-01 · Epub Date: 2025-11-19 · DOI: 10.1016/j.jestch.2025.102227
M. Murat Tezcan, Ebru Efeoğlu
According to the 2023 Wind Energy Report published by the Global Wind Energy Council, the total installed capacity of wind energy conversion systems worldwide is around 1 TW, and an average annual increase of around 15% in this capacity is envisaged for 2024 and the following years. This situation reveals the importance and rapid development of wind energy conversion systems (WECS) in renewable energy. Accordingly, in the design, modeling and production of the AC generators used in wind turbines at different power levels, new-generation design and modeling techniques are used alongside classical methods, and wind turbine generator R&D is developing rapidly. New design and optimization methods have begun to be used in the modeling and performance analysis of Doubly Fed Induction Generators (DFIGs), which are frequently deployed in the field at different output powers. Modeling DFIGs with classical numerical methods and FEA-based magnetic simulation programs is time-consuming, especially for transient or dynamic analysis: depending on computer performance, obtaining an iteration-based transient field-distribution solution, for example with the finite difference method, may take hours or even days. Therefore, machine learning and deep learning-based iterative optimization and prediction methods stand out as a powerful alternative.
In this study, electromagnetic torque values obtained through FEA-based simulations for three different DFIGs, numerically modeled at a medium power level (250 kVA) with different winding materials (copper and aluminum), were used as reference. These torque curves were estimated using machine learning and deep learning algorithms: K-Nearest Neighbors (KNN), Support Vector Regression (SVR), Extra Trees (ET), Random Forest (RF), and a Long Short-Term Memory (LSTM) deep neural network. The FEA results were compared with the predictions obtained from these algorithms, and their predictive performance during training and cross-validation was evaluated using the R², MAE, and RMSE metrics. The LSTM-based deep neural network outperformed the other algorithms for electromagnetic torque estimation, reaching cross-validation R² values of 0.990, 0.976 and 0.994 for DFIG-1, DFIG-2 and DFIG-3, respectively.
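The three comparison metrics are standard and can be computed directly; a self-contained sketch (the sample torque vectors below are invented, not the paper's data):

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute R², MAE, and RMSE, the metrics used to compare the estimators."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot          # fraction of variance explained
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, rmse

# Illustrative torque samples (N·m) and predictions.
r2, mae, rmse = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

An R² near 1 with small MAE/RMSE, as reported for the LSTM, means the predicted torque curve tracks the FEA reference almost exactly.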
"Electromagnetic Torque Prediction and Modeling of a Doubly Fed Induction Generator for Wind Energy Conversion Systems Using Machine Learning and Deep Learning Algorithms". M. Murat Tezcan, Ebru Efeoğlu. Engineering Science and Technology, an International Journal, vol. 72, Article 102227. DOI: 10.1016/j.jestch.2025.102227.
Pub Date: 2025-12-01 · Epub Date: 2025-12-09 · DOI: 10.1016/j.jestch.2025.102246
Saleem Riaz, Bingqiang Li
Predefined-time stability (PDTS) controllers are sought for industrial nonlinear systems because they guarantee convergence within a user-defined time, independent of initial conditions. However, conventional PDTS sliding mode controllers often suffer from a trade-off between convergence speed and robustness, are prone to singularity issues, and typically lack adaptive mechanisms for handling uncertain dynamics. Achieving faster error convergence and ensuring stability of a nonlinear system (NLS) under disturbances are challenging tasks in industrial control applications. In this article, a novel PDTS-based adaptive sliding mode controller is designed for such NLSs. A new PDTS theorem is presented that includes an extra square term, which increases convergence speed, makes the controller more robust, and guarantees predefined-time convergence. A new PDTS sliding surface is then designed, incorporating a novel piecewise function to address the singularity issue commonly encountered in conventional predefined-time sliding mode controllers (SMCs). The controller is further improved with a novel adaptive law that offers more flexibility in establishing controller stability. Moreover, a general form of the PDTS theorem is presented that is useful for implementing the proposed adaptive control law together with other sophisticated control algorithms. The paper extends this theoretical development to adaptive predefined-time SMCs, and simulation and experimental studies reveal that the proposed method achieves better control performance than existing predefined-time controllers.
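The predefined-time property itself can be illustrated numerically. The sketch below uses a well-known predefined-time form from the literature, not the authors' new theorem: the scalar system ẋ = -(π/(2ρTp))(|x|^(1-ρ) + |x|^(1+ρ))sgn(x), whose settling time is bounded by Tp for every initial condition.

```python
import math

def simulate_predefined_time(x0, Tp, rho=0.5, dt=1e-4):
    """Euler simulation of a classical predefined-time stabilizing system,
        xdot = -(pi / (2*rho*Tp)) * (|x|^(1-rho) + |x|^(1+rho)) * sign(x),
    whose settling time is bounded by Tp regardless of x0."""
    k = math.pi / (2.0 * rho * Tp)
    x, t = x0, 0.0
    while t < Tp:
        mag = abs(x)
        if mag < 1e-9:            # numerical deadzone at the origin
            return 0.0
        x -= dt * k * (mag ** (1 - rho) + mag ** (1 + rho)) * math.copysign(1.0, x)
        t += dt
    return x

# Widely different initial conditions all settle within the same bound Tp = 1 s.
residuals = [abs(simulate_predefined_time(x0, Tp=1.0)) for x0 in (0.1, 5.0, 100.0)]
```

This uniform bound, independent of x0, is the property that distinguishes predefined-time from merely finite-time controllers.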
"Design, analysis and experimental validation of novel accelerated adaptive predefined time SMC for nonlinear systems". Saleem Riaz, Bingqiang Li. Engineering Science and Technology, an International Journal, vol. 72, Article 102246. DOI: 10.1016/j.jestch.2025.102246.
Pub Date: 2025-12-01 · Epub Date: 2025-11-14 · DOI: 10.1016/j.jestch.2025.102208
Tao Fu, Guoxin Han, Xuming Qin, Jinfang Li, Weiting Lin
The fast growth of the Internet of Things (IoT) into mission-critical applications requires secure and efficient routing protocols. However, the resource limitations of IoT devices and their susceptibility to attacks demand smart, dynamic solutions. To overcome these challenges, this paper introduces a new secure multi-hop routing algorithm that combines fuzzy logic and reinforcement learning. We first build a high-performance communication backbone over a Connected Dominating Set (CDS) to reduce network overhead. A fuzzy inference system then evaluates candidate paths using path energy, distance, and node credibility to choose the best path for data transmission. A Q-learning model dynamically evaluates the reliability of each node to provide security and to identify and isolate malicious actors. Experimental results show that our algorithm outperforms current state-of-the-art protocols, increasing the packet delivery ratio by up to 2.4% while lowering average energy consumption by about 6.53%. These results demonstrate that our hybrid solution has great potential to improve the reliability and safety of data routing in contemporary IoT networks.
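The two learning components can be sketched in a few lines. The membership shapes and rule weights below are invented for illustration, not taken from the paper; the trust update is the standard Q-learning rule.

```python
def fuzzy_path_score(energy, distance, credibility):
    """Toy fuzzy evaluation: each input in [0, 1] passes through a simple
    ramp-style membership and the rules are combined by a weighted sum.
    Memberships and weights are illustrative, not the paper's."""
    high_energy = min(1.0, max(0.0, (energy - 0.2) / 0.6))
    short_dist = min(1.0, max(0.0, (0.8 - distance) / 0.6))
    trusted = min(1.0, max(0.0, (credibility - 0.3) / 0.5))
    return 0.35 * high_energy + 0.25 * short_dist + 0.40 * trusted

def update_trust(q, reward, alpha=0.1, gamma=0.9, best_next=1.0):
    """Standard Q-learning update applied to a node's trust value:
    reward > 0 for successful forwarding, reward < 0 for drops."""
    return q + alpha * (reward + gamma * best_next - q)

good_path = fuzzy_path_score(energy=0.9, distance=0.2, credibility=0.9)
bad_path = fuzzy_path_score(energy=0.3, distance=0.9, credibility=0.2)
```

A node that repeatedly drops packets accumulates negative rewards, its trust falls, and the fuzzy scorer then steers routes away from it.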
"A secure multi-hop routing algorithm based-on fuzzy logic for IoT communication". Tao Fu, Guoxin Han, Xuming Qin, Jinfang Li, Weiting Lin. Engineering Science and Technology, an International Journal, vol. 72, Article 102208. DOI: 10.1016/j.jestch.2025.102208.
Pub Date: 2025-12-01 · Epub Date: 2025-11-10 · DOI: 10.1016/j.jestch.2025.102225
Burcu Acar Demirci, Mehmet Engin, Erkan Zeki Engin
Breast cancer is the most commonly diagnosed cancer among women worldwide. Early detection substantially improves treatment outcomes, especially when lesions are small and localized. Although conventional imaging modalities such as mammography, CT, MRI, and ultrasonography play a vital role in diagnosis, they often entail radiation exposure, high cost, and the use of contrast agents. These drawbacks have motivated increasing interest in non-invasive and cost-effective alternatives such as Infrared Thermal Imaging (ITI), which captures surface temperature variations that may indicate malignancy. This study proposes a novel ITI-based diagnostic framework integrating deep learning-driven feature extraction with conventional machine learning classifiers. Three autoencoder architectures, the Vanilla Autoencoder (VanAE), Convolutional Autoencoder (CAE), and Variational Autoencoder (VAE), were utilized to extract discriminative latent features from dynamic breast thermograms. The extracted features were subsequently classified using Support Vector Machine (SVM) and Random Forest (RF) algorithms. Experimental evaluation on a balanced DMR-IR dynamic dataset comprising 3,600 thermograms demonstrated that the CAE-SVM combination achieved the highest performance, reaching 92.28% accuracy, 89.11% sensitivity, 95.94% specificity, and a 92.26% F1-score. In addition to its superior classification performance, the CAE model exhibited the shortest training time, underscoring its potential for practical clinical implementation. Overall, the findings confirm the effectiveness of autoencoder-based architectures in learning meaningful representations directly from raw thermograms without relying on handcrafted or pre-trained features.
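The four reported figures are standard binary-classification metrics; the sketch below shows how they follow from a confusion matrix (the counts are invented for illustration, not the paper's results).

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and F1 from a binary confusion
    matrix, the four metrics reported for the CAE-SVM model."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)        # recall on malignant thermograms
    specificity = tn / (tn + fp)        # recall on healthy thermograms
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical counts: 90 true positives, 5 false positives,
# 95 true negatives, 10 false negatives.
acc, sens, spec, f1 = classification_metrics(tp=90, fp=5, tn=95, fn=10)
```

Reporting sensitivity and specificity separately matters in screening: a high specificity (95.94% here) limits false alarms, while sensitivity bounds the rate of missed malignancies.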
"Comparative analysis of autoencoder architectures for breast cancer detection using dynamic infrared thermography". Burcu Acar Demirci, Mehmet Engin, Erkan Zeki Engin. Engineering Science and Technology, an International Journal, vol. 72, Article 102225. DOI: 10.1016/j.jestch.2025.102225.
Pub Date: 2025-12-01 · Epub Date: 2025-10-29 · DOI: 10.1016/j.jestch.2025.102211
Yaşar Emre Topaloğlu, Yaşar Nuhoğlu, Ömer Apaydin
In this study, the bioremediation of synthetic water containing the heavy metal Fe2+ by growing microorganisms was investigated. It was reasoned that microorganism species that live in scarce-nutrient environments and grow rapidly would be effective in heavy-metal removal, and the stone surfaces of historical artifacts were chosen as such a scarce-nutrient medium. Among approximately 20 bacterial and fungal species isolated from the stone surface, the two that grew most rapidly, Penicillium jensenii and Penicillium frequentans, were selected for the bioremediation studies. In very scarce nutrient conditions, these fungi use elements such as the iron in the mineralogical structure of the stone for their growth. In this study, iron removal from solution was achieved simultaneously while maintaining the viability and proliferation of the two fungal species over ten days. In the preliminary experiments, the removal of the heavy metal supplied as FeSO4·7H2O with the help of these fungi was investigated. The analysis showed that the Fe2+ removal efficiency for a 100 mg Fe2+/L synthetic sample was 93.45% for Penicillium jensenii and 91.90% for Penicillium frequentans. Moreover, the maximum specific uptake rates (Sm) were calculated as 0.139 mg Fe2+/g dry weight for Penicillium jensenii and 0.124 mg Fe2+/g dry weight for Penicillium frequentans.
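The reported quantities follow the standard biosorption definitions: percent removal from initial and final concentrations, and specific uptake normalized by dry biomass. A minimal sketch (the solution volume and biomass figures in the example are hypothetical, not the study's):

```python
def removal_efficiency(c0, ce):
    """Percent removal of Fe2+ from solution: (C0 - Ce) / C0 * 100,
    with C0 the initial and Ce the final concentration (mg/L)."""
    return (c0 - ce) / c0 * 100.0

def specific_uptake(c0, ce, volume_l, dry_mass_g):
    """Specific uptake q = (C0 - Ce) * V / m, in mg Fe2+ per g dry biomass."""
    return (c0 - ce) * volume_l / dry_mass_g

# A 93.45% efficiency on a 100 mg/L sample corresponds to Ce ≈ 6.55 mg/L.
eff = removal_efficiency(100.0, 6.55)
q = specific_uptake(100.0, 6.55, volume_l=0.1, dry_mass_g=50.0)  # values hypothetical
```

Normalizing by dry biomass is what makes uptake values comparable between the two fungal species despite different growth.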
"In vivo bioremediation of Fe2+ from batch culture medium using Penicillium jensenii and Penicillium frequentans isolated from historical stone surfaces". Yaşar Emre Topaloğlu, Yaşar Nuhoğlu, Ömer Apaydin. Engineering Science and Technology, an International Journal, vol. 72, Article 102211. DOI: 10.1016/j.jestch.2025.102211.
Pub Date: 2025-12-01 · Epub Date: 2025-11-20 · DOI: 10.1016/j.jestch.2025.102230
Xiaoxu Wen, Yan Wang, Menghao Yuan, Aihui Wang, Ge Zheng, Hongnian Yu, Lin Meng
Human Activity Recognition (HAR) is essential in pervasive computing, healthcare, and human–computer interaction, where accurate interpretation of motion data underpins intelligent decision-making. Federated Learning (FL) enables privacy-preserving model training across distributed clients without sharing raw data, but suffers from degraded performance under Non-Independent and Identically Distributed (Non-IID) data, a common challenge in HAR due to user diversity and device heterogeneity. To address this, Personalized Federated Learning (PFL) introduces client-specific modeling, often via clustering. However, most existing approaches adopt static clustering strategies, lacking adaptability to dynamic changes in client data distributions. In this work, we propose DC-PFL, a Dynamic Clustering-based Personalized Federated Learning framework that performs round-wise client clustering using lightweight statistical features, such as Average Peak Frequency (APF), percentiles, and Median Absolute Deviation (MAD), derived from local model parameters. This design ensures efficient and privacy-preserving similarity estimation across clients. By dynamically adjusting clusters during training, DC-PFL enables fine-grained personalization, better generalization, and improved robustness to Non-IID conditions.
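The round-wise clustering idea can be sketched with standard-library statistics. The particular signature (MAD plus two deciles) and the single nearest-centroid pass below are illustrative simplifications, not the paper's exact pipeline, and APF is omitted since it would require the time-domain signal.

```python
import statistics

def client_signature(params):
    """Lightweight statistics over a client's flattened model parameters:
    median absolute deviation plus the 10th and 90th percentiles.
    Sharing these few numbers, rather than the parameters themselves,
    keeps similarity estimation cheap and privacy-preserving."""
    med = statistics.median(params)
    mad = statistics.median(abs(p - med) for p in params)
    deciles = statistics.quantiles(params, n=10)   # 9 cut points
    return (mad, deciles[0], deciles[-1])

def assign_clusters(signatures, centroids):
    """One round of nearest-centroid assignment (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)), key=lambda i: dist(s, centroids[i]))
            for s in signatures]

# Two toy clients: one with tightly concentrated parameters, one spread out.
sig_a = client_signature([0.0, 0.01, -0.01, 0.02, -0.02] * 4)
sig_b = client_signature([float(i) for i in range(-10, 11)])
labels = assign_clusters([sig_a, sig_b], centroids=[sig_a, sig_b])
```

Because signatures are recomputed every round, cluster membership can follow a client whose local data distribution drifts.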
"DC-PFL: A dynamic clustering-based personalized federated learning method for human activity recognition". Xiaoxu Wen, Yan Wang, Menghao Yuan, Aihui Wang, Ge Zheng, Hongnian Yu, Lin Meng. Engineering Science and Technology, an International Journal, vol. 72, Article 102230. DOI: 10.1016/j.jestch.2025.102230.
Pub Date: 2025-12-01 · Epub Date: 2025-12-16 · DOI: 10.1016/S2215-0986(25)00312-X
"Front Matter 1 - Full Title Page (regular issues)/Special Issue Title page (special issues)". Engineering Science and Technology, an International Journal, vol. 72, Article 102257. DOI: 10.1016/S2215-0986(25)00312-X.
Pub Date : 2025-12-01Epub Date: 2025-11-05DOI: 10.1016/j.jestch.2025.102226
Hosham Wahballa , Mohammednour Gibreel , Abubker Ahmed , Xiaohu Chen , Lei Weining
Effective motion control techniques are necessary to achieve high precision in contact applications, especially for complex geometries in the manufacturing sector. This paper presents a novel method for smooth trajectory planning under constant admittance force control for the polishing process. The proposed method aims to improve polishing accuracy while minimizing processing time and effort. It combines mixed-degree B-spline trajectories with an Online Admittance Controller to produce a novel force–position controller, named the BOAC algorithm. The B-spline trajectory directs the online admittance controller to regulate both contact force and trajectory accuracy. Simulation studies on three complex geometries (a vase, a star, and an iPad) demonstrate the robustness of the BOAC controller in tracking the actual trajectory and maintaining the applied contact force. For experimental validation, the BOAC method was compared with a conventional admittance controller (CAC) during real-time polishing of complex iPad edges using a 6-axis polishing machine. The results show that BOAC consistently achieves precise trajectories while maintaining accurate contact forces, leading to a significant reduction in force errors compared to CAC. This method enhances automation in processes such as grinding and polishing by enabling precise control of contact force and ensuring smooth motion.
{"title":"Precise contact tracking on complex geometries using polishing machine tools via smooth trajectories within online constant force control","authors":"Hosham Wahballa , Mohammednour Gibreel , Abubker Ahmed , Xiaohu Chen , Lei Weining","doi":"10.1016/j.jestch.2025.102226","DOIUrl":"10.1016/j.jestch.2025.102226","url":null,"abstract":"<div><div>Effective motion control techniques are necessary to achieve high precision in contact applications, especially for complex geometries in the manufacturing sector. This paper presents a novel method for smooth trajectory planning under constant admittance force control for the polishing process. The proposed method aims to improve polishing accuracy while minimizing processing time and effort. It combines mixed-degree B-spline trajectories with an Online Admittance Controller to produce a novel force–position controller, named the BOAC algorithm. The B-spline trajectory directs the online admittance controller to regulate both contact force and trajectory accuracy. Simulation studies on three complex geometries a vase, a star, and an iPad demonstrate the robustness of the BOAC controller in tracking the actual trajectory and maintaining the applied contact force. For experimental validation, the BOAC method was compared with a conventional admittance controller (CAC) during real-time polishing of complex iPad edges using a 6-axis polishing machine. The results show that BOAC consistently achieves precise trajectories while maintaining accurate contact forces, leading to a significant reduction in force errors compared to CAC. 
This method enhances automation in processes such as grinding and polishing by enabling precise control of contact force and ensuring smooth motion.</div></div>","PeriodicalId":48609,"journal":{"name":"Engineering Science and Technology-An International Journal-Jestech","volume":"72 ","pages":"Article 102226"},"PeriodicalIF":5.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145475332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
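The admittance half of a force–position scheme like BOAC can be illustrated with a one-dimensional discrete admittance law. The scalar form, the explicit integration scheme, and the gains below are our assumptions for illustration, not the paper's controller; the reference path (the B-spline trajectory in the paper) is taken as given.

```python
import numpy as np

def admittance_offset(f_err, m=1.0, d=20.0, k=100.0, dt=0.001, steps=5000):
    """Integrate the admittance law  m*x'' + d*x' + k*x = f_err  (1-D sketch).

    f_err is the contact-force error (measured minus desired); the returned
    trajectory is the position offset the controller superimposes on the
    reference path so the contact force settles at its setpoint.
    """
    x, v = 0.0, 0.0
    traj = np.empty(steps)
    for i in range(steps):
        a = (f_err - d * v - k * x) / m   # acceleration from the admittance law
        v += a * dt                       # semi-implicit Euler integration
        x += v * dt
        traj[i] = x
    return traj
```

At steady state the offset settles at f_err / k: with these illustrative gains, the virtual stiffness maps a 5 N force error to a 0.05 m path correction, while m and d shape how smoothly that correction is reached.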
Pub Date : 2025-12-01Epub Date: 2025-11-05DOI: 10.1016/j.jestch.2025.102199
Lu Shi, Xiaoguang Wang, Qi Lin
Tension distribution optimization in a class of n + 2 cable-driven redundant parallel robots (n-DoF) is a multi-solution problem, typically solved by iterative optimization or geometric methods. The iterative optimization method fails to meet real-time requirements due to the influence of factors such as the initial point and the number of cables. This paper proposes a novel non-iterative search algorithm based on the geometric features of the tension feasible region (TFR). This algorithm computes the tension optimal solution (TOS) in real time through geometric search, overcoming the limitations of traditional iterative search algorithms. The algorithm solves the TOS in two steps. Firstly, based on an analysis of the geometric features of the TFR, the TFR is searched through the proposed translation and rotation calculation rules. Secondly, the TOS is solved through the geometric feature points of the TFR. Specifically, this paper improves the traditional solution method of the 1-Norm TOS, and analyzes the perpendicularity conditions for obtaining the min/max 2-Norm TOS, as well as the TOS calculation formulas for the centroid and weighted barycenter. Finally, numerical simulations and prototype experiments are conducted for two multi-degree-of-freedom coupled-motion examples, and the experimental time consumption of the algorithm in each control cycle is analyzed. Both numerical simulations and prototype experiments show that the proposed algorithm can quickly obtain the TOS and fully meet the requirements of real-time control.
{"title":"Non-iterative optimization algorithm for cable tension distribution of a class of n + 2 cable-driven redundant parallel robots based on computational geometry","authors":"Lu Shi, Xiaoguang Wang, Qi Lin","doi":"10.1016/j.jestch.2025.102199","DOIUrl":"10.1016/j.jestch.2025.102199","url":null,"abstract":"<div><div>For the multi-solution problem of tension distribution optimization in a class of n + 2 cable-driven redundant parallel robots (n-Dof), iterative optimization or geometric methods are typically used to solve this problem. The iterative optimization method fails to meet the real-time requirements due to the influence of factors such as the initial point and the number of cables. This paper proposes a novel non-iterative search algorithm based on the geometric features of the tension feasible region (TFR). This algorithm can optimize the tension optimal solution (TOS) in real time through geometric search, overcoming the limitations of traditional iterative search algorithms. The algorithm solves the TOS in two steps. Firstly, based on the analysis of the geometric features of the TFR, the TFR is searched through the proposed translation and rotation calculation rules. Secondly, the TOS is solved through the geometric feature points of the TFR. Specifically, this paper improves the traditional solution method of 1-Norm TOS, and analyzes the vertical geometric conditions for obtaining the min/max 2-Norm TOS, as well as the TOS calculation formulas of the centroid and weighted barycenter. Finally, numerical simulations and prototype experiments are conducted for two multi-degree-of-freedom coupled motion examples, and analyzes the experimental time consumption of the algorithm in each control cycle. 
Both numerical simulations and prototype experiments show that the proposed algorithm in this paper can quickly obtain the TOS and fully meet the requirements of real-time control.</div></div>","PeriodicalId":48609,"journal":{"name":"Engineering Science and Technology-An International Journal-Jestech","volume":"72 ","pages":"Article 102199"},"PeriodicalIF":5.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145475334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
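The two-step geometric idea (find the TFR in the 2-D null-space plane, then read the TOS off its feature points) can be sketched by brute-force vertex enumeration. The structure-matrix sign convention, the pinv-based particular solution, and averaging the polygon vertices as a stand-in for the centroid TOS are our simplifications for illustration, not the paper's translation/rotation search rules.

```python
import numpy as np

def centroid_tos(A, w, t_min, t_max):
    """Vertex-average approximation of the centroid TOS for an n+2 cable robot.

    A : n x (n+2) structure matrix, w : n-dim external wrench.
    Equilibrium is taken as A @ t + w = 0 (sign convention assumed here).
    """
    t_p = np.linalg.pinv(A) @ (-w)            # particular (min-norm) solution
    _, _, Vt = np.linalg.svd(A)
    N = Vt[-2:].T                             # 2-D null-space basis, (n+2) x 2
    # Each cable i: t_min <= t_p[i] + N[i] @ lam <= t_max  ->  halfplanes a@lam <= b
    a_list, b_list = [], []
    for i in range(N.shape[0]):
        a_list.append(N[i]);  b_list.append(t_max - t_p[i])    # upper tension bound
        a_list.append(-N[i]); b_list.append(t_p[i] - t_min)    # lower tension bound
    Ah, bh = np.array(a_list), np.array(b_list)
    # Intersect every pair of boundary lines; keep points inside all halfplanes.
    verts = []
    for i in range(len(bh)):
        for j in range(i + 1, len(bh)):
            M2 = np.vstack([Ah[i], Ah[j]])
            if abs(np.linalg.det(M2)) < 1e-12:
                continue                       # parallel boundaries
            lam = np.linalg.solve(M2, np.array([bh[i], bh[j]]))
            if np.all(Ah @ lam <= bh + 1e-9):
                verts.append(lam)
    if not verts:
        return None                            # empty TFR: wrench infeasible
    lam_c = np.mean(verts, axis=0)             # average of TFR vertices
    return t_p + N @ lam_c
```

Because the TFR of an n + 2 robot is a convex polygon in the null-space plane, any vertex average lies inside it, so the returned tensions respect the bounds whenever the region is non-empty; the paper's non-iterative rules locate these feature points far more cheaply than the pairwise enumeration used here.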