A real-time mobile solution for shoe try-on using foot pose estimation and 3D processing techniques
Nguyen Hoang Vu, Tran Van Duc, Pham Quang Tien, Nguyen Thi Ngoc Anh, Nguyen Tien Dat
Pub Date: 2025-12-05 | DOI: 10.1007/s40747-025-02188-x | Complex & Intelligent Systems
Real-time fault detection in multirotor UAVs using lightweight deep learning and high-fidelity simulation data with single and double fault magnitudes
Md. Najmul Mowla, Davood Asadi, Ferdous Sohel
Pub Date: 2025-12-04 | DOI: 10.1007/s40747-025-02195-y | Complex & Intelligent Systems

Robust fault detection and diagnosis (FDD) in multirotor unmanned aerial vehicles (UAVs) remains challenging due to limited actuator redundancy, nonlinear dynamics, and environmental disturbances. This work introduces two lightweight deep learning architectures: the Convolutional-LSTM Fault Detection Network (CLFDNet), which combines multi-scale one-dimensional convolutional neural networks (1D-CNN), long short-term memory (LSTM) units, and an adaptive attention mechanism for spatio-temporal fault feature extraction; and the Autoencoder LSTM Multi-loss Fusion Network (AELMFNet), a soft attention–enhanced LSTM autoencoder optimized via multi-loss fusion for fine-grained fault severity estimation. Both models are trained and evaluated on UAV-Fault Magnitude V1, a high-fidelity simulation dataset containing 114,230 labeled samples with motor degradation levels ranging from 5% to 40% in the take-off, hover, navigation, and descent phases, representing the most probable and recoverable fault scenarios in quadrotor UAVs. Including coupled faults enables models to learn correlated degradation patterns and actuator interactions while maintaining controllability under standard flight laws. CLFDNet achieves 96.81% precision in fault severity classification and 100% accuracy in motor fault localization with only 19.6K parameters, demonstrating suitability for real-time onboard applications. AELMFNet achieves the lowest reconstruction loss of 0.001 with Huber loss and an inference latency of 6 ms/step, underscoring its efficiency for embedded deployment. Comparative experiments against 15 baselines, including five classical machine learning models, five state-of-the-art fault detection methods, and five attention-based deep learning variants, validate the effectiveness of the proposed architectures. These findings confirm that lightweight deep models enable accurate and efficient diagnosis of UAV faults with minimal sensing.
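The abstract's core architectural idea, multi-scale 1D convolution over sensor time series followed by soft-attention pooling, can be illustrated in a minimal numpy sketch. This is not the authors' CLFDNet: the kernel sizes (3, 5, 7), filter count, and the toy attention scoring are hypothetical choices for illustration only, and the LSTM stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_valid(x, kernel):
    """'Valid' 1-D convolution of a (T, C) signal with a (k, C, F) kernel -> (T-k+1, F)."""
    T, C = x.shape
    k, _, F = kernel.shape
    out = np.zeros((T - k + 1, F))
    for t in range(T - k + 1):
        window = x[t:t + k]                        # (k, C) slice of the series
        out[t] = np.einsum("kc,kcf->f", window, kernel)
    return out

def soft_attention_pool(h):
    """Softmax a per-step score, then return the attention-weighted sum over time."""
    scores = h.sum(axis=1)                         # toy scoring: sum of features per step
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ h                                   # (F,) pooled descriptor

T, C = 64, 6                                       # 64 time steps, 6 sensor channels
x = rng.standard_normal((T, C))

# Multi-scale branches: one convolution per kernel size, attention-pooled per branch.
features = []
for k in (3, 5, 7):
    kern = rng.standard_normal((k, C, 8)) * 0.1    # 8 filters per branch (illustrative)
    h = np.maximum(conv1d_valid(x, kern), 0.0)     # ReLU activation
    features.append(soft_attention_pool(h))        # (8,) per branch

fused = np.concatenate(features)                   # (24,) multi-scale feature vector
print(fused.shape)
```

In a full model, a vector like `fused` (or the pre-pooling sequences) would feed an LSTM and a classification head; the sketch only shows how different kernel sizes capture fault signatures at different temporal scales before being combined.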
Stochastic optimization framework for capacity planning of hybrid solar PV–small hydropower systems using metaheuristic algorithms
Edward B. Ssekulima, Amir H. Etemadi
Pub Date: 2025-12-02 | DOI: 10.1007/s40747-025-02151-w | Complex & Intelligent Systems
Integrating histopathology and genomic data: a comparative study of fusion methods for breast cancer survival prediction
Younes Akbari, Faseela Abdullakutty, Somaya Al Maadeed, Ahmed Bouridane, Rifat Hamoudi
Pub Date: 2025-12-02 | DOI: 10.1007/s40747-025-02133-y | Complex & Intelligent Systems

Accurate breast cancer survival prediction using multi-modal data is vital for enhancing clinical decisions. This study evaluates deep-learning-based fusion strategies (early, intermediate, late, and a hybrid approach) for integrating histopathology images and genomic data for one-year survival prediction. We developed a robust evaluation framework, employing tailored deep learning architectures and metrics including accuracy, precision, recall, F1 score, and AUC. Model performance was validated using Kaplan–Meier curves and log-rank tests, with SHAP-based feature importance analysis enhancing interpretability. Results highlight the strengths and limitations of each fusion strategy, offering insights into optimal multi-modal learning approaches for breast cancer prognosis. Our findings underscore the importance of selecting task-specific fusion methods, providing a reproducible, interpretable framework to advance survival prediction. All code and configurations are publicly available.
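The early-versus-late fusion distinction the abstract compares can be sketched in a few lines of numpy. This is not the paper's implementation: the feature sizes, the random linear predictors, and the probability-averaging rule for late fusion are illustrative assumptions standing in for trained deep networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy patient batch with two modalities (hypothetical feature sizes).
n = 4
img_feat = rng.standard_normal((n, 16))   # e.g. pooled histopathology embeddings
gen_feat = rng.standard_normal((n, 10))   # e.g. selected genomic features

# --- Early fusion: concatenate raw features, then one shared predictor ---
fused = np.concatenate([img_feat, gen_feat], axis=1)   # (n, 26)
W_early = rng.standard_normal(26) * 0.1
p_early = sigmoid(fused @ W_early)                     # one-year survival probability

# --- Late fusion: one predictor per modality, average the probabilities ---
W_img = rng.standard_normal(16) * 0.1
W_gen = rng.standard_normal(10) * 0.1
p_late = 0.5 * (sigmoid(img_feat @ W_img) + sigmoid(gen_feat @ W_gen))

print(p_early.shape, p_late.shape)
```

Intermediate fusion would instead merge learned hidden representations partway through each network, and a hybrid approach combines these schemes; the trade-off the study evaluates is where in the pipeline the modalities interact.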
Decoding digital footprints: user re-identification through mobility pattern decomposition and collaborative fusion
Yu Lu, Bin Wang, Wen Du, Xiong Li, Botao Jiang
Pub Date: 2025-12-01 | DOI: 10.1007/s40747-025-02185-0 | Complex & Intelligent Systems