StockCI: a hybrid model integrating CEEMDAN and Informer for enhanced long-term stock price forecasting
Mo-Ce Gao
Pub Date: 2025-12-18  DOI: 10.1007/s40747-025-02209-9
Physics-informed neural network and momentum contrastive learning for battery state of health estimation
Jiwoo Jung, Yipene Cedric Francois Bassole, Yunsick Sung
Pub Date: 2025-12-17  DOI: 10.1007/s40747-025-02194-z
ULCOD-Net: an ultra-lightweight camouflage object detection framework with gated multi-level feature fusion and dual-constraint refinement
He Xiao, Ziyang Liu, Fugui Luo, Xue Chen, Liping Deng
Pub Date: 2025-12-16  DOI: 10.1007/s40747-025-02201-3
In resource-constrained environments such as embedded devices, unmanned platforms, and edge computing systems, lightweight camouflage object detection (LCOD) is critical for efficient and accurate target detection, since it must extract discriminative features in challenging scenes where the target blends visually into the background. Existing LCOD models reduce computational demands but often struggle to balance detection accuracy and parameter efficiency in complex scenarios. To address this, we propose ULCOD-Net, an ultra-lightweight COD framework integrating gate-based multi-feature fusion with dual constraints (boundary and region). Specifically, we introduce a lightweight boundary-region decoder (LBRD) that leverages initial region and boundary cues to enhance object localization. A gate-based multi-level feature fusion module (GMFFM) enables multi-level feature interaction via an attention-based gating mechanism, improving global information propagation and compensating for the limited capacity of lightweight networks. Additionally, a region-constrained feature refinement module (RFRM) progressively refines multi-layer features to produce high-quality camouflage maps. Extensive experiments on four benchmark datasets demonstrate that ULCOD-Net, with only 2.5 million (M) parameters and 3.1 G of computational complexity, achieves F-measure scores of 0.837, 0.758, 0.714, and 0.787 on CHAMELEON, CAMO, COD10K, and NC4K, respectively, outperforming existing lightweight COD models and even surpassing several state-of-the-art heavyweight methods. These results highlight ULCOD-Net's significant potential for real-time application in resource-limited settings.
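The abstract above describes gate-based fusion of multi-level features, where a gating signal decides how much each level contributes. The actual GMFFM design is not given here, so the following is only a minimal NumPy sketch of the general idea: an element-wise sigmoid gate blending a low-level and a high-level feature map (the function names and the scalar gate parameter `w` are illustrative, not from the paper).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_low, f_high, w=1.0):
    """Fuse a low-level and a high-level feature map with an element-wise
    sigmoid gate; the output is a per-element convex combination of the two."""
    gate = sigmoid(w * (f_low + f_high))
    return gate * f_low + (1.0 - gate) * f_high
```

Because the gate lies in (0, 1), every output element stays between the corresponding elements of the two inputs, which keeps the fused features in the same range as the originals.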
Enhancing traffic flow prediction through multi-view attention mechanism and dilated convolutional networks
Wei Li, Hao Wei, Xin Liu, Jialin Liu, Dazhi Zhan, Xiao Han, Wei Tao
Pub Date: 2025-12-15  DOI: 10.1007/s40747-025-02146-7
Accurate traffic flow forecasting serves as a cornerstone for intelligent transportation systems, enabling proactive accident prevention and metropolitan mobility optimization. However, existing approaches face fundamental limitations in modeling the spatiotemporal heterogeneity of traffic dynamics, particularly in simultaneously addressing (1) the decaying significance of temporal dependencies across input sequences and prediction horizons, (2) multi-scale spatial interactions spanning local congestion patterns and global functional correlations, and (3) inter-sample temporal variance in evolving traffic states. To address these limitations, this paper proposes MVA-DCNet (Multi-View Attention Dilated Convolutional Network), a novel deep learning architecture incorporating a multidimensional temporal analysis framework that systematically examines temporal influence mechanisms through three complementary perspectives: inter-sample variance, intra-sequence temporal importance, and output sequence temporal propagation. The proposed model systematically addresses temporal data heterogeneity through three innovative mechanisms: variance-aware data augmentation, adaptive temporal attention, and decaying loss weighting. For enhanced spatial correlation modeling, we develop a dilated convolutional architecture with enhanced receptive field coverage and multi-scale spatial pattern recognition capabilities. Empirical validation on two urban traffic datasets demonstrates superior efficacy in capturing complex spatiotemporal evolution patterns, achieving relative reductions of 12.7% and 9.3% in Root Mean Square Error (RMSE), respectively, compared with state-of-the-art benchmarks.
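One of the three mechanisms named above is "decaying loss weighting", i.e. down-weighting errors at distant prediction horizons. The paper's exact weighting scheme is not stated in this abstract; a common minimal formulation, sketched here as an assumption, is a geometric decay `gamma**h` over the horizon index, normalized to sum to one.

```python
import numpy as np

def decaying_weighted_mse(pred, target, gamma=0.9):
    """Per-horizon squared errors weighted by gamma**h, so near-term steps
    contribute more to the loss; weights are normalized to sum to 1.
    pred, target: arrays of shape (batch, horizon)."""
    horizon = pred.shape[-1]
    w = gamma ** np.arange(horizon)
    w = w / w.sum()
    return float(np.mean(np.sum(w * (pred - target) ** 2, axis=-1)))
```

With `gamma=1.0` this reduces to the ordinary mean squared error over the horizon, so the decay rate directly controls how strongly near-term accuracy is prioritized.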
ReqNet: an LLM-driven computational framework for automated requirements extraction from unstructured documents
Summra Saleem, Muhammad Nabeel Asim, Andreas Dengel
Pub Date: 2025-12-15  DOI: 10.1007/s40747-025-02143-w
Within the software development life cycle, requirements guide the entire development process from inception to completion by ensuring alignment between stakeholder expectations and the final product. Extracting requirements from miscellaneous information is a challenging and complex task: manual extraction is not only prone to human error but also contributes to increased project costs and delayed timelines. To automate the requirement extraction process, researchers have investigated the potential of deep learning (DL) architectures, large language models (LLMs), and generative language models such as ChatGPT and Gemini. However, requirements extraction performance could be further enhanced through predictive pipelines that combine the potential of language models and deep learning architectures. To that end, this study presents the ReqNet framework. The framework encompasses seven widely used LLM variants (small, large, Xlarge, XXlarge) and two DL architectures (LSTM, GRU), and facilitates the development of three distinct types of predictive pipelines: standalone LLMs, LLMs + external classifiers, and ensembles of multiple LLM representations + external classifiers. Extensive experimentation with 48 predictive pipelines across two public core datasets and one independent test set demonstrates that pipelines combining LLMs and DL architectures generally outperform pipelines relying solely on LLMs. In addition, an ensemble of three distinct LLMs (ALBERT, BERT, and XLNet) with an LSTM classifier achieved a 3% improvement in F1-score over state-of-the-art predictors on the PURE dataset, a 10% improvement on the Dronology dataset, and a 3% improvement on the RFI independent test set.
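The best pipeline above feeds an ensemble of representations from multiple LLMs into one external classifier. How ReqNet combines those representations is not specified in this abstract; a common and minimal assumption, sketched here, is concatenating the per-token embeddings from each model along the feature axis before the LSTM/GRU classifier.

```python
import numpy as np

def ensemble_representations(reps):
    """Concatenate token-level embedding sequences from several LLMs along
    the feature axis, so a single downstream LSTM/GRU classifier sees the
    combined representation. Each element of reps: (seq_len, embed_dim_i)."""
    return np.concatenate(reps, axis=-1)
```

The sequence length is preserved while the feature dimension becomes the sum of the individual embedding sizes, which is what a recurrent classifier expects as input width.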
Seed perception learning for weakly supervised semantic segmentation
Wanchun Sun, Shujia Li, Xinyu Duan
Pub Date: 2025-12-15  DOI: 10.1007/s40747-025-02152-9
The core challenge in image-level weakly supervised semantic segmentation lies in generating high-quality object localization maps from simple image labels. Class activation maps (CAMs) produced by existing methods commonly suffer from two major flaws: incomplete coverage of target regions and severe background interference. To address these issues, we present a CAM-native perception-optimization framework for weakly supervised semantic segmentation. First, we design a CAM generation mechanism guided by image-level weak supervision, which refines activated regions via discriminative region enhancement and spatial noise suppression; this promotes fine-grained pixel clustering and improves the completeness of object localization. Second, we introduce a spatial cue generator to enhance the adaptability of class representations, coupled with an inter-class relation propagation module that explicitly models inter-class relationships to suppress erroneous activations and significantly reduce spatial noise. Additionally, we incorporate a dynamic contrastive matching strategy to eliminate background activations closely associated with the target object, ultimately producing class activation maps that are both complete and compact. Extensive experiments on PASCAL VOC 2012 and MS COCO 2014 show that our method substantially outperforms existing weakly supervised approaches, validating the effectiveness of class-aware guidance and inter-class relational modeling in improving segmentation accuracy.
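For readers unfamiliar with the CAM baseline this work refines: a standard class activation map is a classifier-weight-weighted sum of the final convolutional feature maps, rectified and rescaled. The sketch below shows only that textbook baseline, not the paper's refined generation mechanism.

```python
import numpy as np

def class_activation_map(features, weights):
    """Standard CAM baseline. features: (C, H, W) conv feature maps;
    weights: (C,) classifier weights for one class.
    Returns the activation map rescaled to [0, 1]."""
    cam = np.tensordot(weights, features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)  # keep only positive class evidence
    rng = cam.max() - cam.min()
    return (cam - cam.min()) / rng if rng > 0 else cam
```

The two flaws the abstract names are visible directly in this formulation: the weighted sum highlights only the most discriminative channels (incomplete coverage), and channels that co-activate on background pass straight through (background interference).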