Beyond gaze points: augmenting eye movement with brainwave data for multimodal user authentication in extended reality
Matin Fallahi, Patricia Arias-Cabarcos, Thorsten Strufe
Pub Date : 2025-12-23  DOI: 10.1007/s40747-025-02157-4
Extended Reality (XR) technologies are becoming integral to daily life. However, password-based authentication in XR disrupts immersion due to poor usability, as entering credentials with XR controllers is cumbersome and error-prone. This leads users to choose weaker passwords, compromising security. To improve both usability and security, we introduce a multimodal biometric authentication system that combines eye movements and brainwave patterns using consumer-grade sensors that can be integrated into XR devices. Our prototype, developed and evaluated with 30 participants, achieves an Equal Error Rate (EER) of 0.298%, outperforming the eye movement (1.820%) and brainwave (4.920%) modalities alone, as well as state-of-the-art biometric alternatives (EERs between 2.5% and 7%). Furthermore, this system enables seamless authentication through visual stimuli without complex interaction.
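The Equal Error Rate reported above is the operating point where the false accept rate (FAR) and false reject rate (FRR) coincide. As a minimal sketch of how that metric is computed, the snippet below sweeps thresholds over genuine (same-user) and impostor (other-user) match scores; the score values are made-up placeholders, not the paper's data, and the paper's actual matcher is not reproduced here.

```python
def far_frr(genuine, impostor, threshold):
    """FAR and FRR at one threshold; a score >= threshold means 'accept'."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Approximate the EER: find the threshold where |FAR - FRR| is
    smallest and report the mean of the two rates there."""
    candidates = sorted(set(genuine) | set(impostor))
    best = min(candidates,
               key=lambda t: abs(far_frr(genuine, impostor, t)[0]
                                 - far_frr(genuine, impostor, t)[1]))
    far, frr = far_frr(genuine, impostor, best)
    return (far + frr) / 2

genuine = [0.9, 0.85, 0.8, 0.95, 0.7]   # illustrative same-user scores
impostor = [0.3, 0.4, 0.2, 0.75, 0.1]   # illustrative other-user scores
print(equal_error_rate(genuine, impostor))
```

In practice the fused system's lower EER reflects that combining modalities separates the genuine and impostor score distributions more cleanly than either modality alone.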
Time series prediction model based on transformer and LSTM for predicting the occurrence rate of mountain torrents
Hongtao Zhang, Peng Zhi, Longhao Jiang, Yan Li, Rui Zhou, Qingguo Zhou, Zhaxi Lengben
Pub Date : 2025-12-23  DOI: 10.1007/s40747-025-02153-8
StockCI: a hybrid model integrating CEEMDAN and informer for enhanced long-term stock price forecasting
Mo-Ce Gao
Pub Date : 2025-12-18  DOI: 10.1007/s40747-025-02209-9
Physics-informed neural network and momentum contrastive learning for battery state of health estimation
Jiwoo Jung, Yipene Cedric Francois Bassole, Yunsick Sung
Pub Date : 2025-12-17  DOI: 10.1007/s40747-025-02194-z
ULCOD-Net: an ultra-lightweight camouflage object detection framework with gated multi-level feature fusion and dual-constraint refinement
He Xiao, Ziyang Liu, Fugui Luo, Xue Chen, Liping Deng
Pub Date : 2025-12-16  DOI: 10.1007/s40747-025-02201-3
In resource-constrained environments such as embedded devices, unmanned platforms, and edge computing systems, lightweight camouflage object detection (LCOD) is critical for efficient and accurate target detection, as it facilitates the extraction of discriminative features in challenging scenes where the target blends visually into the background. Existing LCOD models reduce computational demands but often struggle to balance detection accuracy and parameter efficiency in complex scenarios. To address this, we propose ULCOD-Net, an ultra-lightweight COD framework integrating gate-based multi-level feature fusion with dual-constraint (boundary and region) refinement. Specifically, we introduce a lightweight boundary-region decoder (LBRD) that leverages initial region and boundary cues to enhance object localization. A gate-based multi-level feature fusion module (GMFFM) enables multi-level feature interaction via an attention-based gating mechanism, improving global information propagation and compensating for the limited capacity of lightweight networks. Additionally, a region-constrained feature refinement module (RFRM) progressively refines multi-layer features to produce high-quality camouflage maps. Extensive experiments on four benchmark datasets demonstrate that ULCOD-Net, with only 2.5 million (M) parameters and 3.1 giga (G) operations of computational complexity, achieves F-measure scores of 0.837, 0.758, 0.714, and 0.787 on CHAMELEON, CAMO, COD10K, and NC4K, respectively, outperforming existing lightweight COD models and even surpassing several state-of-the-art heavyweight methods. These results highlight ULCOD-Net's significant potential for real-time application in resource-limited settings.
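The attention-based gating idea behind GMFFM can be sketched in miniature: a sigmoid gate decides, per channel, how much of a deep feature versus a shallow feature to pass forward. The weights and feature values below are illustrative placeholders, not the paper's trained parameters or actual module design.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(deep, shallow, gate_weights, gate_bias):
    """Channel-wise gated fusion of two same-length feature vectors:
    fused[i] = g[i]*deep[i] + (1 - g[i])*shallow[i],
    where g[i] = sigmoid(w[i]*(deep[i] + shallow[i]) + bias)."""
    fused = []
    for d, s, w in zip(deep, shallow, gate_weights):
        g = sigmoid(w * (d + s) + gate_bias)  # gate in (0, 1)
        fused.append(g * d + (1 - g) * s)     # convex blend of the two
    return fused

deep = [1.0, -0.5, 2.0]      # illustrative deep-layer features
shallow = [0.2, 0.8, -1.0]   # illustrative shallow-layer features
print(gated_fuse(deep, shallow, [10.0, 10.0, 10.0], 0.0))
```

Because the gate output lies in (0, 1), each fused channel is a convex combination of the two inputs, which is what lets such a module route information adaptively without adding many parameters.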