
Latest articles from Complex & Intelligent Systems

A multi-UAV rapid post-disaster search and rescue method based on deep reinforcement learning
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-29 · DOI: 10.1007/s40747-025-02166-3
Li Tan, Haixia Zhao
Deep reinforcement learning shows broad promise in multi-unmanned aerial vehicle (UAV) collaborative search and rescue tasks. However, when facing high-dimensional collaborative decision-making spaces with limited computing resources, its performance is easily constrained. This paper proposes a deep deterministic policy gradient method based on linear attention. By introducing a linear attention mechanism based on random feature mapping, the method effectively models interactions among UAVs while significantly reducing the computational and storage overheads that grow with the number of UAVs. Furthermore, combining smooth experience replay with an adaptive importance sampling mechanism further improves training efficiency and policy stability. Simulation experiments on both post-disaster response search and dynamic containment tasks demonstrate that the proposed algorithm consistently outperforms existing methods. In small-scale scenarios, it maintains nearly perfect success rates, while in medium- and large-scale settings it achieves up to 90.6% and 85.2% success rates in the post-disaster response search task and up to 90.1% and 80.2% in the containment task, corresponding to relative improvements of 15–21% over baselines. These results highlight both the robustness of the method in simple cases and its clear advantage under more challenging multi-UAV conditions.
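The abstract does not give the authors' formulation, but the general idea of linear attention via random feature mapping can be sketched as follows. The Performer-style positive feature map, the function names, and the dimensions are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_map(x, proj):
    # Positive random features approximating the softmax kernel
    # (Performer-style); rows of `proj` are random Gaussian directions.
    scores = x @ proj.T - 0.5 * (x ** 2).sum(-1, keepdims=True)
    return np.exp(scores) / np.sqrt(proj.shape[0])

def linear_attention(q, k, v, proj):
    # O(n) attention: phi(Q) (phi(K)^T V) / normalizer. The n x n score
    # matrix is never formed, so cost grows linearly with the UAV count.
    qf, kf = feature_map(q, proj), feature_map(k, proj)
    kv = kf.T @ v                     # (m, d) summary of keys and values
    z = qf @ kf.sum(axis=0)           # per-query normalizer
    return (qf @ kv) / z[:, None]

n, d, m = 8, 4, 64                    # 8 UAVs, 4-dim features, 64 random features
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
proj = rng.normal(size=(m, d))
out = linear_attention(q, k, v, proj)  # (8, 4) attended features
```

Because the key/value summary `kv` is independent of the number of queries, adding more UAVs only adds rows, not a quadratic score matrix.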
Citations: 0
An improved generalized evolutionary algorithm for constrained multimodal multiobjective optimization
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-29 · DOI: 10.1007/s40747-025-02164-5
Caitong Yue, Wenhao Wu, Jing Liang, Ying Bi, Kunjie Yu, Ke Chen, Weifeng Guo
Citations: 0
An entropy-regularized counterfactual framework for robust and generalizable ABSA
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-29 · DOI: 10.1007/s40747-025-02170-7
Qian Deng, Haitong Yang, Jun Shen, Jinguang Gu, Jinshuo Liu, Meng Wang, Youcheng Yan
Citations: 0
I2D-SGG: scene graph generation via joint modeling of intra- and inter-relationship dependencies
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-28 · DOI: 10.1007/s40747-025-02208-w
Juan Lei, Jiangpeng Tian, Xiong You, Zhiwei He
Scene graph generation (SGG), which involves jointly detecting entities and inferring their relationships from images, plays a critical role in high-level visual scene understanding and reasoning tasks. Most existing SGG methods primarily focus on learning dependencies within individual triplets and follow a unidirectional reasoning paradigm, thereby overlooking the reverse constraints from predicates to entities. Moreover, they generally fail to capture inter-relationship dependencies, resulting in isolated predictions that ignore the global contextual information formed by shared entities or semantic associations. To address these limitations, this paper proposes I2D-SGG, a novel framework that jointly models both Intra- and Inter-relationship Dependencies to improve the accuracy and efficacy of SGG. First, we introduce a triple-decoder architecture with dedicated modules for decoding subject, object, and predicate, connected through a prior-enhanced sparse relation matrix. Second, decoupled conditional queries comprising position queries and content queries are strengthened via cross-layer fusion and bidirectional attention, facilitating deeper geometric and semantic interaction within each triplet. Third, a global correlation graph-based reasoning module is employed to model inter-relationships across triplets. This module utilizes Graph Convolutional Networks (GCNs) to enable cross-triplet message passing and dynamic feature aggregation, thereby supporting global context-aware relational reasoning beyond isolated triplets. Experiments on the VG-150 dataset demonstrate that I2D-SGG achieves a mean Recall@100 (mR@100) of 35.41%, outperforming the state-of-the-art one-stage method by 1.57%. Qualitative analyses further confirm its superior capability in fine-grained scene understanding. Ablation studies validate the effectiveness and generalizability of our proposed dual dependency modeling mechanism. I2D-SGG enhances the model's capacity to comprehend both intra- and inter-relationships, overcoming the limitations of unidirectional propagation, entangled query design, and isolated triplet reasoning in conventional approaches, thereby offering a new perspective for fine-grained relational modeling in complex visual scenes.
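As a rough illustration of the cross-triplet message passing the abstract describes, here is a minimal symmetrically normalized GCN layer over a toy triplet correlation graph. The layer form, the adjacency, and the weights are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    # One graph-convolution step: add self-loops, symmetrically normalize
    # the adjacency (D^-1/2 A D^-1/2), propagate features, apply ReLU.
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ feats @ weight, 0.0)

# Three triplets; triplets 0 and 1 share an entity, as do 1 and 2,
# so messages flow along those edges of the correlation graph.
adj = np.array([[0.0, 1.0, 0.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])
feats = np.eye(3)                 # one-hot triplet features (toy input)
weight = np.full((3, 2), 0.5)     # toy weight matrix, 3 -> 2 dims
h = gcn_layer(adj, feats, weight)  # (3, 2) context-aware triplet features
```

After one step, each triplet's representation already mixes in its neighbors' features, which is the mechanism that lets predictions escape purely isolated triplet reasoning.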
Citations: 0
FBCCNet: a Bayesian perspective of federated learning with crowdsourced annotations on client side
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-26 · DOI: 10.1007/s40747-025-02182-3
Hangyu Zhu, Xilu Wang, Yaochu Jin
Citations: 0
Grouping via sensitivity analysis evolutionary algorithm for high-dimensional expensive multi-objective optimization
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-26 · DOI: 10.1007/s40747-025-02160-9
Weichao Chen, Ziyang Li, Jiong Yu, Yonglin Pu
Citations: 0
NIM-STGCN: Differentiable motion decomposition for egocentric pedestrian trajectory prediction
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-26 · DOI: 10.1007/s40747-025-02190-3
Fangtao Qin, Bin Fang, Yi Wang
Pedestrian trajectory prediction from egocentric monocular video is hindered by camera motion, intermittent occlusions, and complex social interactions. We present NIM-STGCN, a unified framework whose core contribution is a differentiable view normalization (GVN) that couples an enhanced differentiable PnP layer (ED-PnP) with an SE(3) warp to align past observations into a single virtual static camera frame. Because GVN is trained end-to-end, forecasting losses back-propagate to pose estimation, yielding geometrically cleaner inputs. On the normalized histories, a lightweight Gated Convolutional Imputation Module (GCIM) recovers missing bounding-box measurements while preserving observed entries, and an efficient spatio-temporal GCN encodes agent dynamics and interactions (optionally augmented by a physics-guided kinematics–interaction prior, PKIM). A Gaussian-mixture predictor produces multi-modal futures and is optimized with a sequence-level negative log-likelihood together with a time-weighted position loss. Extensive experiments on the JAAD and PIE benchmarks show that NIM-STGCN reduces Average Displacement Error (ADE) and Final Displacement Error (FDE) by 12–18% compared to state-of-the-art methods. Code is available at https://github.com/fantot/NIM-STGCN.
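The Gaussian-mixture likelihood objective mentioned in the abstract can be illustrated with a minimal per-point negative log-likelihood for a diagonal 2-D mixture. The diagonal covariance and single-point form are simplifying assumptions, not the paper's exact sequence-level loss:

```python
import numpy as np

def gmm_nll(x, log_pi, mu, sigma):
    # Negative log-likelihood of a 2-D point x under a K-component
    # diagonal Gaussian mixture, using log-sum-exp for stability.
    # log_pi: (K,) log mixture weights; mu, sigma: (K, 2).
    diff = (x - mu) / sigma                                   # (K, 2)
    log_comp = (log_pi
                - np.log(2.0 * np.pi * sigma.prod(axis=1))
                - 0.5 * (diff ** 2).sum(axis=1))
    m = log_comp.max()
    return -(m + np.log(np.exp(log_comp - m).sum()))

# Sanity check: a single unit Gaussian centred on x gives NLL = log(2*pi).
x = np.array([0.0, 0.0])
nll = gmm_nll(x, np.log(np.array([1.0])), np.zeros((1, 2)), np.ones((1, 2)))
```

Summing such terms over future timesteps, with weights increasing toward the horizon, would give a time-weighted sequence objective in the spirit of the one described.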
Citations: 0
Credit risk prediction and heterogeneity analysis for SMEs based on large language models and multimodal data fusion
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-26 · DOI: 10.1007/s40747-025-02192-1
Chuanhe Shen, Wenjing Pan, Xu Shen
Citations: 0
ASOI: anomaly separation and overlap index, an internal evaluation metric for unsupervised anomaly detection
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-26 · DOI: 10.1007/s40747-025-02204-0
Jiyan Salim Mahmud, Zakarya Farou, Imre Lendák
Evaluating unsupervised anomaly detection presents significant challenges due to the absence of ground truth labels and the complex nature of anomaly distributions. In this study, we introduce two novel intrinsic evaluation metrics: the Anomaly Separation Index (ASI) and the Anomaly Separation and Overlap Index (ASOI), designed to overcome the limitations of traditional metrics, which cannot assess model performance without labels. ASI quantifies the degree of separation between detected anomalies and normal distributions, while ASOI incorporates both separation and distributional overlap between them, providing an innovative evaluation approach for anomaly detection models, enabling performance assessment even in the absence of ground truth labels. Extensive experiments through precision degradation tests and unsupervised anomaly detection algorithms were conducted on multiple datasets. The results indicate that the metrics consistently correlate with traditional metrics, such as the F1 score, in various benchmark datasets characterized by complex feature interactions and varying levels of anomaly contamination. ASOI showed a higher correlation with the F1 score compared to ASI and several other classical intrinsic metrics. Furthermore, the findings underscore the utility of ASOI as an internal validation measure for model optimization in unsupervised anomaly tasks. The proposed metrics are computationally efficient, scalable, and adaptable to a variety of anomaly detection scenarios, making them practical for real-world applications across industries such as cybersecurity, fraud detection, and predictive maintenance.
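The abstract does not spell out the ASI/ASOI formulas, so the following is only a generic standardized-gap sketch of label-free separation between detector-flagged scores and the remaining scores. The formula is illustrative and is not the paper's metric:

```python
import numpy as np

def separation_index(scores, flagged):
    # Standardized gap between scores the detector itself flags as
    # anomalous and the rest; larger values mean cleaner separation.
    # Illustrative stand-in only, not the ASI/ASOI definition.
    anom, norm = scores[flagged], scores[~flagged]
    pooled = np.sqrt(0.5 * (anom.var() + norm.var()) + 1e-12)
    return (anom.mean() - norm.mean()) / pooled

scores = np.array([0.10, 0.20, 0.15, 0.05, 0.90, 0.95])
flagged = scores > 0.5            # detector's own decisions, no ground truth
sep = separation_index(scores, flagged)
```

The key property such an intrinsic score shares with ASI/ASOI is that it needs no ground-truth labels, only the detector's own score distribution, so it can drive model selection on unlabeled data.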
Citations: 0
Mismatching points rejection in multi-modal super-wide field-of-view infrared distorted image registration with global selection-preserving matching
IF 5.8 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-12-24 · DOI: 10.1007/s40747-025-02202-2
Fuyu Huang, Jun Zou, Limin Liu, Zhaogang Cheng, Fang Zhao, Mingliang Gao, Dongdong Shi
Citations: 0