
Applied Soft Computing — Latest Publications

Accelerating pattern mining on fuzzy data by packing truth values into blocks of bits
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-14 · DOI: 10.1016/j.asoc.2026.114661
Michal Burda
In pattern mining from tabular data using fuzzy logic, a common task involves computing triangular norms (t-norms) to represent conjunctions of fuzzy predicates and summing the resulting truth values to evaluate rule support or other pattern quality measures. Building on previous work, this paper presents an approach that packs multiple fuzzy truth values into a single integer and performs t-norm computations directly on this compact representation. By using 4-, 8-, or 16-bit precision, the method substantially reduces memory consumption and improves computational efficiency. For example, with 8-bit precision—offering two decimal places of accuracy—it requires only one-quarter of the memory and achieves a 3–16× speedup compared to the conventional floating-point-based computation. The proposed method is also compared with a traditional computation approach optimized using advanced Single-Instruction/Multiple-Data (SIMD) CPU operations, demonstrating its superior performance on modern architectures.
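To make the packing idea concrete, here is a minimal NumPy sketch (not the paper's implementation) that quantizes truth values to 8-bit lanes, stores eight lanes per 64-bit word, and evaluates the Gödel (minimum) t-norm and rule support lane-wise. The 0–255 quantization scale and the function names are illustrative assumptions; the actual method performs the t-norm with bit-level/SIMD operations on the packed words.

```python
import numpy as np

def pack_uint8(truth):
    """Quantize truth values in [0, 1] to 8-bit lanes and view them as 64-bit blocks."""
    q = np.round(np.asarray(truth) * 255).astype(np.uint8)   # 8-bit precision (~two decimals)
    pad = (-q.size) % 8                                       # pad to a multiple of 8 lanes
    q = np.concatenate([q, np.zeros(pad, dtype=np.uint8)])
    return q.view(np.uint64)                                  # eight lanes per 64-bit word

def godel_support(packed_a, packed_b, n_rows):
    """Support of a conjunction under the Gödel (minimum) t-norm on packed data."""
    lanes_a = packed_a.view(np.uint8)
    lanes_b = packed_b.view(np.uint8)
    conj = np.minimum(lanes_a, lanes_b)                       # lane-wise t-norm
    return conj[:n_rows].sum() / (255.0 * n_rows)             # normalized rule support

# toy example: truth degrees of two fuzzy predicates over ten rows
a = np.random.rand(10)
b = np.random.rand(10)
print(godel_support(pack_uint8(a), pack_uint8(b), n_rows=10))
```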
Citations: 0
A heuristic payload configuration method for UAV swarm based on hybrid genetic algorithm and variable neighborhood search
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-14 · DOI: 10.1016/j.asoc.2026.114604
Zitang Zhang, Qian Sun, Yujie Huang, Yibing Li
With the advancement of unmanned systems technology, unmanned aerial vehicle (UAV) swarms composed of miniaturized, heterogeneous, and intelligent platforms have emerged as a new operational paradigm in military operations. However, efficient heterogeneous UAV swarm payload configuration (HUPC) remains a significant challenge due to limited single-platform capabilities, which is further exacerbated by the increasing number of platforms, strong coupling among diverse resource types, and heterogeneous mission requirements. To address this issue, this paper proposes a payload configuration algorithm tailored to the operational characteristics of UAV swarms. The HUPC problem is formulated as a bi-variable integer nonlinear programming model, with the objective of minimizing overall configuration cost while satisfying multiple operational constraints. To solve the above model, a heuristic initialization strategy is developed based on multi-attribute encoding and task-driven prioritization, combined with a parallel evolutionary and variable neighborhood search (VNS) approach under the genetic algorithm’s (GA) framework. The algorithm leverages accumulated historical experience during the optimization process to efficiently derive payload configuration schemes that meet mission requirements. Simulation results demonstrate that, compared with existing approaches, the proposed method reduces the average configuration cost by approximately 10% in small-scale scenarios and about 7% in medium- and large-scale scenarios, while maintaining stable performance under multiple operational constraints. This demonstrates that the proposed method ensures the feasibility and rationality of payload configuration schemes across tasks of varying scales.
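As a rough illustration of the hybrid search structure only (not the paper's HUPC model, encoding, constraints, or initialization strategy), the sketch below evolves integer payload-count vectors with a plain genetic algorithm and refines the best individual each generation with a small variable neighborhood search; the toy cost function and all parameter values are assumptions.

```python
import random

def cost(x):
    """Toy configuration cost: total payload count plus a penalty for unmet demand (illustrative only)."""
    demand = [3, 2, 4, 1]
    return sum(x) + 10 * sum(max(0, d - xi) for d, xi in zip(demand, x))

def vns(x, k_max=3):
    """Variable neighborhood search: perturb k genes, keep improvements, otherwise widen the neighborhood."""
    best, k = list(x), 1
    while k <= k_max:
        cand = list(best)
        for i in random.sample(range(len(cand)), k):
            cand[i] = max(0, cand[i] + random.choice([-1, 1]))
        if cost(cand) < cost(best):
            best, k = cand, 1
        else:
            k += 1
    return best

def hybrid_ga_vns(pop_size=20, genes=4, generations=50):
    pop = [[random.randint(0, 5) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, genes)
            child = p1[:cut] + p2[cut:]                  # one-point crossover
            if random.random() < 0.2:                    # mutation
                j = random.randrange(genes)
                child[j] = max(0, child[j] + random.choice([-1, 1]))
            children.append(child)
        pop = elite + children
        pop[0] = vns(pop[0])                             # local refinement of the best individual
    return min(pop, key=cost)

print(hybrid_ga_vns())
```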
Citations: 0
MAGF-CCL: Multi-level attentive graph fusion with cross-modal complementary learning for internal control material weaknesses prediction
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-14 · DOI: 10.1016/j.asoc.2026.114656
Xuan Zhang, Boyu Hu, Xusheng Sun, Jingling Ma, Gang Wang, Tingting (Rachel) Chung
Internal control material weaknesses (ICMW) are often early warnings of possible financial misstatements or fraud that can lead to financial distress. Therefore, accurately predicting ICMW is crucial to mitigating greater losses. Recent studies have shown that multi-modal data holds significant promise for predicting ICMW in listed companies. However, the complementary effects of multi-modal data remain underexplored. This limits the model’s ability to fully capture ICMW clues. Furthermore, existing studies primarily treat companies as independent entities. They overlook the inter-company relationships that may influence the final prediction results. To address these limitations, this study proposes a Multi-level Attentive Graph Fusion with Cross-modal Complementary Learning (MAGF-CCL) method for ICMW prediction. Specifically, first, the instance-level graphs are constructed using the k-nearest neighbors (KNN) algorithm. A Graph Convolutional Network (GCN) is then employed to learn inter-company relationships in the graphs. Second, a Multi-modal Complementary Learning (MCL) module is designed to explore the multi-modal complementarity, hence fully capturing ICMW clues. Third, to integrate the modalities effectively, the numerical and textual graphs are fused using a Modality-level Fusion Mechanism (MFM) and a Structure-level Fusion Mechanism (SFM). These fusion modules combine the multi-modal data and structural relationships, respectively. Finally, the fused graph is subsequently fed into a GCN to facilitate cross-modal information propagation and enhance ICMW prediction. Experimental results on a real-world dataset demonstrate that the proposed MAGF-CCL method outperforms state-of-the-art (SOTA) methods in predicting ICMW. The AUC value of MAGF-CCL achieved 91.04%, outperforming existing SOTA methods by nearly 3%. This study also visualized the inter-company relationships and attention maps of the MFM module, thereby providing relevant decision support for stakeholders.
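The graph-construction and propagation steps can be pictured with a small single-modality sketch; the fusion mechanisms and complementary learning are not reproduced here. A k-nearest-neighbor adjacency is built from company features and passed through one symmetrically normalized GCN layer; the feature dimensions, neighbor count, and random weights are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def gcn_layer(X, A, W):
    """One graph-convolution step: symmetrically normalized propagation followed by a ReLU."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# toy set-up: 6 companies described by 4 numerical features
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
A = kneighbors_graph(X, n_neighbors=2, mode="connectivity").toarray()
A = np.maximum(A, A.T)                                   # symmetrize the KNN graph
W = rng.normal(size=(4, 3))                              # learnable weights (random here)
H = gcn_layer(X, A, W)
print(H.shape)  # (6, 3) node embeddings
```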
Citations: 0
Multi-view graph neural networks by augmented aggregation
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-14 · DOI: 10.1016/j.asoc.2026.114621
Long Shi, Junyu Chen, Lei Cao, Jun Wang, Jinghua Tan, Badong Chen
Recently, multi-view Graph Neural Networks (GNNs) have garnered increasing interest. However, three critical research aspects still remain challenging: 1) capturing underlying correlation information between views, 2) extracting intrinsic graph structure features, and 3) aggregating graph information from different views. To address these challenges, we propose a novel multi-view graph neural network framework. Specifically, we capture the local correlation between the views in the kernel feature space. By stacking the mapped graph matrices into a tensor, tensor decomposition is then performed to extract the global correlation among different graphs, which enhances both the adjacency and feature matrices. To explore the inherent graph structure features, we design an unsupervised scheme for filtering out low-relevance neighbors. This is achieved by initially constructing a score matrix based on similarity measures to evaluate the neighbor importance, and then designing a node-filtering strategy to balance important neighbors and fruitful edges. Finally, we design an augmented cross-aggregation module to enable in-depth intra-aggregation and inter-aggregation. Experimental results on real-world datasets show that our method outperforms several advanced graph neural network methods. The code will soon be released in a preprint version.
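To illustrate the neighbor-filtering idea in isolation (a simplified stand-in, not the paper's unsupervised scheme or its kernel and tensor components), the sketch below scores existing edges by cosine similarity of node features and keeps only the top-k neighbors per node; the similarity measure and the value of k are assumptions.

```python
import numpy as np

def filter_low_relevance_neighbors(A, X, keep=2):
    """Score existing edges by feature cosine similarity and retain only the
    `keep` most relevant neighbors of each node (illustrative node filtering)."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T                                    # score matrix of pairwise similarities
    A_new = np.zeros_like(A)
    for i in range(A.shape[0]):
        neighbors = np.flatnonzero(A[i])
        if neighbors.size == 0:
            continue
        ranked = neighbors[np.argsort(-S[i, neighbors])]
        A_new[i, ranked[:keep]] = 1                  # keep only the highest-scoring edges
    return np.maximum(A_new, A_new.T)                # keep the graph symmetric

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))
A = (rng.random((6, 6)) > 0.5).astype(float)
np.fill_diagonal(A, 0)
A = np.maximum(A, A.T)
print(filter_low_relevance_neighbors(A, X))
```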
Citations: 0
A neural knowledge learning-driven artificial bee colony algorithm with reinforcement adaptation for global optimization
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-14 · DOI: 10.1016/j.asoc.2026.114662
Gurmeet Saini, Shimpi Singh Jadon
Evolutionary algorithms often suffer from search inefficiency due to their inability to systematically reuse historical search patterns, leading to redundant exploration and premature stagnation. Addressing this limitation, we propose KLABC-RL, a novel framework that synergizes Reinforcement Learning (RL) with Knowledge Learning Evolutionary Computation (KLEC) within the Artificial Bee Colony (ABC) paradigm. Unlike conventional hybrids that enforce static knowledge transfer, KLABC-RL employs a Q-learning-based adaptive agent to dynamically govern the search process. This agent intelligently toggles between an Artificial Neural Network (ANN)-driven Knowledge Learning Model (KLM) for exploitation and standard ABC operators for exploration, thereby effectively preventing negative knowledge transfer. To further mitigate stagnation, a Hilbert space-based perturbation strategy is integrated into the scout phase, enhancing population diversity. Comprehensive evaluations on 23 classical benchmark functions, the IEEE CEC 2019 suite, and complex real-world engineering problems, namely planar kinematic arm control and photovoltaic (PV) parameter extraction, demonstrate the superiority of KLABC-RL. Comparative analysis against seven state-of-the-art algorithms and four hybrid variants of ABC reveals that KLABC-RL achieves significantly faster convergence and higher solution accuracy. Rigorous statistical validation, including Wilcoxon Rank-Sum, Friedman, and ANOVA tests, confirms the robustness and efficacy of the proposed framework in advancing intelligent evolutionary search.
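A minimal sketch of the adaptive switching idea follows, assuming a single-state Q-learning agent, a toy sphere objective, and a best-guided move standing in for the ANN-based KLM; none of this is the paper's actual operator set, reward design, or scout-phase perturbation.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def explore(x):
    """ABC-style random neighbor (exploration operator)."""
    return [v + random.uniform(-0.5, 0.5) for v in x]

def exploit(x, best):
    """Knowledge-guided step toward the current best solution (stand-in for the KLM)."""
    return [v + 0.5 * (b - v) for v, b in zip(x, best)]

def q_learning_search(dim=5, iters=200, alpha=0.1, gamma=0.9, eps=0.2):
    q = {0: 0.0, 1: 0.0}                                  # action values: 0 = explore, 1 = exploit
    x = [random.uniform(-5, 5) for _ in range(dim)]
    best = list(x)
    for _ in range(iters):
        a = random.choice([0, 1]) if random.random() < eps else max(q, key=q.get)
        cand = explore(x) if a == 0 else exploit(x, best)
        reward = sphere(x) - sphere(cand)                 # positive if the move improved fitness
        q[a] += alpha * (reward + gamma * max(q.values()) - q[a])
        if sphere(cand) < sphere(x):
            x = cand
        if sphere(x) < sphere(best):
            best = list(x)
    return best, sphere(best)

print(q_learning_search())
```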
Citations: 0
Auto-seg: An automated G-code interpreter and 1DCNN-based framework for signal segmentation and synchronization in CNC machining
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-14 · DOI: 10.1016/j.asoc.2026.114644
Che-Wei Chou, Hwai-Jung Hsu, Kai-Chun Huang, Yu-Chieh Chen
High machining accuracy is crucial in CNC turning when manufacturing workpieces. Techniques such as thermal compensation and tool wear prediction reduce errors but rely heavily on accurate machining signals. However, variations in operations, turning tools, machining setting parameters, and noise complicate identifying the actual machining time, posing significant challenges for subsequent tasks. This study proposes a novel framework for identifying machining time intervals and lengths based on the CNC programming language (G-code) and multiple sensor signals. The proposed auto-Seg approach integrates G-code parsing with synchronized data acquisition and signal segmentation, enabling automatic and precise identification of machining states to enhance temporal alignment and enable precise feature extraction without manual intervention. The framework first analyzes the G-code to calculate the theoretical machining time, then uses motor signals to identify the actual start and end points of machining, and finally maps the vibration signals to these intervals to extract the actual machining segments. The segmented vibration data is used to pre-train a Convolutional Neural Network (CNN), enabling the model to identify cutting signals and verify their alignment with G-code-defined periods. To validate the proposed auto-Seg approach, various workpieces were tested in different factories. The results showed that the auto-Seg approach accurately identified cutting segments and their corresponding machining durations. This not only demonstrates the effectiveness of the proposed signal synchronization and segmentation framework but also reliably enhances data analytics, monitoring, and diagnostics in CNC machining, using lightweight models suitable for edge deployment and real-world applications.
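The G-code side of the pipeline can be sketched as follows, assuming a simple two-axis turning program with absolute coordinates and feed rate in mm/min. This toy interpreter only estimates the theoretical cutting time from G01 feed moves and is far simpler than the paper's full interpreter and signal-alignment stages.

```python
import math
import re

def theoretical_cutting_time(gcode_lines):
    """Sum distance/feed over G01 feed moves (rapid G00 moves contribute no cutting time).

    Assumes absolute XZ coordinates and feed rate F in mm/min, as in a simple turning program.
    """
    pos = {"X": 0.0, "Z": 0.0}
    feed = None
    minutes = 0.0
    for line in gcode_lines:
        words = dict(re.findall(r"([GXZF])(-?\d+\.?\d*)", line.upper()))
        if "F" in words:
            feed = float(words["F"])                      # modal feed rate
        target = {ax: float(words.get(ax, pos[ax])) for ax in ("X", "Z")}
        if words.get("G") in ("01", "1"):                 # linear feed move
            dist = math.hypot(target["X"] - pos["X"], target["Z"] - pos["Z"])
            if feed:
                minutes += dist / feed
        pos = target
    return minutes * 60.0                                 # seconds

program = ["G00 X50 Z2", "G01 Z-40 F100", "G01 X52 F80", "G00 X100 Z100"]
print(round(theoretical_cutting_time(program), 2), "s")
```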
Citations: 0
Efficient incomplete multi-view tensor clustering through predefined anchor learning
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-14 · DOI: 10.1016/j.asoc.2026.114660
Zhuojun Han, Yitian Xu
Real-world multi-view datasets are often large and incomplete, driving anchor-based multi-view clustering (MVC) to be extended toward incomplete multi-view clustering (IMVC). Among anchor-based approaches, predefined-anchor methods are attractive due to their high efficiency without iterative anchor refinement. However, when applied to incomplete views, they still face two major challenges: unstable anchor selection and limited utilization of high-order information. These limitations degrade the quality of embedding features and affect clustering performance. To address these challenges, we propose IMVC-TPAL (Efficient Incomplete Multi-View Tensor Clustering through Predefined Anchor Learning), which begins with a customized anchor selection strategy that reduces randomness and mitigates the impact of missing views, and further incorporates adaptive anchor graph completion directly into the embedding learning process. Additionally, a tensor-based low-frequency approximation operator is employed to explore intra-view similarity, resulting in smooth and discriminative embedding features. In experiments conducted on five datasets under three missing-view ratios, IMVC-TPAL achieves the best performance on 73.3% of all evaluation metrics and ranks second on the remaining ones, demonstrating its effectiveness. These results confirm that our method successfully integrates predefined-anchor learning with the incomplete multi-view setting, providing a reliable and scalable solution for IMVC.
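For intuition about the anchor-graph machinery, here is a complete-data, single-view sketch; the paper's predefined-anchor selection, missing-view handling, and tensor low-frequency operator are not reproduced. Anchors are picked with k-means, a row-normalized similarity graph to the anchors is built, and its spectral embedding is clustered. The bandwidth sigma, anchor count, and toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def anchor_graph_clustering(X, n_anchors=10, n_clusters=3, sigma=1.0, seed=0):
    """Anchor-graph clustering sketch: k-means anchors, Gaussian similarities,
    then a spectral embedding from the SVD of the anchor graph."""
    anchors = KMeans(n_clusters=n_anchors, n_init=10, random_state=seed).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)   # squared distances to anchors
    Z = np.exp(-d2 / (2 * sigma ** 2))
    Z /= Z.sum(axis=1, keepdims=True)                            # row-normalized anchor graph
    U, _, _ = np.linalg.svd(Z, full_matrices=False)              # left singular vectors as embedding
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(U[:, :n_clusters])

# toy data: three well-separated Gaussian clusters in 5 dimensions
X = np.vstack([np.random.default_rng(i).normal(loc=3 * i, size=(30, 5)) for i in range(3)])
print(anchor_graph_clustering(X))
```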
Citations: 0
Hopfield network-based algorithm for combinatorial optimization
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-13 · DOI: 10.1016/j.asoc.2026.114652
Houssam Hamdouch, Safae Rbihou, Kaoutar Senhaji, Khalid Haddouch
Combinatorial optimization problems (COPs) present significant computational challenges due to their discrete nature, increasing complexity, and NP-hard characteristics. Identifying an effective solver is particularly difficult given the large variety of existing techniques, including exact algorithms, metaheuristics, and neural-network-based approaches such as Hopfield networks (HNs). Although HNs have shown strong potential for solving complex COPs through an energy-minimization framework, their performance is highly sensitive to the choice of hyperparameters and the initialization strategy, both of which require careful tuning. This paper introduces a new method that enhances the effectiveness of HNs for COPs by jointly optimizing their hyperparameters and starting point using the Arithmetic Optimization Algorithm (AOA). The goal is to develop a recurrent-neural-network-based approach that leverages systematic hyperparameter tuning and optimal initialization to improve solution quality and convergence behavior. Experimental results demonstrate that the proposed method achieves optimal solutions on 20 instances of the task assignment problem (TAP) and provides high-quality solutions for the graph coloring problem (GCP) and the traveling salesman problem (TSP) within reasonable computational times. Compared to a genetic algorithm (GA) and traditional HNs with random hyperparameter selection, the proposed approach achieves performance improvements of 52.83% for TAP, 28.97% for GCP, and 9.32% for TSP.
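The underlying energy-minimization mechanics can be sketched with a discrete Hopfield network on a tiny toy instance; the paper's contribution, the AOA-based tuning of hyperparameters and of the starting point, is not shown. The weight matrix below encodes an illustrative "exactly one unit on" constraint and is an assumption.

```python
import numpy as np

def hopfield_minimize(W, b, steps=200, seed=0):
    """Asynchronous discrete Hopfield updates that never increase the energy
    E(x) = -0.5 * x^T W x - b^T x, for symmetric W with zero diagonal."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = rng.integers(0, 2, size=n).astype(float)          # random binary starting point
    for _ in range(steps):
        i = rng.integers(n)
        x[i] = 1.0 if W[i] @ x + b[i] > 0 else 0.0         # threshold update of one neuron
    energy = -0.5 * x @ W @ x - b @ x
    return x, energy

# toy 2-neuron instance: W penalizes co-activation, b rewards activating a unit,
# so the minima are the states with exactly one unit switched on
W = np.array([[0.0, -2.0], [-2.0, 0.0]])
b = np.array([1.0, 1.0])
print(hopfield_minimize(W, b))
```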
Citations: 0
End-to-end discrete cosine transform integration in spectral convolutional neural networks for resource-efficient deep learning
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-13 · DOI: 10.1016/j.asoc.2026.114599
Ibrahim Yousef Alshareef, Ab Al-Hadi Ab Rahman, Nuzhat Khan, Hasan Alqaraghuli
Spectral convolutional neural networks using the Fast Fourier Transform (FFT) often suffer from high computational complexity and memory demands due to complex-valued operations and the need for inverse transforms, limiting their deployment on resource-constrained devices. This paper presents a novel end-to-end spectral convolutional neural network (SpCNN) architecture that operates entirely in the Discrete Cosine Transform (DCT) domain, eliminating the need for inverse transformations and complex arithmetic. Leveraging the DCT’s real-valued representation and superior energy compaction, the proposed design significantly reduces computational workload and memory usage while preserving classification accuracy. Key innovations include the removal of IFFT layers, a frequency-domain adaptive activation function (FReLU), and a DCT-optimized spectral pooling mechanism, each tailored for deployment in low-power, resource-constrained environments. Experimental evaluations on MNIST and a 94-class ASCII dataset demonstrate the model’s efficiency: LeNet5-DCT achieves a 37.96% FLOPs reduction, 18.45% lower memory usage, and 96.56% test accuracy, while VGG7-DCT achieves a 33.95% FLOPs reduction, 14.32% lower memory usage, and 90.62% test accuracy. The architecture also shows strong robustness to quantization, confirming its suitability for edge AI applications and low-energy inference. This work provides a scalable, hardware-efficient spectral learning framework, paving the way for future hybrid spectral models optimized for embedded environments.
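The DCT energy compaction that the architecture relies on for real-valued spectral pooling can be demonstrated with a short SciPy sketch (illustrative only, not the SpCNN layer implementation); the 32×32 toy image, the 8×8 retained low-frequency block, and the noise level are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
# smooth toy "image": a low-frequency ramp plus mild noise
x = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32)) + 0.05 * rng.normal(size=(32, 32))

C = dctn(x, norm="ortho")                  # real-valued 2-D DCT, no complex arithmetic
k = 8
mask = np.zeros_like(C)
mask[:k, :k] = 1                           # spectral pooling: keep the low-frequency block
print("energy kept:", ((C * mask) ** 2).sum() / (C ** 2).sum())

x_rec = idctn(C * mask, norm="ortho")      # reconstruction from the truncated spectrum
print("reconstruction MSE:", np.mean((x - x_rec) ** 2))
```

Because most of the signal energy sits in the low-frequency block, the truncated representation is both smaller and cheaper to process, which is the motivation for keeping the whole network in the DCT domain.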
Citations: 0
Boundary-guided large-scale vision model for unified multi-domain industrial anomaly detection
IF 6.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-13 · DOI: 10.1016/j.asoc.2026.114650
Zixuan Zhang, Fan Shi, Chen Jia, Mianzhao Wang, Xu Cheng
Extracting shared boundary cues across different anomaly domains is critical to enhancing generalization on unseen data, thereby laying the foundation for unified industrial anomaly detection paradigms. Existing unified detection paradigms often directly extract discriminative features from multiple data domains. However, due to the inherent semantic gaps between different data sources, bridging this disparity within a shared feature representation across multiple data domains remains a key challenge. To address this challenge, we propose a boundary-guided large-scale vision model that extracts commonalities across diverse domains. Specifically, we generate initial feature embeddings by establishing a multi-domain normal sample repository and employing a parameter coupling strategy. This captures shared boundary information across different data domains, thereby reducing the inherent semantic gaps. For anomalous feature synthesis, we incorporate this boundary information into the generation process, ensuring that the synthesized features retain critical structural details while expanding the coverage of potential anomalous data distributions. Additionally, to enhance feature space separation between normal and anomalous samples, we introduce a hybrid constraint optimization mechanism that improves the discriminative ability of the model. Extensive experiments on the MVTec AD, VisA, and MPDD datasets demonstrate that our method achieves state-of-the-art performance across various industrial scenarios. Experimental results demonstrate the effectiveness of boundary-guided shared information for multi-domain anomaly detection.
Citations: 0