
Latest articles from Neurocomputing

Feature-level attention network with group-aware interest modeling for sequential recommendation
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-26 | DOI: 10.1016/j.neucom.2025.130302
Wei Jiang, Yongquan Fan, Yajun Du, Xianyong Li, Xiaomin Wang
Sequential recommendation focuses on modeling user preferences based on their historical interaction sequences to predict future behaviors with greater precision. Incorporating feature-level information beyond item IDs has become a crucial approach to improving the performance of recommendation systems. However, existing methods overlook the hierarchical group relationships among users. This limitation prevents these methods from fully capturing user preferences, leading to an incomplete understanding of their true interests. Meanwhile, effectively leveraging multi-source information in recommendation systems remains a significant challenge. Existing methods typically rely on simple techniques such as pooling or concatenation to integrate information from different sources, which can degrade overall performance. To address these limitations, we propose a novel approach: Feature-level Attention Network with Group-aware Interest Modeling for Sequential Recommendation (FANGIM). Specifically, we first employ two distinct encoders to generate user embeddings at different levels. Next, we introduce a group clustering module, which identifies potential interest groups at multiple granularities and derives user group interest embeddings for both item-level and feature-level interactions. Furthermore, we design a multi-source representation fusion module that effectively integrates information from diverse sources, reducing the semantic gap between different representation spaces. Additionally, we incorporate contrastive learning within this module to ensure consistency between the different levels of representations. Finally, extensive experiments demonstrate that FANGIM outperforms state-of-the-art baselines across four datasets.
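As an illustration of the contrastive consistency idea described above, the sketch below computes an InfoNCE-style loss that pulls a user's item-level and feature-level embeddings together while treating other users in the batch as negatives. This is a minimal, generic sketch; the function name, temperature value, and use of in-batch negatives are assumptions rather than details taken from the paper.

```python
import numpy as np

def info_nce_consistency(item_emb, feat_emb, temperature=0.1):
    """InfoNCE-style consistency between item-level and feature-level user
    embeddings: each user's two views form a positive pair, and the other
    users in the batch act as negatives. Both inputs have shape (batch, dim)."""
    a = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    b = feat_emb / np.linalg.norm(feat_emb, axis=1, keepdims=True)
    logits = a @ b.T / temperature                    # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                # positives sit on the diagonal

rng = np.random.default_rng(0)
item_emb = rng.normal(size=(32, 64))
feat_emb = item_emb + 0.1 * rng.normal(size=(32, 64))  # roughly consistent second view
print(info_nce_consistency(item_emb, feat_emb))
```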
Citations: 0
Fast blind image deblurring via patch-wise maximum content-weighted prior
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-25 | DOI: 10.1016/j.neucom.2025.130267
Zheng Guo, Wei Yan, Zirui Zhang, Zhixiang Wu, Zhenhua Xu, Chunyong Wang, Jiancheng Lai, Zhenhua Li
Blind image deblurring aims to derive the blur kernel and the corresponding clear image solely from blurred images. This paper introduces an innovative blind image deblurring method based on the patch-wise maximum content-weighted prior (PMCW). Our work originates from the intuitive observation that the maximum content-weighted value of non-overlapping patches decreases significantly after blurring degradation, which we demonstrate both mathematically and empirically. Building upon this observation, we propose a novel blind deblurring model combining an L0-regularized PMCW prior and an L0-regularized gradient prior, and develop an efficient solution algorithm utilizing projected alternating minimization (PAM). Qualitative and quantitative evaluation results on multiple benchmark datasets indicate that our proposed model achieves optimal performance, surpassing state-of-the-art algorithms in solving efficiency and on various quantitative metrics.
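To make the patch-wise statistic concrete, the sketch below computes a per-patch maximum of a content-weighted value and compares a sharp image with a blurred copy. The specific content weighting (pixel intensity times local gradient magnitude), the patch size, and the box blur are illustrative assumptions, not the paper's exact definition of PMCW.

```python
import numpy as np

def patchwise_max_content_weighted(img, patch=8):
    """For each non-overlapping patch, return the maximum of an illustrative
    content-weighted value: pixel intensity weighted by local gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    weighted = img * np.hypot(gx, gy)
    h, w = img.shape
    h, w = h - h % patch, w - w % patch                    # crop to a multiple of patch
    blocks = weighted[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return blocks.max(axis=(1, 3))                         # one maximum per patch

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0                            # simple box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, sharp.shape)))
# the per-patch maxima drop after blurring, which is the intuition behind the prior
print(patchwise_max_content_weighted(sharp).mean(),
      patchwise_max_content_weighted(blurred).mean())
```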
Citations: 0
A fast solution method for the Dynamic Flexible Pickup and Delivery Problem with task allocation fairness for multiple vehicles
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-24 | DOI: 10.1016/j.neucom.2025.130266
Zhihui Sun, Ran Tian, Jiarui Wu, Xin Lu, Jinshi Wang
The Dynamic Flexible Pickup and Delivery Problem (DFPDP) originates from the practical needs of multi-warehouse management strategies and is one of the important challenges currently facing the logistics and distribution field. In DFPDP, it is necessary to handle dynamic order fluctuations, quickly plan routes for a heterogeneous fleet, ensure fairness in task allocation, and minimize total travel time under time window constraints. However, there is currently little research on this problem, and traditional heuristic algorithms struggle to find a solution quickly. First, we propose a Multimodal Constraint Dynamic Scheduling Mechanism (MCDSM) that selects the vehicle with the lowest current time consumption, making task allocation between vehicles as fair as possible. Second, we propose a Parallel Encoder-Serial Decoder model integrating Variable-length Sequences (PESDVS), in which the designed variable-length sequences effectively handle dynamically generated orders and changes in the number of pickup and delivery locations, while the trained model adapts to different order scenarios. In addition, the model improves the quality of order decisions through a parallel encoder and serial decoder structure to minimize the total traveling time of the fleet. Extensive experimental results demonstrate that our method delivers excellent performance and good generalization ability across different order sizes. At the same time, compared with heuristic algorithms, our method quickly finds a feasible solution while keeping task allocation between vehicles relatively fair.
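The fairness-oriented dispatch rule can be illustrated in a few lines: the sketch below always hands a new order to the vehicle with the lowest accumulated time, which is the balancing idea attributed to MCDSM above. The Vehicle class, the travel-time lambda, and all numbers are hypothetical placeholders, not the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    name: str
    elapsed: float = 0.0                     # accumulated travel time so far
    tasks: list = field(default_factory=list)

def assign_order(vehicles, order, travel_time):
    """Balancing dispatch rule: give the new order to the vehicle with the
    lowest accumulated time, so workloads stay as even as possible."""
    chosen = min(vehicles, key=lambda v: v.elapsed)
    chosen.tasks.append(order)
    chosen.elapsed += travel_time(chosen, order)
    return chosen

fleet = [Vehicle("A"), Vehicle("B"), Vehicle("C")]
cost = lambda veh, order: 1.0 + 0.1 * len(veh.tasks)   # hypothetical travel-time model
for order_id in range(6):
    assign_order(fleet, order_id, cost)
print([(v.name, round(v.elapsed, 2), v.tasks) for v in fleet])
```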
Citations: 0
Pick and mix reliable pseudo labels for scribble-supervised medical image segmentation
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-24 | DOI: 10.1016/j.neucom.2025.130293
Jiawei Su, Zhiming Luo, Dazhen Lin, Lihui Lin, Shaozi Li
Scribble-supervised segmentation methods have attracted significant attention in the field of medical imaging because of their potential to alleviate the data annotation burden. However, these methods often underperform due to a lack of sufficient supervision. Various methods have attempted to enrich the supervisory signals in different ways, including mixing pseudo labels from different samples (referred to as Mixup-based methods). However, these methods primarily focus on the quantity of enriched supervisory signals while disregarding their quality. This oversight is a major drawback: low-quality signals are often contaminated with noise, which can undermine performance. Therefore, it is crucial not only to introduce diverse supervisory signals but also to ensure their quality and reliability. Motivated by this understanding, we propose a new framework named Pick & Mix, which builds upon the Mixup-based method. In the first step, we leverage the consistency of intra-class features to assess the reliability of pseudo-labels. To enhance the quality of pseudo-labels, we assign lower weights to unreliable pseudo-labels to mitigate the noise effect in the training process. Furthermore, we use a threshold to pick reliable pseudo-labels based on their reliability scores. In the second step, we mix the reliable pseudo-labels from various samples and generate corresponding mixed images to provide richer supervisory signals for model training. In this manner, we enhance the quality of supervisory signals by generating and picking reliable ones, and enrich their quantity through mixing. Finally, we evaluated our framework on three publicly available datasets: ACDC, MSCMRseg, and BraTS2020. The experimental results demonstrate that our approach achieves state-of-the-art performance.
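The pick-then-mix pipeline described in the abstract can be mimicked with a small NumPy routine: score each pseudo-labeled sample by the cosine similarity of its feature to its pseudo-class prototype, keep the samples above a threshold, and mix pairs of the kept samples and their one-hot labels. The prototype-based reliability score, the threshold, and the mixing weight are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def reliability_scores(features, pseudo_labels, num_classes):
    """Illustrative intra-class consistency score: cosine similarity of each
    sample's feature to the mean feature (prototype) of its pseudo-class."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    protos = np.stack([f[pseudo_labels == c].mean(axis=0) for c in range(num_classes)])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    return np.sum(f * protos[pseudo_labels], axis=1)   # higher = more reliable

def pick_and_mix(images, pseudo_labels, scores, threshold, lam=0.6, seed=0):
    """Pick samples whose pseudo-label reliability exceeds the threshold, then
    mix pairs of the picked images and their one-hot labels."""
    idx = np.flatnonzero(scores >= threshold)
    partner = np.random.default_rng(seed).permutation(idx)
    onehot = np.eye(pseudo_labels.max() + 1)[pseudo_labels]
    mixed_x = lam * images[idx] + (1 - lam) * images[partner]
    mixed_y = lam * onehot[idx] + (1 - lam) * onehot[partner]
    return mixed_x, mixed_y

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
labels = rng.integers(0, 4, size=100)
imgs = rng.random((100, 32, 32))
scores = reliability_scores(feats, labels, num_classes=4)
mixed_x, mixed_y = pick_and_mix(imgs, labels, scores, threshold=np.median(scores))
print(mixed_x.shape, mixed_y.shape)
```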
Citations: 0
E-Mamba: An efficient Mamba point cloud analysis method with enhanced feature representation
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-22 | DOI: 10.1016/j.neucom.2025.130201
Dengao Li, Zhichao Gao, Shufeng Hao, Ziyou Xun, Jiajian Song, Jie Cheng, Jumin Zhao
As a key technology for three-dimensional space analysis, point cloud analysis is widely used in many fields such as automated machinery, unmanned vehicles, and virtual reality. Learning local and global features of point clouds is crucial for gaining a deep understanding of point cloud data. In point cloud local feature learning, sub-clouds with center coordinates subtracted are usually used as point patches, which are then fed into a mini-PointNet to enhance the point cloud feature representation. However, this approach depends heavily on point cloud density, which affects model performance. In this work, we introduce E-Mamba, a new model for efficient point cloud analysis. We use Scalable Embedding to rescale and patch-embed sub-clouds, which improves the model's feature representation and generalization capabilities for point clouds. In addition, we introduce Holosync Reordering Pooling to reorder tokens while preserving the original sequence, and use a hybrid pooling method to extract global features. In this way, the model fully utilizes the periodicity of Mamba while achieving good generalization and global feature extraction capabilities. We conduct extensive experiments on the ModelNet40, ScanObjectNN, and ShapeNetPart datasets. The results show that E-Mamba achieves superior performance while significantly reducing GPU memory usage and FLOPs, whether pre-trained or not.
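Two of the ingredients mentioned above, order-preserving token reordering and hybrid pooling for a global descriptor, can be sketched in a few lines. The scalar sort key and the max-plus-mean pooling below are generic stand-ins; the actual Holosync Reordering Pooling and Scalable Embedding details are not specified in the abstract, so this is only an assumed reading.

```python
import numpy as np

def hybrid_pool(tokens):
    """Global descriptor via hybrid pooling: concatenate the max-pooled and
    mean-pooled token features. tokens has shape (num_patches, dim)."""
    return np.concatenate([tokens.max(axis=0), tokens.mean(axis=0)])

def reorder_and_restore(tokens, key):
    """Sort tokens by a scalar key (e.g. an index along a space-filling curve)
    for the sequence model, keeping the permutation so that the original order
    can be restored afterwards."""
    order = np.argsort(key)
    inverse = np.argsort(order)
    return tokens[order], inverse          # tokens[order][inverse] == tokens

rng = np.random.default_rng(0)
patch_feats = rng.normal(size=(128, 32))
reordered, inv = reorder_and_restore(patch_feats, rng.random(128))
assert np.allclose(reordered[inv], patch_feats)
print(hybrid_pool(patch_feats).shape)      # (64,)
```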
Citations: 0
An introspection of graph structure learning: A graph skeleton extraction via minimum dominating set
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-22 | DOI: 10.1016/j.neucom.2025.130257
Zifeng Ye, Aifu Han, Guolin Chen, Xiaoxia Huang
Graph structure learning (GSL) is a data-driven learning approach that has garnered widespread attention in recent years. Nevertheless, an insufficient understanding of latent graph properties poses various challenges for effective graph modeling. This raises the following question: What type of graph skeleton can preserve the most crucial latent properties that significantly impact the performance of graph neural networks (GNNs) in downstream tasks? To this end, we have conducted a comprehensive study on three key graph properties: homophily, degree distribution, and connected components, and determined how these factors influence semi-supervised node classification tasks. Specifically, the influence of homophily on GNN performance is rigorously assessed. Motivated by this analysis, a dual-sparsity graph extraction method based on the minimum dominating set (MDS) is proposed to intelligently select informative edges under a given edge sampling ratio. This method effectively captures the scale-free characteristics of the degree distribution and prioritizes the preservation of node connectivity. Experimental results show that homophily is a key factor in achieving high GNN accuracy. Additionally, the degree distribution and connected components describe the connectivity patterns of the graph from both local and global topological perspectives, which are highly correlated with node classification performance under the GNN message-passing mechanism. This work reveals the necessity of considering the graph skeleton and provides a stepping stone for facilitating GSL using these latent graph properties.
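A minimal sketch of the MDS-based skeleton idea is given below: a greedy approximation of a dominating set is computed, and edges incident to dominating nodes are kept first when the edge budget is limited. The greedy heuristic, the prioritisation rule, and the tiny example graph are assumptions made for illustration; the paper's dual-sparsity method is more involved.

```python
def greedy_dominating_set(adj):
    """Greedy approximation of a minimum dominating set: repeatedly add the node
    that covers the most currently uncovered nodes. adj maps node -> neighbor set."""
    uncovered, dom = set(adj), set()
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dom.add(best)
        uncovered -= {best} | adj[best]
    return dom

def extract_skeleton(adj, edge_ratio=0.5):
    """Keep edges incident to the dominating set first (protecting connectivity
    around covering nodes), then fill the remaining budget with leftover edges."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    budget = int(edge_ratio * len(edges))
    dom = greedy_dominating_set(adj)
    prioritized = [e for e in edges if e[0] in dom or e[1] in dom]
    rest = [e for e in edges if e not in prioritized]
    return (prioritized + rest)[:budget]

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3}}
print(greedy_dominating_set(adj))            # e.g. {1, 3}
print(extract_skeleton(adj, edge_ratio=0.6))
```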
Citations: 0
SSSI-L2p: An EEG extended source imaging algorithm based on the structured sparse regularization with L2p-Norm
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-22 | DOI: 10.1016/j.neucom.2025.130250
Shu Peng, Hongyu Li, Yujie Deng, Hong Yu, Weibo Yi, Ke Liu
Electroencephalographic (EEG) source imaging (ESI) aims to estimate the locations and extents of brain activity. ESI is crucial for studying brain functions and detecting epileptic foci. However, accurately reconstructing extended sources remains challenging due to the high susceptibility of EEG signals to interference and the underdetermined nature of the ESI problem. In this study, we introduce a new ESI algorithm, Structured Sparse Source Imaging based on the L2p-norm (SSSI-L2p), to estimate potential brain activities. SSSI-L2p utilizes the mixed L2p-norm (0<p<1) to enforce spatial-temporal constraints within a structured sparsity regularization framework. By leveraging the alternating direction method of multipliers (ADMM) and the iteratively reweighted least squares (IRLS) algorithm, the challenging optimization problem of SSSI-L2p can be solved effectively. We showcase the superior performance of SSSI-L2p over benchmark ESI methods through numerical simulations and human clinical data. Our results demonstrate that sources reconstructed by SSSI-L2p exhibit high spatial resolution and clear boundaries, highlighting its potential as a robust and effective ESI technique. Additionally, we have shared the source code of SSSI-L2p at https://github.com/Mashirops/SSSI-L2p.git.
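To show how an L2p-type row-sparsity penalty can be handled with IRLS, the sketch below solves a simplified problem, min_X ||Y - L X||_F^2 + lam * sum_i ||X_i||_2^p, by alternating between per-source reweighting and a closed-form weighted ridge update. This is a plain IRLS loop under assumed dimensions and parameters; the paper's ADMM/IRLS combination and its exact spatio-temporal constraints are not reproduced here.

```python
import numpy as np

def irls_l2p(L, Y, lam=1e-2, p=0.5, iters=30, eps=1e-6):
    """IRLS loop for min_X ||Y - L X||_F^2 + lam * sum_i ||X_i||_2^p, where X_i
    is the i-th source's time course. Each iteration re-weights sources by their
    current row norms and solves a weighted ridge problem in closed form; a small
    p (0 < p < 1) promotes row-sparse, i.e. focal, source estimates."""
    n_sensors, n_sources = L.shape
    X = np.zeros((n_sources, Y.shape[1]))
    for _ in range(iters):
        row_norms = np.linalg.norm(X, axis=1)
        w = (row_norms**2 + eps) ** (p / 2 - 1)                  # IRLS weights
        W_inv = np.diag(1.0 / w)
        # X = W^{-1} L^T (L W^{-1} L^T + lam I)^{-1} Y  (weighted ridge solution)
        G = L @ W_inv @ L.T + lam * np.eye(n_sensors)
        X = W_inv @ L.T @ np.linalg.solve(G, Y)
    return X

rng = np.random.default_rng(0)
L = rng.normal(size=(32, 200))                    # toy lead field: sensors x sources
X_true = np.zeros((200, 50))
X_true[40:45] = rng.normal(size=(5, 50))          # one small patch of active sources
Y = L @ X_true + 0.01 * rng.normal(size=(32, 50))
X_hat = irls_l2p(L, Y)
print(np.argsort(np.linalg.norm(X_hat, axis=1))[-5:])  # strongest recovered sources
```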
Citations: 0
Kernel broad learning cauchy conjugate gradient algorithm for online chaotic time series prediction
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-21 | DOI: 10.1016/j.neucom.2025.130234
Liyun Su, Xiaoyi Wang
Accurate prediction of nonlinear systems in non-Gaussian noise environments has long been a significant challenge in statistical data analysis and time series modeling. To address this issue, this paper proposes an improved Cauchy Conjugate Gradient algorithm based on a kernel broad learning feature extraction strategy (Kernel Broad Learning Cauchy Conjugate Gradient, KBLCCG). The algorithm integrates kernel mapping with a broad learning system, forming a dual feature extraction mechanism that effectively captures the complex nonlinear structures of chaotic time series while preserving their inherent dynamic chaotic characteristics. The kernel broad learning strategy mitigates the growth of the kernel matrix during the iterative process, thereby reducing the computational burden and enhancing the algorithm's robustness. The Cauchy Conjugate Gradient method is then employed to optimize the reduced-dimensional feature data, efficiently addressing the nonlinear prediction problem of the target sequence. Empirical analysis on simulated data and real financial data (including the Lorenz system, the Shanghai Composite Index, and the CSI 300 Index) validates the performance of the method. Experimental results indicate that KBLCCG significantly outperforms existing adaptive filtering algorithms in prediction accuracy and demonstrates stronger generalization capabilities when dealing with complex chaotic systems. Compared with traditional methods, the kernel broad learning strategy markedly enhances feature capturing and modeling effectiveness for chaotic time series, further validating the method's efficacy and robustness in nonlinear time series prediction. KBLCCG not only exhibits superior predictive capabilities in complex non-Gaussian noise environments but also provides an innovative solution for handling the nonlinear and chaotic characteristics of time series prediction.
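The robustness argument behind the Cauchy criterion can be illustrated directly: the Cauchy (Lorentzian) loss saturates for large residuals, so impulsive non-Gaussian noise barely moves the fit. The sketch below fits a model that is linear in random Fourier features (a cheap stand-in for the kernel/broad-learning feature map) using plain gradient steps in place of conjugate-gradient updates; the feature map, step size, and data generator are all assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def cauchy_loss_grad(w, Phi, y, gamma=1.0):
    """Cauchy (Lorentzian) loss and gradient for a model linear in features Phi.
    The influence of large residuals saturates, giving robustness to impulsive noise."""
    r = y - Phi @ w
    loss = np.mean(np.log(1.0 + (r / gamma) ** 2))
    grad = Phi.T @ (-2.0 * r / (gamma**2 + r**2)) / len(y)
    return loss, grad

def fit_robust(Phi, y, iters=500, lr=0.5):
    """Plain gradient steps on the Cauchy loss (standing in for the paper's
    conjugate-gradient updates)."""
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        _, g = cauchy_loss_grad(w, Phi, y)
        w -= lr * g
    return w

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(500, 1))
W, b = rng.normal(size=(1, 64)), rng.uniform(0, 2 * np.pi, 64)
Phi = np.sqrt(2.0 / 64) * np.cos(x @ W + b)          # random Fourier features (RBF-like map)
clean = np.sin(2 * x[:, 0])
noise = np.where(rng.random(500) < 0.05, rng.standard_cauchy(500), 0.05 * rng.normal(size=500))
w_hat = fit_robust(Phi, clean + noise)
print(np.mean(np.abs(Phi @ w_hat - clean)))          # fit error despite impulsive outliers
```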
Citations: 0
Privacy-preserving average consensus for second-order discrete-time multi-agent systems
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-21 | DOI: 10.1016/j.neucom.2025.130239
Jie Wang, Na Huang, Yun Chen, Qiang Lu
This study addresses the privacy-preserving average consensus problem in second-order discrete-time multi-agent systems under strongly connected and balanced graphs. When both the velocity and position states of each agent are measurable, a novel lightweight algorithm is proposed that introduces perturbation signals into the transmitted information. Specifically, the algorithm is divided into two stages. In the initial stage, each agent adds perturbation signals to its initial position and velocity states during transmission to confound potential attackers. In the subsequent stage, the agents use a standard average consensus algorithm to update their states, ensuring accurate convergence to the average of the initial states. Additionally, for the scenario where the velocity state of each agent is unavailable, an improved edge-based perturbation algorithm is introduced. Both algorithms not only effectively prevent internal honest-but-curious agents from accurately inferring the initial states of other agents (except in the specific case where the curious agent is the sole neighbor of the target agent), but also protect privacy against external eavesdroppers. Lastly, several numerical examples are presented to validate the effectiveness of the proposed theoretical approaches.
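The two-stage idea of masking transmitted states with perturbations while still converging to the true average can be illustrated with a simplified first-order example: each agent adds a random perturbation to its transmitted state at the first step and removes it afterwards, so the network-wide sum, and hence the consensus value, is unchanged. The doubly stochastic weight matrix, the perturbation scale, and the first-order dynamics are simplifying assumptions; the paper treats second-order dynamics and the case of unmeasured velocities.

```python
import numpy as np

def private_average_consensus(W, x0, steps=200, seed=0):
    """Each agent adds a random perturbation to its transmitted state at the
    first step and removes it afterwards. Because W is doubly stochastic, the
    network-wide sum is unchanged, so consensus still reaches the true average
    while eavesdroppers never observe the raw initial states."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(scale=5.0, size=len(x0))   # masking perturbations
    x = W @ (x0 + delta)                          # stage 1: transmit masked states
    x = x - delta                                 # each agent removes its own mask
    for _ in range(steps):
        x = W @ x                                 # stage 2: standard average consensus
    return x

# doubly stochastic weights for a 4-agent ring
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x0 = np.array([1.0, 7.0, 3.0, 5.0])
print(private_average_consensus(W, x0), x0.mean())   # both approach 4.0
```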
Citations: 0
Estimating expert prior knowledge from optimization trajectories
IF 5.5 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-20 | DOI: 10.1016/j.neucom.2025.130219
Ville Tanskanen, Petrus Mikkola, Aras Erarslan, Arto Klami
A recurring task in research is the iterative optimization of a process that can be evaluated only by conducting an experiment. Powerful algorithms for assisting this process exist, but they largely ignore the valuable knowledge of expert scientists. We consider a problem within this general scope, not aiming to automate the optimization but instead studying how to infer tacit expert knowledge. This complements the current literature, which focuses on how such information is used in the optimization process but pays little attention to how the information is obtained. We consider a new formulation in which the expertise is inferred by passively observing a human solving an optimization problem, without requiring explicit elicitation techniques. Our solution leverages concepts from Bayesian optimization (BO) commonly used for automating the optimization, but here these tools are used as a theoretical model of user behavior instead. We assume the expert solves the task approximately in the same manner as a BO algorithm would, and we infer what kind of prior knowledge about the target function is consistent with the sequence of choices they made. We introduce the problem and a concrete solution, and show that the recovered priors match the true priors in controlled simulated studies. We also empirically evaluate the robustness of the method against violations of the modeling assumptions and demonstrate it on real user data.
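One way to see how a BO-style user model can score candidate priors is sketched below: for each candidate prior mean, replay the expert's query sequence, build a GP posterior from the points already seen, and accumulate the softmax log-probability of the point the expert actually chose under a UCB acquisition. The UCB acquisition, the softmax choice model, the RBF kernel settings, and the candidate priors are all illustrative assumptions, not the paper's actual inference procedure.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def choice_loglik(prior_mean, queries, values, grid, beta=2.0, noise=1e-4, temp=0.1):
    """Replay the expert's queries under a candidate prior mean: at each step,
    form the GP posterior from the points seen so far, compute a UCB acquisition
    on a grid, and add the softmax log-probability of the point actually chosen."""
    ll = 0.0
    for t in range(len(queries)):
        Xo, yo = queries[:t], values[:t]
        if t == 0:
            mu, sd = prior_mean(grid), np.ones_like(grid)
        else:
            K = rbf(Xo, Xo) + noise * np.eye(t)
            ks = rbf(grid, Xo)
            mu = prior_mean(grid) + ks @ np.linalg.solve(K, yo - prior_mean(Xo))
            var = 1.0 - np.sum(ks * np.linalg.solve(K, ks.T).T, axis=1)
            sd = np.sqrt(np.clip(var, 1e-9, None))
        acq = mu + beta * sd                               # UCB acquisition as user model
        probs = np.exp((acq - acq.max()) / temp)
        probs /= probs.sum()
        ll += np.log(probs[np.argmin(np.abs(grid - queries[t]))])
    return ll

grid = np.linspace(0, 1, 101)
queries = np.array([0.80, 0.85, 0.90])                     # hypothetical expert trajectory
values = np.sin(6 * queries)
priors = {"flat": lambda x: np.zeros_like(x), "rising": lambda x: 2.0 * x}
print({name: round(choice_loglik(m, queries, values, grid), 2) for name, m in priors.items()})
```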
Citations: 0