
Artificial Intelligence: latest publications

Arc-consistency with linear programming reduced costs (applied to stable set in chordal graphs)
IF 4.6, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-12-01. Epub Date: 2025-10-10. DOI: 10.1016/j.artint.2025.104438
Guillaume Claus , Hadrien Cambazard , Hugo Apeloig , Pierre Hoppenot
A well-known technique for reducing the search space in integer programming is variable fixing, also called reduced cost strengthening. The reduced costs given by an optimal dual solution of the linear relaxation can be used to strengthen the bounds of the variables, but this filtering is incomplete. We show how reduced costs can be used to achieve Arc-Consistency (AC), i.e. a complete filtering, of a global constraint with a cost variable and an assignment cost for each value. We assume that an ideal Integer Linear Programming (ILP) formulation is available, i.e. the convex hull of the characteristic vectors of the supports is known. A detailed analysis of reduced cost based filtering is proposed. We characterize arc-consistency based on complementary slackness, i.e. completeness of reasoning as opposed to only optimality. We also give a simple sufficient condition allowing a set of dual solutions to ensure arc-consistency through reduced costs. In practice, when the constraint has such an ideal ILP formulation, n dual solutions are always enough to achieve AC (where n is the number of variables of the global constraint). This extends the work presented in [26] for satisfaction problems and in [17] for the specific case of the minimum weighted alldifferent constraint. Our analysis is illustrated on constraints related to the assignment and shortest path problems and also demonstrated on the weighted stable set problem in chordal graphs. A novel AC algorithm based on reduced costs is proposed for this latter case.
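The incomplete filtering the abstract starts from, classic reduced-cost variable fixing, can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the paper's AC algorithm; the function name and values are purely illustrative.

```python
def reduced_cost_fixing(z_lp, reduced_costs, upper_bound):
    """Classic reduced-cost strengthening for a 0/1 minimisation ILP.

    A variable x_j sitting at 0 in the LP optimum with reduced cost rc_j
    can be fixed to 0 whenever z_lp + rc_j > upper_bound: forcing x_j = 1
    would push the LP bound above the best known integer solution.
    Returns the indices of variables provably 0 in every improving solution.
    """
    return [j for j, rc in enumerate(reduced_costs) if z_lp + rc > upper_bound]

# Toy numbers: LP relaxation value 10, incumbent integer solution of cost 12.
fixed = reduced_cost_fixing(10.0, [0.0, 1.5, 3.0, 2.5], 12.0)
print(fixed)  # [2, 3]
```

The paper's contribution is precisely that this per-variable test, applied with one dual solution, is incomplete, and that a suitable set of dual solutions closes the gap.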
Citations: 0
Incentives for responsiveness, instrumental control and impact
IF 4.6, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-01. Epub Date: 2025-09-02. DOI: 10.1016/j.artint.2025.104408
Ryan Carey , Eric Langlois , Chris van Merwijk , Shane Legg , Tom Everitt
We introduce three concepts that describe an agent's incentives. Response incentives indicate which variables in the environment, such as sensitive demographic information, affect the decision under the optimal policy. Instrumental control incentives indicate whether an agent's policy is chosen to manipulate part of its environment, such as the preferences or instructions of a user. Impact incentives indicate which variables an agent will affect, intentionally or otherwise. For each concept, we establish sound and complete graphical criteria, and discuss general classes of techniques that may be used to produce incentives for safe and fair agent behaviour. Finally, we outline how these notions may be generalised to multi-decision settings.
This journal paper extends our conference publication “Agent Incentives: A Causal Perspective”: the material on response incentives and instrumental control incentives is updated, while the work on impact incentives and multi-decision settings is entirely new.
Citations: 0
Algebras of actions in an agent's representations of the world
IF 4.6, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-01. Epub Date: 2025-08-20. DOI: 10.1016/j.artint.2025.104403
Alexander Dean, Eduardo Alonso, Esther Mondragón
Learning efficient representations allows robust processing of data that can then be generalised across different tasks and domains, and is thus paramount in various areas of Artificial Intelligence, including computer vision, natural language processing and reinforcement learning, among others. Within the context of reinforcement learning, we propose in this paper a mathematical framework to learn representations by extracting the algebra of the transformations of worlds from the perspective of an agent. As a starting point, we use our framework to reproduce representations from the symmetry-based disentangled representation learning (SBDRL) formalism proposed by [1] and prove that, although useful, they are restricted to transformations with the properties of algebraic groups. We then generalise two important results of SBDRL, the equivariance condition and the disentangling definition, from working only with group-based symmetry representations to working with representations capturing the transformation properties of worlds for any algebra, using examples common in reinforcement learning and generated by an algorithm that computes their corresponding Cayley tables. Finally, we combine our generalised equivariance condition and our generalised disentangling definition to show, using category theory, that disentangled sub-algebras can each have their own individual equivariance conditions, which can be treated independently. In so doing, our framework offers a rich formal tool for representing different types of symmetry transformations in reinforcement learning, extending the scope of previous proposals and providing Artificial Intelligence developers with a sound foundation for implementing efficient applications.
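The Cayley table of an action algebra, which the abstract's algorithm computes for world transformations, is simple to illustrate on a toy world. The sketch below is a hypothetical minimal example (a cyclic group of rotations of a 4-position world), not the paper's algorithm.

```python
def cayley_table(elements, compose):
    """Cayley table of a finite set of transformations closed under
    composition: entry [i][j] is the index of compose(elements[i], elements[j])."""
    index = {e: i for i, e in enumerate(elements)}
    return [[index[compose(a, b)] for b in elements] for a in elements]

# Toy world: 4 positions on a ring; actions are rotations, encoded as shifts mod 4.
shifts = (0, 1, 2, 3)
table = cayley_table(shifts, lambda a, b: (a + b) % 4)
print(table)  # [[0, 1, 2, 3], [1, 2, 3, 0], [2, 3, 0, 1], [3, 0, 1, 2]]
```

For a group, every row and column of the table is a permutation of the element indices; the paper's point is that interesting world transformations need not satisfy the group axioms, so the table can have a more general algebraic structure.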
Citations: 0
Minimax off-policy evaluation and learning with subgaussian and differentiable importance weighting
IF 4.6, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-01. Epub Date: 2025-09-15. DOI: 10.1016/j.artint.2025.104419
Alberto Maria Metelli, Alessio Russo, Marcello Restelli
In this work, we study the statistical properties of the off-policy estimation problem, i.e., estimating expectations under a target policy using samples collected from a different policy. We begin by presenting a novel minimax concentration lower bound that highlights the fundamental limits of off-policy estimation. We then analyze two well-known importance weighting (IW) techniques: vanilla IW and self-normalized importance weighting (SN). For both methods, we derive concentration and anti-concentration results, showing that their concentration rates are provably suboptimal compared to our lower bound. Observing that this undesired behavior arises from the heavy-tailed nature of the IW and SN estimators, we propose a new class of parametric estimators based on a transformation using the power mean (PM), which is no longer heavy-tailed. We study the theoretical properties of the PM estimator in terms of bias and variance. We show that, with suitable (possibly data-driven) tuning of its parameters, the PM estimator satisfies two key properties under certain conditions: (i) it achieves a subgaussian concentration rate that matches our lower bound and (ii) it maintains differentiability with respect to the target policy. Finally, we validate our approach through numerical simulations on both synthetic datasets and contextual bandits, comparing it against standard off-policy evaluation and learning baselines.
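The two baseline estimators the abstract analyzes are easy to state. The sketch below estimates an expectation under a target Gaussian from samples of a behaviour Gaussian; the policies, the integrand f, and all numbers are illustrative assumptions, and the paper's power mean transformation is beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Behaviour policy N(0, 1), target policy N(0.5, 1); estimate E_target[x^2] = 1.25.
x = rng.normal(0.0, 1.0, size=10_000)
f = x ** 2
# Importance weights: ratio of target to behaviour densities (normalisers cancel).
w = np.exp(-0.5 * (x - 0.5) ** 2) / np.exp(-0.5 * x ** 2)

vanilla_iw = np.mean(w * f)                   # unbiased, but weights are heavy-tailed
self_normalized = np.sum(w * f) / np.sum(w)   # biased, typically lower variance

print(vanilla_iw, self_normalized)  # both close to 1.25
```

With a larger policy shift the weight distribution grows heavier tails and the vanilla estimate degrades first, which is the failure mode the PM estimator is designed to remove.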
Citations: 0
On the design of truthful mechanisms for the capacitated facility location problem with two and more facilities
IF 5.1, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-01. Epub Date: 2025-07-07. DOI: 10.1016/j.artint.2025.104390
Gennaro Auricchio , Zihe Wang , Jie Zhang
In this paper, we explore the Mechanism Design aspects of the m-Capacitated Facility Location Problem (m-CFLP) on a line, focusing on two frameworks. In the first framework, the number of facilities is arbitrary, all facilities share the same capacity, and the number of agents matches the total capacity of the facilities. In the second framework, we need to locate two facilities, each with a capacity equal to at least half the number of agents. For both frameworks, we propose truthful mechanisms with bounded approximation ratios in terms of Social Cost (SC) and Maximum Cost (MC). When m>2, our results stand in contrast to the impossibility results known for the classical m-Facility Location Problem, where capacity constraints are absent. Moreover, all the proposed mechanisms are optimal with respect to MC and either optimal or near-optimal with respect to the SC among anonymous mechanisms. We then establish lower bounds on the approximation ratios that any truthful and deterministic mechanism achieves with respect to SC and MC for both frameworks. Lastly, we run several numerical experiments to empirically evaluate the performances of our mechanisms with respect to the SC or the MC. Our empirical analysis shows that our proposed mechanisms outperform all previously proposed mechanisms applicable in this setting.
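As background for the truthfulness requirement, the classic mechanism for the uncapacitated single-facility version of this problem on a line is the median rule. This is a standard textbook sketch, not one of the paper's capacitated mechanisms.

```python
def median_mechanism(reported_locations):
    """Place one facility at the (left) median of reported agent locations.

    For single-facility location on a line without capacities, the median is
    truthful: misreporting can only move the median away from, never towards,
    the misreporting agent. It also minimises the sum of agent distances.
    """
    pts = sorted(reported_locations)
    return pts[(len(pts) - 1) // 2]

print(median_mechanism([0.1, 0.9, 0.4]))  # 0.4
```

Capacity constraints break this picture, since agents can no longer all use their nearest facility, which is why the paper needs new mechanisms and new lower bounds.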
Citations: 0
Interpreting capsule networks for image classification by routing path visualization
IF 5.1, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-01. Epub Date: 2025-07-17. DOI: 10.1016/j.artint.2025.104395
Amanjot Bhullar , Michael Czomko , R. Ayesha Ali , Douglas L. Welch
Artificial neural networks are popular for computer vision as they often give state-of-the-art performance, but are difficult to interpret because of their complexity. This black-box modeling is especially troubling when the application concerns human well-being, as in medical image analysis or autonomous driving. In this work, we propose a technique called routing path visualization for capsule networks, which reveals how much of each region in an image is routed to each capsule. In turn, this technique can be used to interpret the entity that a given capsule detects, and speculate how the network makes a prediction. We demonstrate our new visualization technique on several real-world datasets. Experimental results suggest that routing path visualization can precisely localize the predicted class from an image, even though the capsule networks are trained using just images and their respective class labels, without additional information defining the location of the class in the image.
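The quantity being visualized is the set of routing (coupling) coefficients produced by routing-by-agreement (Sabour et al., 2017). The sketch below is a minimal NumPy rendition of that standard routing loop with random inputs, not the authors' visualization code; all shapes and names are illustrative.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule non-linearity: scale vector norm into [0, 1) without changing direction."""
    n2 = np.sum(s * s, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def routing_coefficients(u_hat, n_iters=3):
    """Dynamic routing-by-agreement; returns coupling coefficients c[i, j]:
    how strongly input capsule i routes to output capsule j. Visualising these
    per image region is the idea behind routing path visualization.
    u_hat: array of shape (n_in, n_out, dim) of prediction vectors."""
    b = np.zeros(u_hat.shape[:2])                       # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over outputs
        s = np.einsum('ij,ijd->jd', c, u_hat)           # weighted sum per output capsule
        v = squash(s)
        b = b + np.einsum('ijd,jd->ij', u_hat, v)       # agreement updates logits
    return c

rng = np.random.default_rng(1)
c = routing_coefficients(rng.normal(size=(6, 3, 4)))
print(c.shape)  # (6, 3); each row sums to 1
```

Aggregating these coefficients over the spatial positions of the input capsules gives a per-region map of where each output capsule draws its evidence from.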
Citations: 0
Abstracting situation calculus action theories
IF 4.6, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-01. Epub Date: 2025-09-01. DOI: 10.1016/j.artint.2025.104407
Bita Banihashemi , Giuseppe De Giacomo , Yves Lespérance
We develop a general framework for agent abstraction based on the situation calculus and the ConGolog agent programming language. We assume that we have a high-level specification and a low-level specification of the agent, both represented as basic action theories. A refinement mapping specifies how each high-level action is implemented by a low-level ConGolog program and how each high-level fluent can be translated into a low-level formula. We define a notion of sound abstraction between such action theories in terms of the existence of a suitable bisimulation between their respective models. Sound abstractions have many useful properties that ensure that we can reason about the agent's actions (e.g., executability, projection, and planning) at the abstract level, and refine and concretely execute them at the low level. We also characterize the notion of complete abstraction where all actions (including exogenous ones) that the high level thinks can happen can in fact occur at the low level. To facilitate verifying that one has a sound/complete abstraction relative to a mapping, we provide a set of necessary and sufficient conditions. Finally, we identify a set of basic action theory constraints that ensure that for any low-level action sequence, there is a unique high-level action sequence that it refines. This allows us to track/monitor what the low-level agent is doing and describe it in abstract terms (i.e., provide high-level explanations, for instance, to a client or manager).
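The bisimulation at the core of the sound-abstraction definition can be illustrated in its bare form on finite labelled transition systems. This is only a toy greatest-fixpoint computation; the paper's notion relates models of action theories through a refinement mapping, with high-level actions matched against executions of low-level ConGolog programs, and the state and action names below are made up.

```python
def bisimulation(trans1, trans2):
    """Greatest bisimulation between two finite labelled transition systems.

    Each trans maps state -> set of (action, successor) pairs. Start from the
    full relation and repeatedly drop pairs violating the back-and-forth
    condition until a fixpoint is reached.
    """
    rel = {(p, q) for p in trans1 for q in trans2}
    changed = True
    while changed:
        changed = False
        for p, q in list(rel):
            forth = all(any(a == b and (s, t) in rel for b, t in trans2[q])
                        for a, s in trans1[p])
            back = all(any(a == b and (s, t) in rel for a, s in trans1[p])
                       for b, t in trans2[q])
            if not (forth and back):
                rel.discard((p, q))
                changed = True
    return rel

# Toy high-level and low-level systems with one matching 'move' action.
hi = {'H0': {('move', 'H1')}, 'H1': set()}
lo = {'L0': {('move', 'L1')}, 'L1': set()}
print(('H0', 'L0') in bisimulation(hi, lo))  # True
```

In the paper's setting the low-level side performs a whole program rather than a single action per high-level step, which is what the refinement mapping makes precise.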
Citations: 0
Rethinking visual prompt learning as masked visual token modeling
IF 4.6, CAS Zone 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-11-01. Epub Date: 2025-09-10. DOI: 10.1016/j.artint.2025.104417
Ning Liao , Bowen Shi , Xiaopeng Zhang , Min Cao , Junchi Yan , Qi Tian
Prompt learning has achieved great success in efficiently exploiting large-scale pre-trained models in natural language processing (NLP). It reformulates the downstream tasks as the generative pre-training ones to achieve consistency, thus stably improving performance. However, when transferring it to the vision area, current visual prompt learning methods are mostly designed around discriminative pre-trained models, and there is also a lack of careful design to unify the forms of pre-training and downstream tasks. To explore prompt learning on the generative pre-trained visual model, while keeping the task consistency, we propose Visual Prompt learning as masked visual Token Modeling (VPTM) to transform the downstream visual classification task into the pre-trained masked visual token prediction task. In addition, we develop the prototypical verbalizer for mapping the predicted visual token with implicit semantics to explicit downstream labels. To the best of our knowledge, VPTM is the first visual prompt method on the generative pre-trained visual model, which achieves consistency between pre-training and downstream visual classification by task reformulation. Experiments show that VPTM outperforms other visual prompt methods and achieves excellent efficiency. Moreover, the task consistency of VPTM contributes to the robustness against prompt location, prompt length and prototype dimension, and could be deployed uniformly.
Learngene: Inheritable “genes” in intelligent agents
IF 4.6 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-01 | Epub Date: 2025-09-22 | DOI: 10.1016/j.artint.2025.104421
Fu Feng , Jing Wang , Xu Yang , Xin Geng
Biological intelligence has driven significant progress in artificial intelligence (AI), but a critical gap remains: biological systems inherit innate abilities from genes, with brains initialized by blueprints refined over 3.5 billion years of evolution, while machines rely heavily on inefficient, data-driven learning from scratch. This gap arises from the lack of a genetic mechanism in machines to transfer and accumulate inheritable knowledge across generations. To bridge this gap, we propose learngenes, network fragments that act as inheritable “genes” for machines. Unlike conventional knowledge transfer methods, learngenes enable efficient and universal knowledge transfer by selectively encapsulating task-agnostic knowledge. To facilitate the transfer and accumulation of task-agnostic knowledge across generations, we introduce Genetic Reinforcement Learning (GRL), a framework that simulates the learning and evolution of organisms in intelligent agents following Lamarckian principles. Through GRL, we identify learngenes as network fragments within agents' policy networks, equipping newborn agents with innate abilities for rapid adaptation to novel tasks. We demonstrate the advantages of learngene-based knowledge transfer over evolution-based search and traditional pre-trained models, and show how learngenes evolve through the accumulation of task-agnostic knowledge. Overall, this work establishes a novel paradigm for knowledge transfer and model initialization in AI, offering new possibilities for more adaptive, efficient, and scalable learning systems.
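The inheritance mechanism can be illustrated with a minimal sketch. The layer names and the choice of which layers form the learngene are assumptions for illustration, not the paper's protocol: a trained "ancestor" network passes on a fragment of its weights, and a newborn agent is initialized from that fragment plus fresh random weights for the remaining layers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Trained ancestor policy network (toy weights).
ancestor = {
    "layer1": rng.normal(size=(4, 8)),
    "layer2": rng.normal(size=(8, 8)),
    "head":   rng.normal(size=(8, 2)),
}

# Hypothetical choice: the early layers carry the task-agnostic knowledge.
LEARNGENE_LAYERS = ["layer1", "layer2"]

def inherit(ancestor_net, learngene_layers, rng):
    """Newborn agent = inherited fragment + randomly initialized remainder."""
    newborn = {}
    for name, w in ancestor_net.items():
        if name in learngene_layers:
            newborn[name] = w.copy()                   # inherited "gene"
        else:
            newborn[name] = rng.normal(size=w.shape)   # learned from scratch
    return newborn

newborn = inherit(ancestor, LEARNGENE_LAYERS, rng)
assert np.array_equal(newborn["layer1"], ancestor["layer1"])
assert not np.array_equal(newborn["head"], ancestor["head"])
```

In the GRL framework the fragment itself is refined across generations as agents learn; this sketch shows only the initialization step that gives a newborn agent its innate starting point.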
Provably efficient information-directed sampling algorithms for multi-agent reinforcement learning
IF 5.1 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-01 | Epub Date: 2025-07-10 | DOI: 10.1016/j.artint.2025.104392
Qiaosheng Zhang , Chenjia Bai , Shuyue Hu , Zhen Wang , Xuelong Li
This work designs and analyzes a novel set of algorithms for multi-agent reinforcement learning (MARL) based on the principle of information-directed sampling (IDS). These algorithms draw inspiration from foundational concepts in information theory, and are proven to be sample efficient in MARL settings such as two-player zero-sum Markov games (MGs) and multi-player general-sum MGs. For episodic two-player zero-sum MGs, we present three sample-efficient algorithms for learning Nash equilibrium. The basic algorithm, referred to as MAIDS, employs an asymmetric learning structure where the max-player first solves a minimax optimization problem based on the joint information ratio of the joint policy, and the min-player then minimizes the marginal information ratio with the max-player's policy fixed. Theoretical analyses show that it achieves a Bayesian regret of Õ(√K) over K episodes. To reduce the computational load of MAIDS, we develop an improved algorithm called Reg-MAIDS, which has the same Bayesian regret bound while enjoying lower computational complexity. Moreover, by leveraging the flexibility of the IDS principle in choosing the learning target, we propose two methods for constructing compressed environments based on rate-distortion theory, upon which we develop an algorithm, Compressed-MAIDS, wherein the learning target is a compressed environment. Finally, we extend Reg-MAIDS to multi-player general-sum MGs and prove that it can learn either the Nash equilibrium or coarse correlated equilibrium in a sample-efficient manner.
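The information-ratio principle behind these algorithms can be illustrated in a single-agent toy setting — a three-environment Bayesian bandit invented for illustration, whereas the paper's algorithms operate on Markov games. Each round, IDS plays the action minimizing squared expected regret divided by expected information gain, trading off exploitation against learning about the environment.

```python
import numpy as np

# Posterior over 3 equally likely environments; each row gives the mean
# reward of every action in that environment (values are made up).
posterior = np.array([1/3, 1/3, 1/3])
means = np.array([[1.0, 0.2, 0.5],    # env 0: mean rewards of actions 0..2
                  [0.1, 1.0, 0.5],    # env 1
                  [0.2, 0.3, 0.9]])   # env 2

expected_reward = posterior @ means                  # per action
expected_optimal = posterior @ means.max(axis=1)     # value of the env's best arm
regret = expected_optimal - expected_reward          # expected regret per action

# Crude information-gain proxy: variance of an action's mean across
# environments (an informative action separates the candidate environments).
info_gain = means.var(axis=0) + 1e-12

# IDS rule: minimize squared regret per unit of information gained.
info_ratio = regret**2 / info_gain
action = int(np.argmin(info_ratio))
print(action)   # prints 1: a slightly suboptimal but highly informative arm
```

Here action 2 has the highest expected reward, yet IDS prefers action 1 because its outcome distinguishes the environments much better; the MAIDS algorithms apply the same ratio, computed jointly and marginally over the two players' policies.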