
Latest publications from the Journal of Artificial Intelligence Research

Initialization of Feature Selection Search for Classification
IF 5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-27 | DOI: 10.1613/jair.1.14015
María Luque-Rodriguez, José Molina-Baena, Alfonso Jiménez-Vílchez, A. Arauzo-Azofra
Selecting the best features in a dataset improves the accuracy and efficiency of classifiers in a learning process. Datasets generally have more features than necessary, some of them irrelevant or redundant with others. For this reason, numerous feature selection methods have been developed, applying different evaluation functions and measures. This paper proposes the systematic application of individual feature evaluation methods to initialize search-based feature subset selection methods. An exhaustive review of the starting methods used by genetic algorithms from 2014 to 2020 has been carried out. Subsequently, an in-depth empirical study evaluated the proposal for different search-based feature selection methods (sequential forward and backward selection, Las Vegas filter and wrapper, simulated annealing, and genetic algorithms). Since computation time is reduced and classification accuracy with the selected features is improved, the initialization of feature selection proposed in this work is worth considering when designing any feature selection algorithm.
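The core idea, seeding a subset search with an individual feature ranking, can be sketched as follows. The correlation-based score, the toy data, and the top-k seeding below are illustrative assumptions, not the authors' exact procedure:

```python
# Hedged sketch: rank features individually, then hand the top-k ranked
# features to a search method (e.g., sequential forward selection or a GA
# population seed) as its starting subset.

def correlation_score(xs, ys):
    """Absolute Pearson correlation of one feature column with the labels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
    return abs(cov) / (var ** 0.5) if var else 0.0

def rank_features(X, y):
    """Indices of all features, best individual score first."""
    scores = [correlation_score([row[j] for row in X], y)
              for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])

def initial_subset(X, y, k):
    """Top-k individually ranked features: the initialization handed to the
    subsequent search-based subset selection."""
    return set(rank_features(X, y)[:k])

# Toy data: features 0 and 2 track the label, feature 1 is constant noise.
X = [[0, 5, 1], [1, 5, 0], [0, 5, 1], [1, 5, 0]]
y = [0, 1, 0, 1]
print(initial_subset(X, y, 2))  # {0, 2}
```

Any filter measure (mutual information, chi-squared, ReliefF) can replace the correlation score; the point is only that the search starts from an informed subset rather than an empty or random one.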
Citations: 0
Interpretable Local Concept-based Explanation with Human Feedback to Predict All-cause Mortality
IF 5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-18 | DOI: 10.1613/jair.1.14019
Radwa El Shawi, M. Al-Mallah
Machine learning models are incorporated in different fields and disciplines, some of which require a high level of accountability and transparency, for example, the healthcare sector. With the General Data Protection Regulation (GDPR), the plausibility and verifiability of the predictions made by machine learning models have become essential. A widely used category of explanation techniques attempts to explain models' predictions by quantifying the importance score of each input feature. However, summarizing such scores to provide human-interpretable explanations is challenging. Another category of explanation techniques focuses on learning a domain representation in terms of high-level, human-understandable concepts and then utilizing them to explain predictions. These explanations are hampered by how concepts are constructed, which is not intrinsically interpretable. To this end, we propose Concept-based Local Explanations with Feedback (CLEF), a novel local model-agnostic explanation framework for learning a set of high-level, transparent concept definitions in high-dimensional tabular data that uses clinician-labeled concepts rather than raw features. CLEF maps the raw input features to high-level intuitive concepts and then decomposes the evidence for the prediction of the instance being explained into concepts. In addition, the proposed framework generates counterfactual explanations, suggesting the minimum changes in the instance's concept-based explanation that would lead to a different prediction. We demonstrate the framework with simulated user feedback on predicting the risk of mortality. Such direct feedback is more effective than other techniques that rely on hand-labelled or automatically extracted concepts at learning concepts that align with ground-truth concept definitions.
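The general shape of a concept-based explanation with counterfactuals can be illustrated as below. The concept definitions, the toy risk model, and the brute-force flip search are invented for illustration; they are not CLEF's actual learned concepts or algorithm:

```python
# Illustrative sketch: raw features are mapped to clinician-style concepts,
# the prediction is explained in concept space, and a counterfactual is the
# smallest set of concept flips that changes the prediction.
from itertools import combinations

CONCEPTS = {
    "hypertensive": lambda p: p["systolic_bp"] >= 140,
    "tachycardic":  lambda p: p["heart_rate"] >= 100,
    "older_adult":  lambda p: p["age"] >= 65,
}

def to_concepts(patient):
    """Map raw input features to high-level boolean concepts."""
    return {name: fn(patient) for name, fn in CONCEPTS.items()}

def predict_high_risk(concepts):
    # Toy model: high risk if at least two concepts are active.
    return sum(concepts.values()) >= 2

def counterfactual(concepts):
    """Smallest set of concept flips that changes the prediction."""
    base = predict_high_risk(concepts)
    names = list(concepts)
    for size in range(1, len(names) + 1):
        for flips in combinations(names, size):
            alt = dict(concepts)
            for name in flips:
                alt[name] = not alt[name]
            if predict_high_risk(alt) != base:
                return flips
    return ()

patient = {"systolic_bp": 150, "heart_rate": 110, "age": 58}
c = to_concepts(patient)
print(predict_high_risk(c), counterfactual(c))
```

Here the explanation lives entirely in concept space: the patient is predicted high-risk, and undoing a single concept ("hypertensive") would flip the prediction.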
Citations: 1
Solving the Watchman Route Problem with Heuristic Search
IF 5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-09 | DOI: 10.1613/jair.1.13685
Shawn Skyler, Dor Atzmon, Tamir Yaffe, Ariel Felner
This paper solves the Watchman Route Problem (WRP) on a general discrete graph with heuristic search. Given a graph, a line-of-sight (LOS) function, and a start vertex, the task is to (offline) find a (shortest) path through the graph such that every vertex in the graph is seen by at least one vertex on the path. WRP is reminiscent of, but different from, graph covering and mapping problems, which are performed online on an unknown graph. We formalize WRP as a heuristic search problem and solve it optimally with an A*-based algorithm. We develop a series of admissible heuristics of increasing difficulty and accuracy. Our heuristics abstract the underlying graph into a disjoint line-of-sight graph (GDLS), which is based on disjoint clusters of vertices such that vertices within the same cluster have LOS to the same specific vertex. We use solutions for the Minimum Spanning Tree (MST) and the Traveling Salesman Problem (TSP) of the GDLS as admissible heuristics for WRP. We theoretically and empirically investigate these heuristics. Then, we show how the optimal methods can be modified (by intelligently pruning away large sub-trees) to obtain various suboptimal solvers with and without bound guarantees. These suboptimal solvers are much faster and expand fewer nodes than the optimal solver, with only a minor reduction in solution quality.
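One ingredient mentioned above, using MST weight over not-yet-seen clusters as an admissible lower bound on the remaining tour, can be sketched as follows. The distance table is a toy stand-in; the actual GDLS construction from LOS clusters is not shown:

```python
# Hedged sketch: the weight of a Minimum Spanning Tree connecting the
# remaining unseen cluster vertices never overestimates the cost of a path
# that must still reach all of them, so it is usable as an admissible
# heuristic inside an A* search.

def mst_weight(vertices, dist):
    """Prim's algorithm over the given vertices; returns total MST weight."""
    if not vertices:
        return 0
    vs = list(vertices)
    in_tree = {vs[0]}
    total = 0
    while len(in_tree) < len(vs):
        # Cheapest edge connecting the tree to an outside vertex.
        w, v = min((dist[a][b], b)
                   for a in in_tree for b in vs if b not in in_tree)
        in_tree.add(v)
        total += w
    return total

# Toy symmetric distances between three unseen clusters.
dist = {
    "A": {"A": 0, "B": 2, "C": 9},
    "B": {"A": 2, "B": 0, "C": 3},
    "C": {"A": 9, "B": 3, "C": 0},
}
print(mst_weight({"A", "B", "C"}, dist))  # MST uses edges A-B (2) and B-C (3)
```

In an A* node, this value would be computed over the clusters not yet seen from the partial path, exactly the role the paper assigns to its MST-based heuristic over the GDLS.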
Citations: 0
AAN+: Generalized Average Attention Network for Accelerating Neural Transformer
IF 5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-06 | DOI: 10.1613/jair.1.13896
Biao Zhang, Deyi Xiong, Yubin Ge, Junfeng Yao, Hao Yue, Jinsong Su
Transformer benefits from the high parallelization of attention networks in fast training, but it still suffers from slow decoding, partially due to the linear dependency O(m) of the decoder self-attention on previous target words at inference. In this paper, we propose a generalized average attention network (AAN+) aimed at speeding up decoding by reducing this dependency from O(m) to O(1). We find that the learned self-attention weights in the decoder follow patterns that can be approximated via a dynamic structure. Based on this insight, we develop AAN+, extending our previously proposed average attention (Zhang et al., 2018a, AAN) to support more general position- and content-based attention patterns. AAN+ only requires maintaining a small constant number of hidden states during decoding, ensuring its O(1) dependency. We apply AAN+ as a drop-in replacement for the decoder self-attention and conduct experiments on machine translation (with diverse language pairs), table-to-text generation, and document summarization. With masking tricks and dynamic programming, AAN+ enables Transformer to decode sentences around 20% faster without largely compromising training speed or generation performance. Our results further reveal the importance of localness (neighboring words) in AAN+ and its capability in modeling long-range dependency.
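The O(1) property at the heart of average attention can be seen in a scalar toy: the running average over previous positions is updated incrementally from one cached value, instead of re-attending to all m previous words. Real AAN+ operates on vectors and adds learned position- and content-based weights; this sketch only shows the constant-time recurrence:

```python
# Sketch: avg_t = ((t-1) * avg_{t-1} + x_t) / t, one O(1) update per decoding
# step, versus the O(m) recomputation over the whole prefix.

def incremental_averages(xs):
    """Running averages computed with a constant-time update per step."""
    avg, out = 0.0, []
    for t, x in enumerate(xs, start=1):
        avg += (x - avg) / t   # equivalent to ((t-1)*avg + x) / t
        out.append(avg)
    return out

xs = [4.0, 2.0, 6.0, 8.0]
full = [sum(xs[:t]) / t for t in range(1, len(xs) + 1)]  # O(m) per step
print(incremental_averages(xs), full)
```

Both computations produce the same prefix averages; only the incremental form keeps decoding cost independent of the number of previously generated words.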
Citations: 0
Liability regimes in the age of AI: a use-case driven analysis of the burden of proof
IF 5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-03 | DOI: 10.1613/jair.1.14565
David Fernández Llorca, V. Charisi, Ronan Hamon, Ignacio E. Sánchez, Emilia Gómez
New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better. In particular, data-driven learning approaches (i.e., Machine Learning (ML)) have been a true revolution in the advancement of multiple technologies in various application domains. But at the same time there is growing concern about certain intrinsic characteristics of these methodologies that carry potential risks to both safety and fundamental rights. Although there are mechanisms in the adoption process to minimize these risks (e.g., safety regulations), these do not exclude the possibility of harm occurring, and if this happens, victims should be able to seek compensation. Liability regimes will therefore play a key role in ensuring basic protection for victims using or interacting with these systems. However, the same characteristics that make AI systems inherently risky, such as lack of causality, opacity, unpredictability or their self and continuous learning capabilities, may lead to considerable difficulties when it comes to proving causation. This paper presents three case studies, as well as the methodology to reach them, that illustrate these difficulties. Specifically, we address the cases of cleaning robots, delivery drones and robots in education. The outcome of the proposed analysis suggests the need to revise liability regimes to alleviate the burden of proof on victims in cases involving AI technologies.This article appears in the AI & Society track.
Citations: 1
Communication-Aware Local Search for Distributed Constraint Optimization
IF 5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-10-28 | DOI: 10.1613/jair.1.13826
Ben Rachmut
Most studies investigating models and algorithms for distributed constraint optimization problems (DCOPs) assume that messages arrive instantaneously and are never lost. Specifically, distributed local search DCOP algorithms have been designed as synchronous algorithms (i.e., they perform synchronous iterations in which each agent exchanges messages with all its neighbors), despite running in asynchronous environments. This also holds for the anytime mechanism that reports the best solution explored during the run of synchronous distributed local search algorithms. Thus, when the assumption of perfect communication is relaxed, the properties established for the state-of-the-art local search algorithms and the anytime mechanism may not necessarily apply. In this work, we address this limitation by: (1) proposing a Communication-Aware DCOP model (CA-DCOP) that can represent scenarios with different communication disturbances; (2) investigating the performance of existing local search DCOP algorithms, specifically the Distributed Stochastic Algorithm (DSA) and Maximum Gain Messages (MGM), in the presence of message latency and message loss; (3) proposing a latency-aware monotonic distributed local search DCOP algorithm; and (4) proposing an asynchronous anytime framework for reporting the best solution explored by non-monotonic asynchronous local search DCOP algorithms. Our empirical results demonstrate that imperfect communication has a positive effect on distributed local search algorithms due to increased exploration. Furthermore, the asynchronous anytime framework we propose allows one to benefit from algorithms with inherent explorative heuristics.
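The DSA baseline studied above can be sketched under the classic synchronous, perfect-communication assumption: in each iteration, every agent reads its neighbors' current values and, with probability p, moves to a value that reduces its local conflict count. The problem instance, the probability, and the iteration budget below are toy choices:

```python
# Minimal simulation of the Distributed Stochastic Algorithm (DSA-B style)
# on a tiny min-conflict DCOP: a triangle of agents with a 2-value domain,
# so at least one constraint must stay violated.
import random

NEIGHBORS = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
DOMAIN = [0, 1]

def conflicts(agent, value, assignment):
    """Number of this agent's constraints violated by taking `value`."""
    return sum(assignment[n] == value for n in NEIGHBORS[agent])

def dsa(assignment, p=0.7, iterations=20, rng=random.Random(0)):
    for _ in range(iterations):
        new = dict(assignment)
        for agent, value in assignment.items():
            best = min(DOMAIN, key=lambda v: conflicts(agent, v, assignment))
            gain = conflicts(agent, value, assignment) - conflicts(agent, best, assignment)
            if gain > 0 and rng.random() < p:   # stochastic move on positive gain
                new[agent] = best
        assignment = new                        # synchronous "message exchange"
    return assignment

start = {0: 0, 1: 0, 2: 0}                      # every constraint violated
end = dsa(start)
total = sum(conflicts(a, v, end) for a, v in end.items()) // 2
print(end, "conflicts:", total)
```

The stochastic activation (p < 1) is what prevents all agents from flipping simultaneously and oscillating; the paper's point is that once messages can be delayed or lost, even this synchronous-iteration structure no longer holds as designed.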
Citations: 1
Low-Rank Representation of Reinforcement Learning Policies
IF 5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-10-27 | DOI: 10.1613/jair.1.13854
Bogdan Mazoure, T. Doan, Tianyu Li, V. Makarenkov, Joelle Pineau, Doina Precup, Guillaume Rabusseau
We propose a general framework for policy representation for reinforcement learning tasks. This framework involves finding a low-dimensional embedding of the policy on a reproducing kernel Hilbert space (RKHS). The use of RKHS-based methods allows us to derive strong theoretical guarantees on the expected return of the reconstructed policy. Such guarantees are typically lacking in black-box models, but are highly desirable in tasks requiring stability and convergence guarantees. We conduct several experiments on classic RL domains. The results confirm that the policies can be robustly represented in a low-dimensional space while the embedded policy incurs almost no decrease in returns.
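A loose sketch of the embedding idea: represent a policy (here a state-to-action-probability map) by a few coefficients over kernel functions centred on anchor states, then reconstruct the policy from that low-dimensional embedding. The RBF kernel, 1-D states, anchor choice, and gradient fit are illustrative assumptions, not the paper's construction:

```python
# Hedged sketch: a policy is embedded as weights over RBF kernel functions
# at a small set of anchor states, and reconstructed from those weights.
import math

ANCHORS = [0.0, 0.5, 1.0]  # the low-dimensional basis in the RKHS

def rbf(s, c, gamma=1.0):
    return math.exp(-gamma * (s - c) ** 2)

def embed(policy, states, lr=0.5, steps=5000):
    """Fit anchor weights by full-batch gradient descent on squared error."""
    w = [0.0] * len(ANCHORS)
    for _ in range(steps):
        grad = [0.0] * len(ANCHORS)
        for s in states:
            err = sum(wi * rbf(s, c) for wi, c in zip(w, ANCHORS)) - policy(s)
            for i, c in enumerate(ANCHORS):
                grad[i] += err * rbf(s, c)
        for i in range(len(ANCHORS)):
            w[i] -= lr * grad[i] / len(states)
    return w

def reconstruct(w):
    """The embedded policy: a kernel expansion over the anchors."""
    return lambda s: sum(wi * rbf(s, c) for wi, c in zip(w, ANCHORS))

policy = lambda s: 0.2 + 0.6 * s     # toy probability of taking one action
w = embed(policy, ANCHORS)           # three numbers now represent the policy
approx = reconstruct(w)
print(max(abs(approx(s) - policy(s)) for s in ANCHORS))
```

The reconstructed policy stays close to the original on the fitted states, which is a toy analogue of the paper's observation that the embedded policy incurs almost no drop in returns.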
Citations: 0
Planning with Perspectives - Decomposing Epistemic Planning using Functional STRIPS
IF 5 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-10-16 | DOI: 10.1613/jair.1.13446
Guanghua Hu, Tim Miller, N. Lipovetzky
In this paper, we present a novel approach to epistemic planning called planning with perspectives (PWP) that is both more expressive and computationally more efficient than existing state-of-the-art epistemic planning tools. Epistemic planning, that is, planning with knowledge and belief, is essential in many multi-agent and human-agent interaction domains. Most state-of-the-art epistemic planners solve epistemic planning problems either by compiling to propositional classical planning (for example, generating all possible knowledge atoms or compiling epistemic formulae to normal forms) or by explicitly encoding Kripke-based semantics. However, these methods become computationally infeasible as problem sizes grow. In this paper, we decompose epistemic planning by delegating reasoning about epistemic formulae to an external solver. We do this by modelling the problem using Functional STRIPS, which is more expressive than standard STRIPS and supports the use of external, black-box functions within action models. Building on recent work that demonstrates the relationship between what an agent 'sees' and what it knows, we define the perspective of each agent using an external function, and build a solver for epistemic logic around this. Modellers can customise the perspective function of agents, allowing new epistemic logics to be defined without changing the planner. We ran evaluations on well-known epistemic planning benchmarks to compare against an existing state-of-the-art planner, and on new scenarios that demonstrate the expressiveness of the PWP approach. The results show that our PWP planner scales significantly better than the state-of-the-art planner that we compared against, and can express problems more succinctly.
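The perspective idea can be illustrated with a toy external function: a black-box routine computes what each agent sees, and knowledge is derived from sight. The agent names, positions, and distance-based seeing rule are invented for illustration; in the paper, such perspective functions are supplied as external functions to a Functional STRIPS planner:

```python
# Toy illustration: knowledge defined via an external "perspective" function
# rather than an explicit Kripke structure.

POSITIONS = {"alice": 0, "bob": 4, "coin": 1}  # 1-D world, hypothetical names
SIGHT_RANGE = 2

def perspective(agent, positions):
    """External black-box function: the set of items the agent can see."""
    here = positions[agent]
    return {name for name, pos in positions.items()
            if name != agent and abs(pos - here) <= SIGHT_RANGE}

def knows_location(agent, item, positions):
    # An agent knows an item's location iff the item is in its perspective.
    return item in perspective(agent, positions)

print(knows_location("alice", "coin", POSITIONS),  # alice at 0 sees the coin at 1
      knows_location("bob", "coin", POSITIONS))    # bob at 4 is out of range
```

Swapping in a different `perspective` function (cones of vision, walls, hearing) changes the induced epistemic logic without touching the planner, which is the decomposition the paper argues for.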
Journal of Artificial Intelligence Research, pp. 489-539.
Citations: 3
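The external perspective function described in the abstract above can be sketched as a plain black-box function that maps an agent's situation to the set of atoms it sees. The range-based "sees" rule and the one-dimensional corridor domain below are hypothetical, chosen only to show the shape of such a function, not the paper's actual model.

```python
def perspective(agent_pos, atoms, sight_range=2):
    """Return the subset of positional atoms an agent 'sees':
    those within sight_range of it on a 1-D corridor (toy domain).
    Atoms are (name, position) pairs."""
    return {a for a in atoms if abs(a[1] - agent_pos) <= sight_range}

# Two agents at different corridor positions see different atoms.
atoms = {("coin", 0), ("key", 3), ("door", 5)}
near = perspective(1, atoms)   # agent at 1 sees coin and key
far = perspective(5, atoms)    # agent at 5 sees key and door
```

Because the planner treats this as an opaque external function, swapping in a different sight rule (cones of vision, walls, nested perspectives) changes the epistemic logic without touching the planner itself, which is the decomposition the paper argues for.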
Optimality Guarantees for Particle Belief Approximation of POMDPs
IF 5 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2022-10-10 DOI: 10.1613/jair.1.14525
M. H. Lim, Tyler J. Becker, Mykel J. Kochenderfer, C. Tomlin, Zachary Sunberg
Partially observable Markov decision processes (POMDPs) provide a flexible representation for real-world decision and control problems. However, POMDPs are notoriously difficult to solve, especially when the state and observation spaces are continuous or hybrid, which is often the case for physical systems. While recent online sampling-based POMDP algorithms that plan with observation likelihood weighting have shown practical effectiveness, a general theory characterizing the approximation error of the particle filtering techniques that these algorithms use has not previously been proposed. Our main contribution is bounding the error between any POMDP and its corresponding finite sample particle belief MDP (PB-MDP) approximation. This fundamental bridge between PB-MDPs and POMDPs allows us to adapt any sampling-based MDP algorithm to a POMDP by solving the corresponding particle belief MDP, thereby extending the convergence guarantees of the MDP algorithm to the POMDP. Practically, this is implemented by using the particle filter belief transition model as the generative model for the MDP solver. While this requires access to the observation density model from the POMDP, it only increases the transition sampling complexity of the MDP solver by a factor of O(C), where C is the number of particles. Thus, when combined with sparse sampling MDP algorithms, this approach can yield algorithms for POMDPs that have no direct theoretical dependence on the size of the state and observation spaces. In addition to our theoretical contribution, we perform five numerical experiments on benchmark POMDPs to demonstrate that a simple MDP algorithm adapted using PB-MDP approximation, Sparse-PFT, achieves performance competitive with other leading continuous observation POMDP solvers.
Journal of Artificial Intelligence Research, pp. 1591-1636.
Citations: 2
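The particle-belief approximation the paper bounds rests on observation-likelihood-weighted particle filtering. A minimal bootstrap-filter step is sketched below; the toy random-walk transition and sign-observation model are assumptions for illustration, not the paper's solver or benchmark.

```python
import random

def particle_filter_step(particles, action, transition, obs_likelihood, obs, rng):
    """One weighted-resampling step: propagate each particle through the
    transition model, weight by observation likelihood, then resample."""
    propagated = [transition(s, action, rng) for s in particles]
    weights = [obs_likelihood(obs, s) for s in propagated]
    total = sum(weights)
    if total == 0:                       # degenerate belief: keep propagated set
        return propagated
    probs = [w / total for w in weights]
    return rng.choices(propagated, weights=probs, k=len(particles))

# Toy 1-D random walk with a noisy observation of the state's sign.
def transition(s, a, rng):
    return s + a + rng.choice([-1, 0, 1])

def obs_likelihood(o, s):
    return 0.8 if (s >= 0) == o else 0.2

rng = random.Random(0)
belief = [0] * 100
belief = particle_filter_step(belief, 1, transition, obs_likelihood, True, rng)
print(len(belief))  # 100: the particle count is preserved across updates
```

In the paper's framing, a belief state of the particle belief MDP is exactly such a weighted particle set, so any MDP solver that only needs a generative model can be run on top of this update.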
Multi-Agent Path Finding: A New Boolean Encoding
IF 5 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2022-09-29 DOI: 10.1613/jair.1.13818
Roberto Asín Achá, Rodrigo López, Sebastián Hagedorn, Jorge A. Baier
Multi-agent pathfinding (MAPF) is an NP-hard problem. As such, dense maps may be very hard to solve optimally. In such scenarios, compilation-based approaches, via Boolean satisfiability (SAT) and answer set programming (ASP), have been shown to outperform heuristic-search-based approaches, such as conflict-based search (CBS). In this paper, we propose a new Boolean encoding for MAPF, and show how to implement it in ASP and MaxSAT. A feature that distinguishes our encoding from existing ones is that swap and follow conflicts are encoded using binary clauses, which can be exploited by current conflict-driven clause learning (CDCL) solvers. In addition, the number of clauses used to encode swap and follow conflicts do not depend on the number of agents, allowing us to scale better. For MaxSAT, we study different ways in which we may combine the MSU3 and LSU algorithms for maximum performance. In our experimental evaluation, we used square grids, ranging from 20 x 20 to 50 x 50 cells, and warehouse maps, with a varying number of agents and obstacles. We compared against representative solvers of the state-of-the-art, including the search-based algorithm CBS, the ASP-based solver ASP-MAPF, and the branch-and-cut-and-price hybrid solver, BCP. We observe that the ASP implementation of our encoding, ASP-MAPF2 outperforms other solvers in most of our experiments. The MaxSAT implementation of our encoding, MtMS shows best performance in relatively small warehouse maps when the number of agents is large, which are the instances with closer resemblance to hard puzzle-like problems.
Journal of Artificial Intelligence Research, pp. 323-350.
Citations: 2
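The distinguishing feature named in the abstract above, swap conflicts as binary clauses, can be illustrated with a toy CNF generator over edge-traversal variables. Note the hedge: the naive pairwise sketch below grows with the number of agent pairs, unlike the paper's encoding, whose swap/follow clause count is independent of the agent count; the variable numbering and `swap_clauses` helper are hypothetical, and the sketch shows only that each swap constraint needs just two literals.

```python
def move_var(agent, u, v, t, index):
    """DIMACS id for 'agent traverses edge u->v between steps t and t+1'
    (hypothetical numbering via a shared index dict)."""
    key = (agent, u, v, t)
    if key not in index:
        index[key] = len(index) + 1
    return index[key]

def swap_clauses(num_agents, edges, horizon):
    """Binary clauses forbidding two agents from traversing the same
    edge in opposite directions at the same step."""
    index, clauses = {}, []
    for (u, v) in edges:
        for t in range(horizon):
            for i in range(num_agents):
                for j in range(i + 1, num_agents):
                    clauses.append([-move_var(i, u, v, t, index),
                                    -move_var(j, v, u, t, index)])
    return clauses

cls = swap_clauses(num_agents=2, edges=[(0, 1)], horizon=2)
print(cls)  # [[-1, -2], [-3, -4]] — every clause has exactly two literals
```

Binary clauses like these are cheap for conflict-driven clause learning solvers to propagate, which is the advantage the abstract points to.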
Journal: Journal of Artificial Intelligence Research
Book学术 provides a free academic resource search service, helping scholars in China and abroad find Chinese and English literature, and is committed to a convenient, high-quality service experience.
Copyright © 2023 Book学术 All rights reserved.
京公网安备 11010802042870号 京ICP备2023020795号-1