
Latest Publications in Artif. Intell.

Entropy Estimation via Uniformization
Pub Date: 2023-04-19 | DOI: 10.48550/arXiv.2304.09700
Ziqiao Ao, Jinglai Li
Entropy estimation is of practical importance in information theory and statistical science. Many existing entropy estimators suffer from estimation bias that grows rapidly with dimensionality, rendering them unsuitable for high-dimensional problems. In this work we propose a transform-based method for high-dimensional entropy estimation, which consists of two main ingredients. First, by modifying the k-NN based entropy estimator, we propose a new estimator that enjoys small estimation bias for samples close to a uniform distribution. Second, we design a normalizing-flow-based mapping that pushes samples toward a uniform distribution, and we derive the relation between the entropy of the original samples and that of the transformed ones. As a result, the entropy of a given set of samples is estimated by first transforming them toward a uniform distribution and then applying the proposed estimator to the transformed samples. The performance of the proposed method is compared against several existing entropy estimators, on both mathematical examples and real-world applications.
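The abstract only sketches the procedure, so here is a minimal illustration of the uniformization idea. It pairs the classical Kozachenko-Leonenko k-NN estimator (not the authors' modified version) with the change-of-variables relation H(X) = H(T(X)) - E[log|det J_T(X)|]; the `transform` and `log_det_jac` callables stand in for a trained normalizing flow and are assumptions, not part of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln


def knn_entropy(samples, k=5):
    """Classical Kozachenko-Leonenko k-NN estimate of differential entropy (nats)."""
    n, d = samples.shape
    tree = cKDTree(samples)
    # Distance to the k-th nearest neighbour; the closest hit is the point itself.
    dists, _ = tree.query(samples, k=k + 1)
    eps = dists[:, -1]
    # Log-volume of the d-dimensional unit ball.
    log_c_d = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps))


def entropy_via_uniformization(x, transform, log_det_jac, k=5):
    """Estimate H(X) by uniformizing first.

    `transform` maps samples toward a uniform distribution (e.g. a trained
    normalizing flow) and `log_det_jac` returns log|det dT/dx| per sample;
    both are hypothetical placeholders here.
    """
    z = transform(x)
    # Change of variables: H(X) = H(T(X)) - E[log |det J_T(X)|].
    return knn_entropy(z, k=k) - np.mean(log_det_jac(x))
```

The point of transforming before estimating is that the k-NN estimator's bias stays small on near-uniform samples, which is exactly where the flow pushes them.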
Citations: 0
Task-Guided IRL in POMDPs that Scales
Pub Date: 2022-12-30 | DOI: 10.48550/arXiv.2301.01219
Franck Djeumou, Christian Ellis, Murat Cubuktepe, Craig T. Lennon, U. Topcu
In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts. However, many existing IRL techniques make the often unrealistic assumption that the agent has access to full information about the environment. We remove this assumption by developing an algorithm for IRL in partially observable Markov decision processes (POMDPs). We address two limitations of existing IRL techniques. First, they require an excessive amount of data due to the information asymmetry between the expert and the learner. Second, most of these IRL techniques require solving the computationally intractable forward problem -- computing an optimal policy given a reward function -- in POMDPs. The developed algorithm reduces the information asymmetry while increasing data efficiency by incorporating task specifications expressed in temporal logic into IRL. Such specifications may be interpreted as side information available to the learner a priori, in addition to the demonstrations. Further, the algorithm avoids a common source of algorithmic complexity by building on causal entropy, rather than entropy, as the measure of the likelihood of the demonstrations. Nevertheless, the resulting problem is nonconvex due to the so-called forward problem. We solve the intrinsic nonconvexity of the forward problem in a scalable manner through a sequential linear programming scheme that is guaranteed to converge to a locally optimal policy. In a series of examples, including experiments in a high-fidelity Unity simulator, we demonstrate that, even with a limited amount of data and POMDPs with tens of thousands of states, our algorithm learns reward functions and policies that satisfy the task while inducing behavior similar to the expert's by leveraging the provided side information.
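For context, the causal entropy referred to above is typically defined as follows (a generic textbook definition with notation assumed here, not a formula quoted from this paper); each action is scored against the policy given only the history available when it is taken:

$$
H^{\text{causal}}(A_{1:T} \,\|\, O_{1:T}) \;=\; \mathbb{E}\!\left[-\sum_{t=1}^{T} \log \pi\!\left(a_t \mid o_{1:t}, a_{1:t-1}\right)\right].
$$

Maximum-causal-entropy IRL then maximizes this quantity subject to matching the expert's expected feature counts; computing the inner optimal policy for a candidate reward is the "forward problem" that makes the overall program nonconvex.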
Citations: 0
Defense coordination in security games: Equilibrium analysis and mechanism design
Pub Date: 2022-09-01 | DOI: 10.1016/j.artint.2022.103791
Jiarui Gan, E. Elkind, Sarit Kraus, M. Wooldridge
{"title":"Defense coordination in security games: Equilibrium analysis and mechanism design","authors":"Jiarui Gan, E. Elkind, Sarit Kraus, M. Wooldridge","doi":"10.1016/j.artint.2022.103791","DOIUrl":"https://doi.org/10.1016/j.artint.2022.103791","url":null,"abstract":"","PeriodicalId":8496,"journal":{"name":"Artif. Intell.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81771618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Measuring power in coalitional games with friends, enemies and allies
Pub Date: 2022-09-01 | DOI: 10.1016/j.artint.2022.103792
Oskar Skibski, Takamasa Suzuki, Tomasz Grabowski, Y. Sakurai, Tomasz P. Michalak, M. Yokoo
{"title":"Measuring power in coalitional games with friends, enemies and allies","authors":"Oskar Skibski, Takamasa Suzuki, Tomasz Grabowski, Y. Sakurai, Tomasz P. Michalak, M. Yokoo","doi":"10.1016/j.artint.2022.103792","DOIUrl":"https://doi.org/10.1016/j.artint.2022.103792","url":null,"abstract":"","PeriodicalId":8496,"journal":{"name":"Artif. Intell.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81025926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Reasoning about general preference relations
Pub Date: 2022-09-01 | DOI: 10.1016/j.artint.2022.103793
Davide Grossi, W. van der Hoek, Louwe B. Kuijer
{"title":"Reasoning about general preference relations","authors":"Davide Grossi, W. van der Hoek, Louwe B. Kuijer","doi":"10.1016/j.artint.2022.103793","DOIUrl":"https://doi.org/10.1016/j.artint.2022.103793","url":null,"abstract":"","PeriodicalId":8496,"journal":{"name":"Artif. Intell.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84190819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Discovering Agents
Pub Date: 2022-08-17 | DOI: 10.48550/arXiv.2208.08345
Z. Kenton, Ramana Kumar, Sebastian Farquhar, Jonathan G. Richens, Matt MacDermott, Tom Everitt
Causal models of agents have been used to analyse the safety aspects of machine learning systems. But identifying agents is non-trivial -- often the causal model is just assumed by the modeler without much justification -- and modelling failures can lead to mistakes in the safety analysis. This paper proposes the first formal causal definition of agents -- roughly that agents are systems that would adapt their policy if their actions influenced the world in a different way. From this we derive the first causal discovery algorithm for discovering agents from empirical data, and give algorithms for translating between causal models and game-theoretic influence diagrams. We demonstrate our approach by resolving some previous confusions caused by incorrect causal modelling of agents.
Citations: 11
Simplified Risk-aware Decision Making with Belief-dependent Rewards in Partially Observable Domains
Pub Date: 2022-08-01 | DOI: 10.1016/j.artint.2022.103775
A. Zhitnikov, V. Indelman
{"title":"Simplified Risk-aware Decision Making with Belief-dependent Rewards in Partially Observable Domains","authors":"A. Zhitnikov, V. Indelman","doi":"10.1016/j.artint.2022.103775","DOIUrl":"https://doi.org/10.1016/j.artint.2022.103775","url":null,"abstract":"","PeriodicalId":8496,"journal":{"name":"Artif. Intell.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73670195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Q-Learning-based model predictive variable impedance control for physical human-robot collaboration
Pub Date: 2022-08-01 | DOI: 10.1016/j.artint.2022.103771
L. Roveda, Andrea Testa, Asad Ali Shahid, F. Braghin, D. Piga
{"title":"Q-Learning-based model predictive variable impedance control for physical human-robot collaboration","authors":"L. Roveda, Andrea Testa, Asad Ali Shahid, F. Braghin, D. Piga","doi":"10.1016/j.artint.2022.103771","DOIUrl":"https://doi.org/10.1016/j.artint.2022.103771","url":null,"abstract":"","PeriodicalId":8496,"journal":{"name":"Artif. Intell.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91345015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Safe, Learning-Based MPC for Highway Driving under Lane-Change Uncertainty: A Distributionally Robust Approach
Pub Date: 2022-06-27 | DOI: 10.48550/arXiv.2206.13319
Mathijs Schuurmans, Alexander Katriniok, Chris Meissen, H. E. Tseng, Panagiotis Patrinos
We present a case study applying learning-based distributionally robust model predictive control to highway motion planning under stochastic uncertainty in the lane-change behavior of surrounding road users. The dynamics of road users are modelled using Markov jump systems, in which the switching variable describes the desired lane of the vehicle under consideration and the continuous state describes the pose and velocity of the vehicles. We assume the switching probabilities of the underlying Markov chain to be unknown. As the vehicle is observed and samples from the Markov chain are thus drawn, the transition probabilities are estimated along with an ambiguity set that accounts for misestimation of these probabilities. Correspondingly, a distributionally robust optimal control problem is formulated over a scenario tree and solved in receding horizon. As a result, we obtain a motion planning procedure that, through observation of the target vehicle, gradually becomes less conservative while avoiding overconfidence in estimates obtained from small sample sizes. We present an extensive numerical case study comparing the effects of several different design aspects on controller performance and safety.
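To make the "ambiguity set" step concrete, the sketch below estimates the mode-transition probabilities of the lane-change Markov chain from observed transitions and attaches a per-row l1 radius derived from a Weissman-type concentration bound. This is an illustrative assumption, not necessarily the construction used by the authors; the function and argument names are hypothetical.

```python
import numpy as np


def estimate_modes_with_ambiguity(transitions, n_modes, delta=0.05):
    """Empirical transition matrix of the lane-change (mode) Markov chain,
    plus a per-row l1 ambiguity radius such that, under i.i.d. sampling
    assumptions, the true row lies in the ball with probability >= 1 - delta.

    `transitions` is a list of observed (mode_t, mode_{t+1}) index pairs.
    """
    counts = np.zeros((n_modes, n_modes))
    for i, j in transitions:
        counts[i, j] += 1.0
    row_totals = counts.sum(axis=1)

    p_hat = np.full((n_modes, n_modes), 1.0 / n_modes)  # fallback for unseen modes
    radius = np.full(n_modes, 2.0)                      # l1 diameter of the simplex
    for i in range(n_modes):
        if row_totals[i] > 0:
            p_hat[i] = counts[i] / row_totals[i]
            # Weissman et al. l1 concentration bound for empirical distributions.
            radius[i] = np.sqrt(2.0 * np.log((2.0 ** n_modes - 2.0) / delta)
                                / row_totals[i])
    return p_hat, radius
```

A distributionally robust controller would then, at each node of the scenario tree, optimize against the worst-case distribution inside these balls, so the conservatism shrinks as more lane changes of the target vehicle are observed.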
Citations: 7
A polynomial reduction of forks into logic programs
Pub Date: 2022-03-01 | DOI: 10.1016/j.artint.2022.103712
Felicidad Aguado, Pedro Cabalar, Jorge Fandinno, D. Pearce, Gilberto Pérez, Concepción Vidal Martín
{"title":"A polynomial reduction of forks into logic programs","authors":"Felicidad Aguado, Pedro Cabalar, Jorge Fandinno, D. Pearce, Gilberto Pérez, Concepción Vidal Martín","doi":"10.1016/j.artint.2022.103712","DOIUrl":"https://doi.org/10.1016/j.artint.2022.103712","url":null,"abstract":"","PeriodicalId":8496,"journal":{"name":"Artif. Intell.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88860666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0