
Information and Inference: A Journal of the IMA — Latest Publications

OUP accepted manuscript
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2021-01-01 · DOI: 10.1093/imaiai/iaab020
Citations: 1
OUP accepted manuscript
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2021-01-01 · DOI: 10.1093/imaiai/iaab029
Citations: 2
OUP accepted manuscript
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2021-01-01 · DOI: 10.1093/imaiai/iaab022
Citations: 3
OUP accepted manuscript
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2021-01-01 · DOI: 10.1093/imaiai/iaab021
Citations: 0
Concentration inequalities for the empirical distribution of discrete distributions: beyond the method of types
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2020-12-16 · DOI: 10.1093/imaiai/iaz025 · Vol. 9, pp. 813–850
Jay Mardia, Jiantao Jiao, Ervin Tánczos, R. Nowak, T. Weissman
We study concentration inequalities for the Kullback–Leibler (KL) divergence between the empirical distribution and the true distribution. Applying a recursion technique, we improve over the method of types bound uniformly in all regimes of sample size n and alphabet size k, and the improvement becomes more significant when k is large. We discuss the applications of our results in obtaining tighter concentration inequalities for L1 deviations of the empirical distribution from the true distribution, and the difference between concentration around the expectation or zero. We also obtain asymptotically tight bounds on the variance of the KL divergence between the empirical and true distribution, and demonstrate their quantitatively different behaviors between small and large sample sizes compared to the alphabet size.
Citations: 29
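As a concrete illustration of the quantity these bounds control — not of the paper's recursion technique itself — the KL divergence between the empirical distribution of n samples and the true distribution over an alphabet of size k can be computed directly (function name and the uniform example are illustrative):

```python
import numpy as np

def empirical_kl(counts, p):
    """D(p_hat || p): KL divergence between the empirical distribution
    implied by `counts` (from n samples) and the true distribution p."""
    n = counts.sum()
    p_hat = counts / n
    mask = p_hat > 0            # 0 * log(0) terms contribute nothing
    return float(np.sum(p_hat[mask] * np.log(p_hat[mask] / p[mask])))

# Draw n samples from a uniform distribution on k symbols and measure
# how far the empirical distribution sits from the truth.
rng = np.random.default_rng(0)
k, n = 10, 1000
p = np.full(k, 1.0 / k)
counts = rng.multinomial(n, p)
d = empirical_kl(counts, p)
assert d >= 0.0                 # KL divergence is always non-negative
```

The concentration results above describe how tightly this random quantity clusters as n and k vary; for n much larger than k it is typically of order k/n.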
Analysis of fast structured dictionary learning
IF 1.4 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2020-12-01 (Epub 2019-11-19) · DOI: 10.1093/imaiai/iaz028 · Vol. 9(4), pp. 785–811
Saiprasad Ravishankar, Anna Ma, Deanna Needell

Sparsity-based models and techniques have been exploited in many signal processing and imaging applications. Data-driven methods based on dictionary and sparsifying transform learning enable learning rich image features from data and can outperform analytical models. In particular, alternating optimization algorithms have been popular for learning such models. In this work, we focus on alternating minimization for a specific structured unitary sparsifying operator learning problem and provide a convergence analysis. While the algorithm converges to the critical points of the problem generally, our analysis establishes under mild assumptions, the local linear convergence of the algorithm to the underlying sparsifying model of the data. Analysis and numerical simulations show that our assumptions hold for standard probabilistic data models. In practice, the algorithm is robust to initialization.

Citations: 0
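A minimal sketch of the kind of alternating minimization analyzed above, under the structured unitary operator model: with the operator constrained to be unitary, the sparse-coding step reduces to hard thresholding and the operator update is an orthogonal Procrustes problem solved by SVD. The top-s per-column sparsity choice and function names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def hard_threshold(A, s):
    """Keep the s largest-magnitude entries of each column, zero the rest."""
    Z = np.zeros_like(A)
    idx = np.argsort(-np.abs(A), axis=0)[:s]
    np.put_along_axis(Z, idx, np.take_along_axis(A, idx, axis=0), axis=0)
    return Z

def unitary_operator_learning(X, s, iters=50, seed=0):
    """Alternating minimization of ||W X - Z||_F^2 over unitary W and
    column-wise s-sparse Z; both subproblems have closed-form solutions."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random unitary init
    for _ in range(iters):
        Z = hard_threshold(W @ X, s)       # sparse-coding step
        U, _, Vt = np.linalg.svd(X @ Z.T)  # Procrustes: maximize tr(W X Z^T)
        W = Vt.T @ U.T                     # optimal unitary update
    return W, Z
```

Each step decreases the objective, which is the structure the local linear convergence analysis builds on.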
Matchability of heterogeneous networks pairs
IF 1.4 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2020-12-01 (Epub 2020-01-06) · DOI: 10.1093/imaiai/iaz031 · Vol. 9(4), pp. 749–783
Vince Lyzinski, Daniel L Sussman

We consider the problem of graph matchability in non-identically distributed networks. In a general class of edge-independent networks, we demonstrate that graph matchability can be lost with high probability when matching the networks directly. We further demonstrate that under mild model assumptions, matchability is almost perfectly recovered by centering the networks using universal singular value thresholding before matching. These theoretical results are then demonstrated in both real and synthetic simulation settings. We also recover analogous core-matchability results in a very general core-junk network model, wherein some vertices do not correspond between the graph pair.

Citations: 0
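The centering step referenced above can be sketched as follows — a hedged illustration of universal singular value thresholding applied to an adjacency matrix, where the threshold constant and the clipping to [0, 1] are conventional choices rather than necessarily those used in the paper:

```python
import numpy as np

def usvt(A, tau_const=2.01):
    """Universal singular value thresholding: estimate E[A] by keeping
    only singular values above tau_const * sqrt(n)."""
    n = A.shape[0]
    U, s, Vt = np.linalg.svd(A)
    keep = s >= tau_const * np.sqrt(n)
    P = (U[:, keep] * s[keep]) @ Vt[keep]
    return np.clip(P, 0.0, 1.0)   # edge probabilities live in [0, 1]

def center(A):
    """Subtract the estimated expectation before matching."""
    return A - usvt(A)
```

On a dense Erdős–Rényi graph the noise singular values fall below the threshold, so the estimate recovers the (approximately constant) edge-probability matrix, and `center(A)` removes the heterogeneous mean structure that can destroy matchability.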
Overlap matrix concentration in optimal Bayesian inference
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa008 · Vol. 10, pp. 597–623
Jean Barbier
We consider models of Bayesian inference of signals with vectorial components of finite dimensionality. We show that under a proper perturbation, these models are replica symmetric in the sense that the overlap matrix concentrates. The overlap matrix is the order parameter in these models and is directly related to error metrics such as minimum mean-square errors. Our proof is valid in the optimal Bayesian inference setting. This means that it relies on the assumption that the model and all its hyper-parameters are known so that the posterior distribution can be written exactly. Examples of important problems in high-dimensional inference and learning to which our results apply are low-rank tensor factorization, the committee machine neural network with a finite number of hidden neurons in the teacher–student scenario or multi-layer versions of the generalized linear model.
Citations: 23
Tight recovery guarantees for orthogonal matching pursuit under Gaussian noise
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa021 · Vol. 10, pp. 573–595
Chen Amiraz, Robert Krauthgamer, Boaz Nadler
Orthogonal matching pursuit (OMP) is a popular algorithm to estimate an unknown sparse vector from multiple linear measurements of it. Assuming exact sparsity and that the measurements are corrupted by additive Gaussian noise, the success of OMP is often formulated as exactly recovering the support of the sparse vector. Several authors derived a sufficient condition for exact support recovery by OMP with high probability depending on the signal-to-noise ratio, defined as the magnitude of the smallest non-zero coefficient of the vector divided by the noise level. We make two contributions. First, we derive a slightly sharper sufficient condition for two variants of OMP, in which either the sparsity level or the noise level is known. Next, we show that this sharper sufficient condition is tight, in the following sense: for a wide range of problem parameters, there exist a dictionary of linear measurements and a sparse vector with a signal-to-noise ratio slightly below that of the sufficient condition, for which with high probability OMP fails to recover its support. Finally, we present simulations that illustrate that our condition is tight for a much broader range of dictionaries.
Citations: 4
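For reference, the OMP iteration discussed above can be sketched in a few lines — a generic textbook implementation, not the specific known-sparsity or known-noise-level variants analyzed in the paper:

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily add the column of A most
    correlated with the residual, then re-fit least squares on the
    selected support, for s iterations."""
    r = y.copy()
    support = []
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r)))   # best-correlated column
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s           # orthogonalized residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, sorted(support)
```

When the columns of A are orthonormal this recovers an s-sparse vector exactly from noiseless measurements; the paper's results quantify how much additive Gaussian noise the support-recovery guarantee can tolerate.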
Erratum to: Super-resolution of near-colliding point sources
IF 1.6 · CAS Tier 4 (Mathematics) · Q2 (Mathematics, Applied) · Pub Date: 2020-10-01 · DOI: 10.1093/imaiai/iaaa015 · Vol. 10, p. 721
Dmitry Batenkov, Gil Goldman, Yosef Yomdin
Citations: 1