
Proceedings of the ... International Conference on Image Analysis and Processing: Latest Publications

Analyzing EEG Data with Machine and Deep Learning: A Benchmark
D. Avola, Marco Cascio, L. Cinque, Alessio Fagioli, G. Foresti, Marco Raoul Marini, D. Pannone
Nowadays, machine and deep learning techniques are widely used in areas ranging from economics to biology. In general, these techniques can be applied in two ways: adapting well-known models and architectures to the available data, or designing custom architectures. In both cases, to speed up the research process, it is useful to know which types of model work best for a specific problem and/or data type. Focusing on EEG signal analysis, and for the first time in the literature, this paper proposes a benchmark of machine and deep learning methods for EEG signal classification. In our experiments we used the four most widespread models, i.e., the multilayer perceptron, convolutional neural network, long short-term memory, and gated recurrent unit, highlighting which of them is a good starting point for developing EEG classification models.
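To make the comparison concrete, the following PyTorch sketch instantiates minimal versions of the four model families named in the abstract. The channel count, window length, class count, and layer sizes are arbitrary placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

# Hypothetical shapes: batches of EEG windows with C channels and T samples.
C, T, NUM_CLASSES = 32, 128, 4

# Multilayer perceptron: flatten the window and classify directly.
mlp = nn.Sequential(nn.Flatten(), nn.Linear(C * T, 256),
                    nn.ReLU(), nn.Linear(256, NUM_CLASSES))

# 1-D convolutional network over the time axis, pooled to a fixed size.
cnn = nn.Sequential(nn.Conv1d(C, 64, kernel_size=7, padding=3), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                    nn.Linear(64, NUM_CLASSES))

class Recurrent(nn.Module):
    """LSTM or GRU over time, classifying from the last hidden state."""
    def __init__(self, cell):
        super().__init__()
        self.rnn = cell(input_size=C, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, NUM_CLASSES)

    def forward(self, x):                         # x: (B, C, T)
        out, _ = self.rnn(x.transpose(1, 2))      # -> (B, T, 64)
        return self.head(out[:, -1])              # last time step

lstm, gru = Recurrent(nn.LSTM), Recurrent(nn.GRU)

x = torch.randn(8, C, T)
for model in (mlp, cnn, lstm, gru):
    assert model(x).shape == (8, NUM_CLASSES)
```

All four share the same input/output contract, so they can be swapped into one training loop for a benchmark of this kind.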
DOI: 10.48550/arXiv.2203.10009 · Published: 2022-03-18 · Pages: 335-345
Citations: 0
Learning video retrieval models with relevance-aware online mining
Alex Falcon, G. Serra, O. Lanz
Due to the number of videos and related captions uploaded every hour, deep learning-based solutions for cross-modal video retrieval are attracting increasing attention. A typical approach consists in learning a joint text-video embedding space in which the similarity of a video and its associated caption is maximized, whereas a lower similarity is enforced with all the other captions, called negatives. This approach assumes that only the caption paired with a video in the dataset is valid for it, but other captions may also describe its visual contents, and some of them may therefore be wrongly penalized. To address this shortcoming, we propose Relevance-Aware Negatives and Positives mining (RANP), which, based on the semantics of the negatives, improves their selection while also increasing the similarity to other valid positives. We explore the influence of these techniques on two video-text datasets: EPIC-Kitchens-100 and MSR-VTT. Using the proposed techniques, we achieve considerable improvements in terms of nDCG and mAP, leading to state-of-the-art results, e.g. +5.3% nDCG and +3.0% mAP on EPIC-Kitchens-100. We share code and pretrained models at https://github.com/aranciokov/ranp.
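The core idea, excluding semantically relevant captions from the negative pool before hard-negative mining, can be sketched as follows. This is a toy NumPy illustration under assumed inputs, not the paper's RANP implementation:

```python
import numpy as np

def relevance_aware_triplet(sim, rel, margin=0.2):
    """Margin loss where the hardest negative for video i is the most
    similar caption NOT marked as relevant to it.

    sim[i, j] -- similarity between video i and caption j (paired on the
                 diagonal); rel[i, j] -- caption j is semantically relevant
                 to video i and must not be used as a negative.
    """
    losses = []
    for i in range(len(sim)):
        pos = sim[i, i]                              # the paired caption
        masked = np.where(rel[i], -np.inf, sim[i])   # drop relevant captions
        hard_neg = masked.max()                      # hardest admissible negative
        losses.append(max(0.0, margin + hard_neg - pos))
    return float(np.mean(losses))

sim = np.array([[0.9, 0.8, 0.3],
                [0.1, 0.5, 0.2],
                [0.4, 0.2, 0.7]])
rel = np.eye(3, dtype=bool)
rel[0, 1] = True   # caption 1 also describes video 0: not a negative for it
loss = relevance_aware_triplet(sim, rel)
```

With the extra relevance flag, caption 1 (similarity 0.8 to video 0) is no longer pushed away from it, so the loss drops relative to plain hard-negative mining with `rel = np.eye(3, dtype=bool)` alone.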
DOI: 10.48550/arXiv.2203.08688 · Published: 2022-03-16 · Pages: 182-194
Citations: 5
MOBDrone: a Drone Video Dataset for Man OverBoard Rescue
Donato Cafarelli, Luca Ciampi, Lucia Vadicamo, C. Gennaro, A. Berton, M. Paterni, C. Benvenuti, M. Passera, F. Falchi
Modern Unmanned Aerial Vehicles (UAVs) equipped with cameras can play an essential role in speeding up the identification and rescue of people who have fallen overboard, i.e., man overboard (MOB). To this end, Artificial Intelligence techniques can be leveraged for the automatic understanding of visual data acquired from drones. However, detecting people at sea in aerial imagery is challenging, primarily due to the lack of specialized annotated datasets for training and testing detectors for this task. To fill this gap, we introduce and publicly release the MOBDrone benchmark, a collection of more than 125K drone-view images in a marine environment under several conditions, such as different altitudes, camera shooting angles, and illumination. We manually annotated more than 180K objects, of which about 113K are people overboard, precisely localizing them with bounding boxes. Moreover, we conduct a thorough performance analysis of several state-of-the-art object detectors on the MOBDrone data, serving as baselines for further research.
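Evaluating detectors against box annotations of this kind rests on the standard intersection-over-union measure. A minimal sketch, assuming corner-format boxes (this is an illustration, not the benchmark's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Half-overlapping boxes of equal size share 1/3 of their union.
assert abs(iou((0, 0, 10, 10), (5, 0, 15, 10)) - 1 / 3) < 1e-12
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5, from which precision/recall baselines like those in the paper follow.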
DOI: 10.48550/arXiv.2203.07973 · Published: 2022-03-15 · Pages: 633-644
Citations: 11
Decontextualized I3D ConvNet for ultra-distance runners performance analysis at a glance
David Freire-Obregón, J. Lorenzo-Navarro, M. C. Santana
DOI: 10.1007/978-3-031-06433-3_21 · Published: 2022-03-13 · Pages: 242-253
Citations: 3
Improve Convolutional Neural Network Pruning by Maximizing Filter Variety
Nathan Hubens, M. Mancas, B. Gosselin, Marius Preda, T. Zaharia
Neural network pruning is a widely used strategy for reducing model storage and computing requirements. It lowers the complexity of the network by introducing sparsity in the weights. Because taking advantage of sparse matrices is still challenging, pruning is often performed in a structured way, i.e. removing entire convolution filters in the case of ConvNets, according to a chosen pruning criterion. Common pruning criteria, such as the l1-norm or movement, usually do not consider the individual utility of filters, which may lead to: (1) the removal of filters exhibiting rare, thus important and discriminative, behaviour, and (2) the retention of filters with redundant information. In this paper, we present a technique that solves these two issues and can be appended to any pruning criterion. This technique ensures that the selection criterion focuses on redundant filters while retaining the rare ones, thus maximizing the variety of the remaining filters. The experimental results, carried out on different datasets (CIFAR-10, CIFAR-100 and CALTECH-101) and using different architectures (VGG-16 and ResNet-18), demonstrate that it is possible to achieve similar sparsity levels while maintaining a higher performance when appending our filter selection technique to pruning criteria. Moreover, we assess the quality of the found sparse sub-networks by applying the Lottery Ticket Hypothesis and find that the addition of our method allows us to discover better performing tickets in most cases.
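The redundancy-first selection idea can be sketched as follows: score each filter by its highest cosine similarity to any other filter in the layer, and target the most redundant ones. This is toy NumPy code under assumed weight shapes, not the authors' implementation:

```python
import numpy as np

# Hypothetical conv-layer weights, shaped (out_filters, in_channels, k, k);
# filter 5 is planted as a near-copy of filter 2 to simulate redundancy.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3, 3, 3))
W[5] = 1.01 * W[2] + rng.normal(scale=1e-3, size=(3, 3, 3))

def most_redundant(W, n_prune):
    """Indices of the n_prune filters most similar to some other filter."""
    F = W.reshape(len(W), -1)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)   # unit-normalize
    cos = np.abs(F @ F.T)                              # pairwise |cosine|
    np.fill_diagonal(cos, 0.0)                         # ignore self-similarity
    redundancy = cos.max(axis=1)                       # best match per filter
    return np.argsort(redundancy)[-n_prune:]
```

Here both members of the duplicated pair score highest; a full implementation would combine this ranking with a base criterion (e.g. l1-norm) and keep one member of each duplicate group, so that rare, discriminative filters survive pruning.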
DOI: 10.48550/arXiv.2203.05807 · Published: 2022-03-11 · Pages: 379-390
Citations: 1
Avalanche RL: a Continual Reinforcement Learning Library
Nicolo Lucchesi, Antonio Carta, Vincenzo Lomonaco
DOI: 10.1007/978-3-031-06427-2_44 · Published: 2022-02-28 · Pages: 524-535
Citations: 4
StandardSim: A Synthetic Dataset For Retail Environments
Cristina Mata, Nick Locascio, Mohammed Azeem Sheikh, Kenny Kihara, Daniel L. Fischetti
DOI: 10.1007/978-3-031-06430-2_6 · Published: 2022-02-04 · Pages: 65-76
Citations: 5
Learning Semantics for Visual Place Recognition through Multi-Scale Attention
Valerio Paolicelli, A. Tavera, Gabriele Berton, C. Masone, Barbara Caputo
DOI: 10.1007/978-3-031-06430-2_38 · Published: 2022-01-24 · Pages: 454-466
Citations: 8
A Robust and Efficient Overhead People Counting System for Retail Applications
Antonio Greco, A. Saggese, Bruno Vento
DOI: 10.1007/978-3-031-06430-2_12 · Published: 2022-01-01 · Pages: 139-150
Citations: 0
3D Key-Points Estimation from Single-View RGB Images
M. Zohaib, M. Taiana, Milind G. Padalkar, A. D. Bue
DOI: 10.1007/978-3-031-06430-2_3 · Published: 2022-01-01 · Pages: 27-38
Citations: 3