
News. Phi Delta Epsilon: Latest Publications

Alternative Data Augmentation for Industrial Monitoring using Adversarial Learning
Pub Date : 2022-05-09 DOI: 10.48550/arXiv.2205.04222
Silvan Mertes, A. Margraf, Steffen Geinitz, Elisabeth André
Visual inspection software has become a key factor in the manufacturing industry for quality control and process monitoring. Semantic segmentation models have gained importance since they allow for more precise examination. These models, however, require large image datasets in order to achieve a fair accuracy level. In some cases, training data is sparse or lacks sufficient annotation, a fact that especially applies to highly specialized production environments. Data augmentation represents a common strategy to extend the dataset. Still, it only varies the images within a narrow range. In this article, a novel strategy is proposed to augment small image datasets. The approach is applied to surface monitoring of carbon fibers, a specific industry use case. We apply two different methods to create binary labels: a problem-tailored trigonometric function and a WGAN model. Afterwards, the labels are translated into color images using pix2pix and used to train a U-Net. The results suggest that the trigonometric function is superior to the WGAN model. However, a closer examination of the resulting images indicates that WGAN and image-to-image translation achieve good segmentation results and deviate only slightly from traditional data augmentation. In summary, this study examines an industry application of data synthesis using generative adversarial networks and explores its potential for monitoring systems in production environments. Keywords: Image-to-Image Translation, Carbon Fiber, Data Augmentation, Computer Vision, Industrial Monitoring, Adversarial Learning.
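The abstract does not spell out the problem-tailored trigonometric function, so the following is only a minimal sketch of how binary label masks with fiber-like streaks could be synthesized from a sum of sine waves; the function name `synthetic_fiber_mask`, its wave parameters, and the thresholding scheme are illustrative assumptions, not the authors' method.

```python
import numpy as np

def synthetic_fiber_mask(height=256, width=256, n_waves=6, thickness=0.15, seed=None):
    """Generate a binary label mask of wavy, fiber-like streaks.

    The foreground is the super-level set of a sum of randomly oriented,
    randomly phased sine waves, loosely mimicking elongated surface structures.
    """
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:height, 0:width].astype(np.float64)
    field = np.zeros((height, width))
    for _ in range(n_waves):
        freq = rng.uniform(0.02, 0.08)      # spatial frequency of the streak pattern
        angle = rng.uniform(0, np.pi)       # streak orientation
        phase = rng.uniform(0, 2 * np.pi)
        # project pixel coordinates onto the streak direction and add a sine wave
        proj = x * np.cos(angle) + y * np.sin(angle)
        field += np.sin(2 * np.pi * freq * proj + phase)
    # threshold so that roughly `thickness` of the pixels become foreground
    cutoff = np.quantile(field, 1.0 - thickness)
    return (field > cutoff).astype(np.uint8)

mask = synthetic_fiber_mask(seed=42)
print(mask.shape, mask.mean())  # foreground fraction is approximately `thickness`
```

Masks generated this way (or by a WGAN) would then serve as the pix2pix input domain, with the translated color images and their masks forming the augmented U-Net training set.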
Citations: 1
Intercategorical Label Interpolation for Emotional Face Generation with Conditional Generative Adversarial Networks
Pub Date : 2022-04-26 DOI: 10.48550/arXiv.2204.12237
Silvan Mertes, Dominik Schiller, F. Lingenfelser, Thomas Kiderle, Valentin Kroner, Lama Diab, Elisabeth André
Generative adversarial networks offer the possibility to generate deceptively real images that are almost indistinguishable from actual photographs. Such systems, however, rely on the presence of large datasets to realistically replicate the corresponding domain. This is especially a problem if not only random new images are to be generated, but specific (continuous) features are to be co-modeled. A particularly important use case in Human-Computer Interaction (HCI) research is the generation of emotional images of human faces, which can serve various applications such as the automatic generation of avatars. The problem here lies in the availability of training data. Most suitable datasets for this task rely on categorical emotion models and therefore feature only discrete annotation labels. This greatly hinders the learning and modeling of smooth transitions between displayed affective states. To overcome this challenge, we explore the potential of label interpolation to enhance networks trained on categorical datasets with the ability to generate images conditioned on continuous features.
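The abstract does not describe the generator architecture or the label encoding in detail, so the snippet below is only a minimal sketch of the label-interpolation idea: one-hot emotion labels are linearly blended and fed to a conditional generator. The toy `CondGenerator`, the class indices, and all dimensions are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

# Minimal stand-in for a conditional generator: it concatenates a latent vector
# with a (possibly interpolated) label vector and maps it to a flat image.
class CondGenerator(nn.Module):
    def __init__(self, latent_dim=64, n_classes=7, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 256),
            nn.ReLU(),
            nn.Linear(256, img_pixels),
            nn.Tanh(),
        )

    def forward(self, z, label_vec):
        return self.net(torch.cat([z, label_vec], dim=1))

def interpolate_labels(source, target, steps):
    """Linearly blend two one-hot label vectors into `steps` soft labels."""
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)   # (steps, 1)
    return (1 - alphas) * source + alphas * target          # (steps, n_classes)

n_classes, latent_dim, steps = 7, 64, 5
gen = CondGenerator(latent_dim, n_classes)

neutral = torch.eye(n_classes)[0].unsqueeze(0)   # hypothetical "neutral" class
happy = torch.eye(n_classes)[3].unsqueeze(0)     # hypothetical "happiness" class
labels = interpolate_labels(neutral, happy, steps)

z = torch.randn(1, latent_dim).repeat(steps, 1)  # same latent code for all steps
images = gen(z, labels)                          # (steps, 64*64): a smooth transition
print(images.shape)
```

Keeping the latent code fixed while only the label vector moves along the interpolation path is what lets a trained conditional generator render a gradual change of the displayed emotion.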
Citations: 0
Reliable Classification of Images by Calculating Their Credibility Using a Layer-Wise Activation Cluster Analysis of CNNs
Pub Date : 2022-01-01 DOI: 10.1007/978-3-031-37317-6_3
Daniel Lehmann, M. Ebner
{"title":"Reliable Classification of Images by Calculating Their Credibility Using a Layer-Wise Activation Cluster Analysis of CNNs","authors":"Daniel Lehmann, M. Ebner","doi":"10.1007/978-3-031-37317-6_3","DOIUrl":"https://doi.org/10.1007/978-3-031-37317-6_3","url":null,"abstract":"","PeriodicalId":88612,"journal":{"name":"News. Phi Delta Epsilon","volume":"50 1","pages":"33-55"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84586972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Forecast of Dengue Cases based on the Deep Learning Approach: A Case Study for a Brazilian City
Pub Date : 2022-01-01 DOI: 10.5220/0011135500003277
L. S. D. Souza, S. N. Alves-Souza, L. V. L. Filgueiras, L. Velloso, M. F. Carvalho, Luciano Garcia, Marcia Ito, J. Jarske, T. L. Santos, H. Fernandes, Gabriela Araújo, Wesley Barbosa
{"title":"Forecast of Dengue Cases based on the Deep Learning Approach: A Case Study for a Brazilian City","authors":"L. S. D. Souza, S. N. Alves-Souza, L. V. L. Filgueiras, L. Velloso, M. F. Carvalho, Luciano Garcia, Marcia Ito, J. Jarske, T. L. Santos, H. Fernandes, Gabriela Araújo, Wesley Barbosa","doi":"10.5220/0011135500003277","DOIUrl":"https://doi.org/10.5220/0011135500003277","url":null,"abstract":"","PeriodicalId":88612,"journal":{"name":"News. Phi Delta Epsilon","volume":"57 1","pages":"71-76"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87700410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic UML Defects Detection based on Image of Diagram
Pub Date : 2022-01-01 DOI: 10.5220/0011316900003277
Murielle Lokonon, V. R. Houndji
Unified Modeling Language (UML) is a standardized modeling language used to design software systems. However, software engineering learners often have difficulties understanding UML and often repeat the same mistakes. Several solutions automatically correct UML diagrams, but they are generally restricted to the modeling tool used or need teachers' intervention to provide exercises, answers, and other rules to consider for diagram correction. This paper proposes a tool that allows the automatic correction of UML diagrams by taking an image as input. The aim is to help UML practitioners get automatic feedback on their diagrams regardless of how they have represented them. We have conducted our experiments on use case diagrams. We first built a dataset of images of the elements most commonly encountered in use case diagrams. Then, based on this dataset, we trained several machine learning models using the Detectron2 library developed by Facebook AI Research (FAIR). Finally, we used the best-performing model and a predefined list of errors to set up a tool that can syntactically correct any use case diagram with relatively good precision. Thanks to its genericity, this tool is easier and more practical to use than state-of-the-art UML diagram correction systems.
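The abstract names Detectron2 but not the specific detector configuration or label set, so the following is only a minimal inference sketch under assumptions: the Faster R-CNN model-zoo config, the `UML_CLASSES` list, and the weights path are illustrative placeholders, not details from the paper.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Hypothetical UML element classes; the paper's exact label set is not given.
UML_CLASSES = ["actor", "use_case", "association", "system_boundary"]

cfg = get_cfg()
# A standard object-detection config from the Detectron2 model zoo.
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(UML_CLASSES)
# Assumed path to weights fine-tuned on the UML element dataset.
cfg.MODEL.WEIGHTS = "output/model_final.pth"
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.DEVICE = "cpu"

predictor = DefaultPredictor(cfg)
image = cv2.imread("use_case_diagram.png")      # a diagram image as input
instances = predictor(image)["instances"]

# The detected elements could then be checked against a predefined list of
# syntactic errors (e.g. a use case not connected to any actor).
for box, cls in zip(instances.pred_boxes, instances.pred_classes):
    print(UML_CLASSES[int(cls)], box.tolist())
```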
{"title":"Automatic UML Defects Detection based on Image of Diagram","authors":"Murielle Lokonon, V. R. Houndji","doi":"10.5220/0011316900003277","DOIUrl":"https://doi.org/10.5220/0011316900003277","url":null,"abstract":": Unified Modeling Language (UML) is a standardized modeling language used to design software systems. However, software engineering learners often have difficulties understanding UML and often repeat the same mistakes. Several solutions automatically correct UML diagrams. These solutions are generally restricted to the modeling tool used or need teachers’ intervention for providing exercises, answers, and other rules to consider for diagrams corrections. This paper proposes a tool that allows the automatic correction of UML diagrams by taking an image as input. The aim is to help UML practicers get automatic feedback on their diagrams regardless of how they have represented them. We have conducted our experiments on the use case diagrams. We have first built a dataset of images of the most elements encountered in the use case diagrams. Then, based on this dataset, we have trained some machine learning models using the Detectron2 library developed by Facebook AI Research (FAIR). Finally, we have used the model with the best performances and a predefined list of errors to set up a tool that can syntactically correct any use case diagram with relatively good precision. Thanks to its genericity, the use of this tool is easier and more practical than the state-of-the-art UML diagrams correction systems.","PeriodicalId":88612,"journal":{"name":"News. Phi Delta Epsilon","volume":"4 1","pages":"193-198"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78420768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-time Distance Measurement in a 2D Image on Hardware with Limited Resources for Low-power IoT Devices (Radar Control System)
Pub Date : 2022-01-01 DOI: 10.5220/0011188100003277
Jurij Kuzmic, G. Rudolph
{"title":"Real-time Distance Measurement in a 2D Image on Hardware with Limited Resources for Low-power IoT Devices (Radar Control System)","authors":"Jurij Kuzmic, G. Rudolph","doi":"10.5220/0011188100003277","DOIUrl":"https://doi.org/10.5220/0011188100003277","url":null,"abstract":"","PeriodicalId":88612,"journal":{"name":"News. Phi Delta Epsilon","volume":"4 1","pages":"94-101"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72749494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Active Collection of Well-Being and Health Data in Mobile Devices
Pub Date : 2022-01-01 DOI: 10.1007/978-3-031-37317-6_2
João Marques, Francisco Faria, Rita Machado, Heitor Cardoso, Alexandre Bernardino, Plinio Moreno
{"title":"Active Collection of Well-Being and Health Data in Mobile Devices","authors":"João Marques, Francisco Faria, Rita Machado, Heitor Cardoso, Alexandre Bernardino, Plinio Moreno","doi":"10.1007/978-3-031-37317-6_2","DOIUrl":"https://doi.org/10.1007/978-3-031-37317-6_2","url":null,"abstract":"","PeriodicalId":88612,"journal":{"name":"News. Phi Delta Epsilon","volume":"17 1","pages":"17-32"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89799122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Creating an Automatic Road Sign Inventory System using a Fully Deep Learning-based Approach
Pub Date : 2022-01-01 DOI: 10.5220/0011266100003277
Gabriele Galatolo, Matteo Papi, Andrea Spinelli, Guglielmo Giomi, A. Zedda, M. Calderisi
Some road sections are a veritable forest of road signs: just think how many indications you can come across on an urban or extra-urban route, near a construction site or a road diversion. The automatic recognition of vertical traffic signs is an extremely useful task in the automotive industry with many practical applications, such as supporting the driver with an in-car advisory system or creating a register of the signs along a particular road section to speed up maintenance and replacement of installations. Recent developments in deep learning have brought huge progress in image processing, which has triggered successful applications like traffic sign recognition (TSR). TSR is a specific image processing task in which real traffic scenes (images or video frames taken from vehicle cameras under uncontrolled lighting and occlusion conditions) are processed in order to detect and recognize the traffic signs within them. Traffic sign recognition is a relatively recent technology facilitated by the Vienna Convention on Road Signs and Signals of 1968: during that international meeting, it was decided to standardize traffic signs so that they could be recognised more easily abroad. Finally, this work summarizes our proposal of a practical pipeline for the development of automatic traffic sign recognition software.
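The abstract describes TSR as detecting and then recognizing signs in camera frames but does not name concrete models, so the code below is only a hedged sketch of such a two-stage pipeline; the torchvision Faster R-CNN detector, the MobileNetV3 classifier, `N_SIGN_CLASSES`, and the crop size are assumptions for illustration, not the authors' pipeline.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor, resized_crop

# Stage 1: a generic object detector (pre-trained weights stand in for a
# detector fine-tuned on traffic-sign data).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

# Stage 2: a lightweight classifier that assigns a sign class to each crop.
# N_SIGN_CLASSES and any trained weights are assumptions for this sketch.
N_SIGN_CLASSES = 43
classifier = torchvision.models.mobilenet_v3_small(num_classes=N_SIGN_CLASSES)
classifier.eval()

def recognize_signs(frame, score_thresh=0.6):
    """Detect candidate regions in a camera frame, then classify each crop."""
    img = to_tensor(frame)                      # HWC image -> CHW float in [0, 1]
    with torch.no_grad():
        detections = detector([img])[0]
        results = []
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score < score_thresh:
                continue
            x1, y1, x2, y2 = box.int().tolist()
            crop = resized_crop(img, y1, x1, y2 - y1, x2 - x1, [64, 64])
            logits = classifier(crop.unsqueeze(0))
            results.append((box.tolist(), int(logits.argmax(dim=1))))
    return results
```

Called on a PIL image or HWC NumPy frame, `recognize_signs` returns (box, class index) pairs, which a sign inventory system could then aggregate per road section.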
Citations: 0
A Faster Converging Negative Sampling for the Graph Embedding Process in Community Detection and Link Prediction Tasks
Pub Date : 2022-01-01 DOI: 10.5220/0011142000003277
K. Loumponias, Andreas Kosmatopoulos, T. Tsikrika, S. Vrochidis, Y. Kompatsiaris
{"title":"A Faster Converging Negative Sampling for the Graph Embedding Process in Community Detection and Link Prediction Tasks","authors":"K. Loumponias, Andreas Kosmatopoulos, T. Tsikrika, S. Vrochidis, Y. Kompatsiaris","doi":"10.5220/0011142000003277","DOIUrl":"https://doi.org/10.5220/0011142000003277","url":null,"abstract":"","PeriodicalId":88612,"journal":{"name":"News. Phi Delta Epsilon","volume":"21 1","pages":"86-93"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84417632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluating and Improving RoSELS for Road Surface Extraction from 3D Automotive LiDAR Point Cloud Sequences
Pub Date : 2022-01-01 DOI: 10.1007/978-3-031-37317-6_6
Dhvani Katkoria, Jaya Sreevalsan-Nair
{"title":"Evaluating and Improving RoSELS for Road Surface Extraction from 3D Automotive LiDAR Point Cloud Sequences","authors":"Dhvani Katkoria, Jaya Sreevalsan-Nair","doi":"10.1007/978-3-031-37317-6_6","DOIUrl":"https://doi.org/10.1007/978-3-031-37317-6_6","url":null,"abstract":"","PeriodicalId":88612,"journal":{"name":"News. Phi Delta Epsilon","volume":"58 1","pages":"98-120"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85776931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0