
Latest Publications in IEEE Transactions on Image Processing

Cross-Scope Spatial-Spectral Information Aggregation for Hyperspectral Image Super-Resolution
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1109/tip.2024.3468905
Shi Chen, Lefei Zhang, Liangpei Zhang
Citations: 0
Self-Supervised Subaction Parsing Network for Semi-Supervised Action Quality Assessment
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-07 | DOI: 10.1109/tip.2024.3468870
Kumie Gedamu, Yanli Ji, Yang Yang, Jie Shao, Heng Tao Shen
Citations: 0
Generalizable Deepfake Detection with Phase-Based Motion Analysis
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-20 | DOI: 10.1109/tip.2024.3441821
Ekta Prashnani, Michael Goebel, B. S. Manjunath
Citations: 0
Recalling Unknowns without Losing Precision: An Effective Solution to Large Model-Guided Open World Object Detection
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-18 | DOI: 10.1109/tip.2024.3459589
Yulin He, Wei Chen, Siqi Wang, Tianrui Liu, Meng Wang
Citations: 0
HeightFormer: Explicit Height Modeling without Extra Data for Camera-only 3D Object Detection in Bird’s Eye View
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-09 | DOI: 10.1109/tip.2024.3427701
Yiming Wu, Ruixiang Li, Zequn Qin, Xinhai Zhao, Xi Li
Citations: 0
Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 Computer Science | Pub Date: 2024-04-10 | DOI: 10.1109/tip.2024.3385284
Wenjin Qin, Hailin Wang, Feng Zhang, Weijun Ma, Jianjun Wang, Tingwen Huang
Citations: 0
Relationship-Incremental Scene Graph Generation by a Divide-and-Conquer Pipeline with Feature Adapter
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 Computer Science | Pub Date: 2024-04-08 | DOI: 10.1109/tip.2024.3384096
Xuewei Li, Guangcong Zheng, Yunlong Yu, Naye Ji, Xi Li
Citations: 0
Towards Transparent Deep Image Aesthetics Assessment with Tag-based Content Descriptors.
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 Computer Science | Pub Date: 2023-08-30 | DOI: 10.1109/TIP.2023.3308852
Jingwen Hou, Weisi Lin, Yuming Fang, Haoning Wu, Chaofeng Chen, Liang Liao, Weide Liu

Deep learning approaches for Image Aesthetics Assessment (IAA) have shown promising results in recent years, but the internal mechanisms of these models remain unclear. Previous studies have demonstrated that image aesthetics can be predicted using semantic features, such as pre-trained object classification features. However, these semantic features are learned implicitly, so previous works have not elucidated what the semantic features represent. In this work, we aim to create a more transparent deep learning framework for IAA by introducing explainable semantic features. To achieve this, we propose Tag-based Content Descriptors (TCDs), where each value in a TCD describes the relevance of an image to a human-readable tag referring to a specific type of image content. This allows us to build IAA models from explicit descriptions of image contents. We first propose an explicit matching process that produces TCDs from predefined tags describing image contents. We show that a simple MLP-based IAA model whose TCDs are based only on predefined tags achieves an SRCC of 0.767, comparable to most state-of-the-art methods. However, predefined tags may not suffice to describe every image content the model may encounter, so we further propose an implicit matching process to describe image contents that predefined tags cannot capture. By integrating components obtained from the implicit matching process into TCDs, the IAA model further achieves an SRCC of 0.817, significantly outperforming existing IAA methods. Both the explicit and the implicit matching processes are realized by the proposed TCD generator. To evaluate how well the TCD generator matches images with predefined tags, we also labeled 5101 images with photography-related tags to form a validation set; experimental results show that the proposed TCD generator can meaningfully assign photography-related tags to images.
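
As a rough illustration of the explicit-matching idea, the sketch below computes a TCD as a vector of image-tag relevance scores and feeds it to a small MLP regressor. The cosine-similarity matching, the embedding sizes, the tag count, and names such as explicit_tcd and TCDRegressor are assumptions for the example, not the paper's actual matching model or architecture.

```python
import torch
import torch.nn as nn

class TCDRegressor(nn.Module):
    """Minimal MLP head that regresses an aesthetic score from a
    Tag-based Content Descriptor (TCD) vector. The hidden size is a
    placeholder, not the paper's configuration."""

    def __init__(self, num_tags: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_tags, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, tcd: torch.Tensor) -> torch.Tensor:
        return self.mlp(tcd).squeeze(-1)

def explicit_tcd(image_emb: torch.Tensor, tag_embs: torch.Tensor) -> torch.Tensor:
    """Explicit matching, simplified: one relevance score per predefined
    tag, computed as cosine similarity between image and tag embeddings
    (a stand-in for whatever matching model the paper trains)."""
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    tag_embs = tag_embs / tag_embs.norm(dim=-1, keepdim=True)
    return image_emb @ tag_embs.T  # shape: (batch, num_tags)

# Toy usage with random embeddings standing in for a real image/tag encoder.
image_emb = torch.randn(4, 512)   # 4 images
tag_embs = torch.randn(100, 512)  # 100 predefined tags
tcd = explicit_tcd(image_emb, tag_embs)
scores = TCDRegressor(num_tags=100)(tcd)
print(scores.shape)  # torch.Size([4])
```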

Citations: 0
Field-of-View IoU for Object Detection in 360° Images.
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 Computer Science | Pub Date: 2023-07-21 | DOI: 10.1109/TIP.2023.3296013
Miao Cao, Satoshi Ikehata, Kiyoharu Aizawa

360° cameras have gained popularity over the last few years. In this paper, we propose two fundamental techniques for object detection in 360° images: Field-of-View IoU (FoV-IoU) and 360Augmentation. Although most object detection neural networks designed for perspective images can be applied to 360° images in equirectangular projection (ERP) format, their performance deteriorates owing to the distortion in ERP images. Our method can be readily integrated with existing perspective object detectors and significantly improves their performance. FoV-IoU computes the intersection-over-union of two field-of-view bounding boxes in a spherical image and can be used for training, inference, and evaluation, while 360Augmentation is a data augmentation technique specific to the 360° object detection task that randomly rotates a spherical image to counter the bias introduced by the sphere-to-plane projection. We conduct extensive experiments on the 360° indoor dataset with different types of perspective object detectors and show the consistent effectiveness of our method.
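
To make the FoV-IoU idea concrete, here is a deliberately simplified sketch that treats each field-of-view box as an axis-aligned rectangle in (longitude, latitude) angle space, with the 360° wrap-around handled by re-centering longitudes. The function fov_iou_approx and its box parameterization are illustrative; the paper's exact FoV-IoU handles spherical geometry more carefully, so read this as an approximation of the concept only.

```python
import math

def fov_iou_approx(box1, box2):
    """Approximate FoV-IoU between two field-of-view bounding boxes.

    Each box is (theta, phi, alpha, beta): center longitude, center
    latitude, horizontal FoV angle, vertical FoV angle, in radians.
    Boxes are treated as axis-aligned rectangles in angle space."""
    t1, p1, a1, b1 = box1
    t2, p2, a2, b2 = box2

    # Longitude difference mapped into (-pi, pi] so boxes straddling
    # the +/-180° seam still overlap correctly.
    dt = (t2 - t1 + math.pi) % (2.0 * math.pi) - math.pi

    # Angular areas of the two FoV boxes (planar approximation).
    area1, area2 = a1 * b1, a2 * b2

    # Overlap of the angular extents along longitude and latitude,
    # with box2's longitude re-centered on box1.
    inter_w = min(a1 / 2, dt + a2 / 2) - max(-a1 / 2, dt - a2 / 2)
    inter_h = min(p1 + b1 / 2, p2 + b2 / 2) - max(p1 - b1 / 2, p2 - b2 / 2)
    inter = max(0.0, inter_w) * max(0.0, inter_h)

    union = area1 + area2 - inter
    return inter / union if union > 0 else 0.0

# Two 60°x60° FoV boxes whose centers are 10° apart: large overlap expected.
d = math.radians
print(fov_iou_approx((d(0), d(0), d(60), d(60)),
                     (d(10), d(0), d(60), d(60))))  # ~0.71
```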

Citations: 0
TGFuse: An Infrared and Visible Image Fusion Approach Based on Transformer and Generative Adversarial Network.
IF 10.6 | CAS Tier 1 (Computer Science) | Q1 Computer Science | Pub Date: 2023-05-10 | DOI: 10.1109/TIP.2023.3273451
Dongyu Rao, Tianyang Xu, Xiao-Jun Wu

The end-to-end image fusion framework has achieved promising performance, with dedicated convolutional networks aggregating the multi-modal local appearance. However, long-range dependencies are directly neglected in existing CNN fusion approaches, which impedes balancing the entire image-level perception for complex scenario fusion. In this paper, we therefore propose an infrared and visible image fusion algorithm based on a transformer module and adversarial learning. Motivated by the transformer's global interaction power, we use it to learn effective global fusion relations. In particular, shallow features extracted by a CNN interact in the proposed transformer fusion module to refine the fusion relationship within the spatial scope and across channels simultaneously. In addition, adversarial learning is employed during training to improve output discrimination by imposing competitive consistency with the inputs, reflecting the specific characteristics of infrared and visible images. The experimental results demonstrate the effectiveness of the proposed modules, with clear improvement over the state of the art, generalising a novel transformer-and-adversarial-learning paradigm for the fusion task.
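
A minimal sketch of the spatial-plus-channel attention idea described above follows, assuming PyTorch and placeholder dimensions. The module name FusionTransformer, the simple additive merge of the two modalities, and the block configuration are assumptions for illustration, not TGFuse's actual architecture.

```python
import torch
import torch.nn as nn

class FusionTransformer(nn.Module):
    """Illustrative fusion module: shallow CNN features from the two
    modalities are merged, then refined by self-attention over spatial
    tokens (pixels attend to pixels) and over channel tokens (channels
    attend to channels). Sizes are placeholders."""

    def __init__(self, channels: int = 64, h: int = 32, w: int = 32, heads: int = 4):
        super().__init__()
        self.spatial = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)
        self.channel = nn.TransformerEncoderLayer(
            d_model=h * w, nhead=heads, batch_first=True)

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_ir.shape
        x = feat_ir + feat_vis                 # simple merge of shallow features
        tokens = x.flatten(2).transpose(1, 2)  # (b, h*w, c): one token per pixel
        tokens = self.spatial(tokens)          # spatial-scope attention
        chans = tokens.transpose(1, 2)         # (b, c, h*w): one token per channel
        chans = self.channel(chans)            # cross-channel attention
        return chans.reshape(b, c, h, w)

# Toy usage with random tensors standing in for shallow CNN features.
fuse = FusionTransformer()
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```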

Citations: 0